QPHP.net

The language (and machines) the Internet is built on.

Archive for the 'Uncategorized' Category

When Developing Software, Take It Slow, Man!

In the 1980s, a decade in which it was impossible to tell the actors from the politicians, it was fitting that the Macintosh gave anyone with a little cash the ability to produce absolutely gorgeous documents with absolutely no content.

Now rapid software-development tools bring the same potential to smart developers. If you are quite reasonably looking to such tools to reduce your software backlogs, be careful: It’s as easy to abuse them as it is to use them well.

We’ve seen such abuses up close, and they’re scary. Here are a few of the symptoms. If you spot them in your developers, take action quickly.

All interface, no content. You may have seen this one before. The tool lends itself to creating graphical interfaces, so that’s what the developer does first. The interface looks great, but none of the underlying code is present.…

posted by Coder Carl in Uncategorized and have No Comments

Server Construction That Replaced Mainframes Still Effective

Corporations are moving to multiprocessor servers from two directions. Some are stepping down from outdated and expensive mainframe environments, while others are stepping up from the slightly souped-up PCs they’ve been using as file servers.

Either way, buyers seek out power, scalability and fault tolerance, though not necessarily in that order.

“Super” servers flex their multiple processors in one of two architectures. Asymmetric multiprocessing dedicates each processor to a specific task. In symmetric multiprocessing, a more advanced and costlier technology, tasks are distributed to whichever processor is available.
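To make the contrast concrete, here is a minimal Python sketch of the scheduling difference. It is a toy simulation of my own, not code from the article or a real operating system: the task tuples, the "disk"/"net" task kinds and the function names are invented, queues stand in for work waiting on each processor, and one loop pass stands in for one time slice.

```python
from collections import deque

def simulate_amp(tasks, kinds=("disk", "net")):
    """Asymmetric: each processor owns one task type and idles if its queue is empty."""
    queues = {k: deque(t for t in tasks if t[0] == k) for k in kinds}
    timeline = []
    while any(queues.values()):
        for kind, q in queues.items():            # one pass = one time slice per processor
            timeline.append((f"cpu[{kind}]", q.popleft() if q else "idle"))
    return timeline

def simulate_smp(tasks, n_cpus=2):
    """Symmetric: any free processor takes the next task from one shared queue."""
    shared = deque(tasks)
    timeline = []
    while shared:
        for cpu in range(n_cpus):                 # each free CPU grabs whatever is next
            timeline.append((f"cpu{cpu}", shared.popleft() if shared else "idle"))
    return timeline

jobs = [("disk", i) for i in range(4)] + [("net", 0)]
print(simulate_amp(jobs))    # the "net" processor sits idle while "disk" work backs up
print(simulate_smp(jobs))    # both processors share the disk backlog
```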

But some network operating systems, most notably Novell Inc.’s NetWare, do not support more than one processor. So compatibility with network operating systems is as important to super-server buyers as multiprocessing power is.

As part of its conversion from an IBM 4381 mainframe to a networked-PC …

posted by Coder Carl in Uncategorized and have Comment (1)

The Birth Of Acrobat And Adobe’s Screw-ups

For a company that typically shies away from preannouncing products, Adobe Systems Inc.’s recent formal unveiling of its Acrobat document-interchange technology seems a little out of character.

While almost everyone agrees that Adobe is working on an important technology, the hoopla over the announcement turned out to be little more than a public relations effort.

Yes, Adobe unveiled the formal name of the technology — Acrobat — performed a “live” demonstration and announced two components, Acrobat Viewer and Distiller, which will be delivered within six months. But the Mountain View, Calif., company has previewed the technology, code-named Carousel, publicly for the past year, so most of this was not news.

As one Adobe official put it, “This was meant to bring the uninitiated up to date.”

So why the big splash for a technology that has been public knowledge for quite some time? The …

posted by Coder Carl in Uncategorized and have No Comments

Computers Aren’t The Math Wizards We Think They Are

The myth of the computer’s math prowess runs so deep that it’s even built into the name. The verb “compute” comes originally from a Latin root that means “to think,” but during the last 400 years the English word “computation” has become almost synonymous with doing arithmetic.

The less we know about computers, the more likely we are to think of them as giant math machines — a belief that leads to excessive trust in computers’ mathematical abilities, despite their potential for making fundamental errors. As with almost every kind of computer problem, these errors are a result of the decisions made by programmers seeking to find the best combination of low cost, high speed and accuracy of results.
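A small example makes the point. The sketch below is mine, not the article's; it assumes nothing beyond standard binary floating point, the speed-for-exactness trade-off that most programming languages inherit from the hardware.

```python
from decimal import Decimal

# Binary floating point is fast and cheap, but 0.1 has no exact binary
# representation, so tiny errors creep into ordinary arithmetic.
total = sum(0.1 for _ in range(10))
print(total)             # 0.9999999999999999 -- not 1.0
print(total == 1.0)      # False

# Trading speed for exactness: decimal arithmetic gets it right, more slowly.
exact = sum(Decimal("0.1") for _ in range(10))
print(exact)             # 1.0
```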

Such trade-offs are impossible to avoid, but it’s important for both the application developer and the user to be aware that such decisions are being …

posted by Coder Carl in Uncategorized and have Comments (3)

Early Multiprocessing: A Killer App

While lack of software support for multiprocessing has deterred buyers from investing in expensive multiprocessing servers, Schatt and other analysts forecast better days ahead. The proliferation and expansion of LANs and the demands of the mission-critical applications running on them will spur buyers and advance the market over the long run, they said.

“I think the important factor is to look at the big picture. Downsizing [and] enterprise networking [are] definitely going to be a force,” Schatt said. “Probably 1994 will be the year of the super server. We’ll see significant growth in 1993 and 1994.”

According to Schatt, overly optimistic expectations have contributed to the perception that the market has not fulfilled its promise. Market researchers had estimated a $600 million to $700 million market by now, Schatt said, “and it’s not. It’s a $400 million market.”

The sagging economy is one reason …

posted by Coder Carl in Uncategorized and have No Comments

CDPD: It Died So Faster Cell Service Could Live

Nearly 100 years after Guglielmo Marconi successfully transmitted and received electronic signals via his wireless telegraphic invention, a budding wireless network is poised to become the mobile data transmission route of the future.

The Cellular Digital Packet Data (CDPD) network is designed to let cellular subscribers send digital data from mobile PCs over existing cellular networks. Mixing a myriad of technologies, including the ability to send packets of data over cellular airwaves instead of traditional analog transmission, CDPD could provide PC users with quick and reliable data transmission, analysts said.
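As a rough illustration of the packet idea only, here is a toy sketch of my own; the functions and the tuple layout are invented and are not CDPD's actual frame format or protocol. The message is chopped into small, individually numbered pieces that can be sent in bursts and reassembled at the other end.

```python
def packetize(message: bytes, payload_size: int = 32):
    """Split a message into (sequence_number, total_packets, payload) tuples."""
    chunks = [message[i:i + payload_size] for i in range(0, len(message), payload_size)]
    return [(seq, len(chunks), chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Rebuild the original message even if packets arrive out of order."""
    ordered = sorted(packets, key=lambda p: p[0])
    return b"".join(payload for _, _, payload in ordered)

msg = b"Mobile data sent in short bursts over the existing cellular network."
pkts = packetize(msg)
assert reassemble(reversed(pkts)) == msg    # arrival order does not matter
```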

“CDPD is going to happen and it will be a tremendous challenge for RAM [Mobile Data Inc.] and Ardis,” said Paul Callahan, senior industry analyst for Forrester Research Inc., a market-research firm in Cambridge, Mass. “When it’s available, CDPD will basically be as cheap as [RAM Mobile’s] Mobitex or Ardis’ on-line service charge, and …

posted by Coder Carl in Uncategorized and have No Comments

Things To Consider With Data Recovery Services

Have you ever wondered how you can recover files that have already been lost due to problems with your computer or hard drive? This is indeed a frustrating scenario, especially if those files are very important. If you want to retrieve your files, one thing you can do is hire an expert, as they can do the job without running the risk of damaging your hard drive. Aside from finding out about the cost of data recovery services, you also need to obtain additional information, such as the number of years the company has been operating and its reputation. The cost alone is not enough to go on; there is other information that will serve as a determining factor in judging the company’s effectiveness.

You should expect data recovery services to be somewhat expensive, because the job is not an easy one. However, if the company can truly be trusted, the price should never be an issue. Ask questions so you will have an idea of whether you are dealing with the right company. The firm should also give you some idea of the techniques it is going to use. It is also important to ask for references so you can find out about its previous work. Read more…

posted by Coder Carl in Uncategorized and have Comments (7)

RAID’s Baby Stages Built A Great Future For Data Storage

Feeding on the proliferation of PC LANs, user interest in RAID — Redundant Arrays of Inexpensive Disks — is building rapidly.

“As the network applications become more critical to the company, you’ve got to take significant steps to make sure that when the network goes down it doesn’t take everything with it,” said Roy Wilsker, manager of end-user servers for Kendall Co., a health-care and adhesives company based in Mansfield, Mass.

Wilsker said one of those steps he is considering is installing RAID products in his network servers.

“It’s believed to be potentially a $5 billion market and there’s not a clear market leader, so everyone’s rushing in,” said Seth Traub, storage-market analyst for International Data Corp., a market-research company based in Framingham, Mass.

Recently unveiled products include Micropolis Corp.’s Raidion disk-array subsystems, AST Research Inc.’s array controllers for its server line and IBM’s AS/400 RAID array.

In order to sift through the profusion of recently released RAID products, users like Wilsker need to clearly understand the technology — which lets several disk drives work together to boost reliability and performance — observers said.

“Anybody who is buying anything that’s complex and doesn’t understand it is looking for trouble,” said Joe Molina, chairman of the RAID Advisory Board, which was created four months ago to help clear up some of the confusion.

The Advisory Board was in part the brainchild of Molina, who spent the last decade promoting the Small Computer System Interface (SCSI).

Tired of facing customers with unfamiliar technology, Molina said he left a SCSI marketing job 10 years ago to start Technology Forums. The Lino Lakes, Minn., firm is educating both vendors and users on data storage-related topics.

RAID is currently in a position similar to that of SCSI 10 years ago, according to Molina, and Technology Forums serves as a facilitator for vendors who want to elevate RAID beyond buzzword status.

So far, 24 companies, including IBM, Digital Equipment Corp., NCR Corp. and Seagate Technologies Inc., have signed on as board members.

Closer to being an advocacy group than a standards-setting body, the RAID Advisory Board is trying to sort through the technical fine points that separate RAID products and develop guidelines to make the products more uniform.

For example, the group wants to encourage all disk drive makers to make their drives’ spindle-synchronization mechanism work the same way, Molina said. If they did, RAID developers wouldn’t have to accommodate different spindle-synchronization signals, a bit of re-engineering that can add to a RAID product’s price.

Until standards are set, RAID can mean different things depending on a particular vendor’s point of view. Most vendors look to an academic paper written by professors at the University of California at Berkeley in 1987 to develop their form of RAID.

In that paper, titled “A Case for Redundant Arrays of Inexpensive Disks,” the technology was grouped into several categories. Although RAID categories are called levels, they are not hierarchical.

Simply put, a drive array ties disk drives together so they can share the task of storing data. Should one of the drives fail, other drives in the array are there to keep the data intact. The RAID products spread the data around differently, depending on what type — or level — of technology is employed.

Generally, RAID employs striping, which distributes data evenly across the disks, and mirroring, which makes duplicate copies of data on separate disks.
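As a minimal sketch of those two building blocks (mine, not from the article; real arrays work on disk sectors through a controller, not Python lists, and the function names are invented), striping deals blocks out round-robin while mirroring keeps full duplicate copies:

```python
def stripe(blocks, n_disks):
    """RAID 0-style striping: deal data blocks round-robin across the disks."""
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

def mirror(blocks, n_copies=2):
    """RAID 1-style mirroring: every disk holds a complete copy of the data."""
    return [list(blocks) for _ in range(n_copies)]

data = ["b0", "b1", "b2", "b3", "b4", "b5"]
print(stripe(data, 3))   # [['b0', 'b3'], ['b1', 'b4'], ['b2', 'b5']]
print(mirror(data))      # two identical copies; either one survives a drive failure
```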

Each type of RAID has its own advantages and disadvantages. RAID 5, for example, can cause drives to perform more slowly than RAID levels 0 or 1 because it takes extra time to compute and write error-correction data. However, RAID 5 affords the high level of data protection that many users require for their network servers.
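The error-correction data in question is parity, typically computed by XORing the data blocks in each stripe. The sketch below is my own simplification (real RAID 5 rotates parity across the drives and works at the sector level), but it shows both the extra work on every write and how a lost block is rebuilt:

```python
from functools import reduce

def parity(blocks):
    """XOR the data blocks of a stripe into one parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Reconstruct a single missing block from the survivors plus parity."""
    return parity(surviving_blocks + [parity_block])

stripe_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
p = parity(stripe_blocks)                     # extra computation and an extra write on every update
assert rebuild([stripe_blocks[0], stripe_blocks[2]], p) == stripe_blocks[1]   # drive 2 failed
```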

In some RAID configurations, the drives store data faster together than a single drive alone. So a grouping of less-expensive slower drives can offer greater throughput than a faster, more expensive drive. For example, in some mirrored arrays, the controller reads alternate clusters of files from each drive simultaneously, then pieces the information together and delivers it to the PC. Thus, reading time is cut significantly when two drives are linked through mirroring.
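A toy model of that mirrored-read trick (again my own illustration, with invented drive names and no real timing) shows why it helps: each copy services only alternate clusters, so two drives working in parallel each do roughly half the reading.

```python
def read_interleaved(copy_a, copy_b):
    """Read even-numbered clusters from one mirror and odd-numbered from the other."""
    work = {"drive_a": [], "drive_b": []}
    out = []
    for i in range(len(copy_a)):
        if i % 2 == 0:
            work["drive_a"].append(i)
            out.append(copy_a[i])
        else:
            work["drive_b"].append(i)
            out.append(copy_b[i])
    return out, work

clusters = ["c0", "c1", "c2", "c3"]
out, work = read_interleaved(clusters, list(clusters))   # two identical mirrored copies
print(out)    # ['c0', 'c1', 'c2', 'c3'] -- the full file, pieced back together
print(work)   # {'drive_a': [0, 2], 'drive_b': [1, 3]} -- each drive read only half
```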

However, some vendors implement those RAID levels with slight differences; some support a given level in hardware, and others support a level in software.

Still others have developed their own type of RAID. For example, Storage Computer Corp., of Nashua, N.H., is now selling a patented hardware design it calls RAID 7 (see story, below). The subsystem is the first RAID architecture to implement a truly standards-based data storage system, according to company officials.

Storage Computer Corp.’s president says his company has created a superior RAID product by defying conventional wisdom.

Ted Goodlander isn’t shy about saying that the Nashua, N.H., firm’s RAID 7 storage subsystem doesn’t fit into the six Redundant Arrays of Inexpensive Disks categories followed by most disk-array vendors.

Indeed, Goodlander claimed that Storage Computer (which is known as StorComp) was working on the basic technology for the product long before the publication of the so-called Berkeley papers, an academic work on disk arrays written by three University of California computer-science researchers that is often cited as the foundation of RAID products.

“So many people took that paper and said it was the Holy Grail,” Goodlander said.

Unlike other varieties of RAID, in which the disk drives rotate in sync, StorComp’s RAID 7 subsystem has an asynchronous design, he said. RAID 7 moves the drive heads independently of each other to increase the number of reads and writes that the array controller can handle, Goodlander said.
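A very rough model of that contrast, entirely my own and not StorComp's design, treats a synchronized array as one that devotes every drive to each request in turn, while an asynchronous array gives each drive its own queue so unrelated requests proceed in parallel:

```python
from collections import deque

def lock_step_steps(requests, n_drives):
    """Synchronized heads: every drive takes part in each request, so requests serialize."""
    return len(requests)

def asynchronous_steps(requests, n_drives):
    """Independent heads: each drive works only its own queue of requests."""
    queues = [deque() for _ in range(n_drives)]
    for block, drive in requests:
        queues[drive].append(block)
    return max(len(q) for q in queues)          # finish when the busiest drive is done

reqs = [(block, block % 4) for block in range(8)]   # 8 requests spread over 4 drives
print(lock_step_steps(reqs, 4))      # 8 steps with heads moving in lock step
print(asynchronous_steps(reqs, 4))   # 2 steps with heads moving independently
```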

StorComp’s RAID 7 also uses special algorithms that help prevent the controller’s data cache from becoming saturated. As a result, the company claims its RAID 7 subsystem transfers data two to four times faster than other RAID subsystems while still providing fault tolerance for as much as 141G bytes of data. The only downside to this configuration, Goodlander says, is that the array cannot recover itself in failure scenarios; instead, a RAID data recovery specialist such as Hard Disk Recovery Services must be brought in to rebuild it.

The RAID 7 desktop Read more…

posted by Coder Carl in Uncategorized and have No Comments

Microsoft Showed Its Evil Early On

As word leaked out that Microsoft had included a subset of its E-mail product in beta copies of Windows 4.0, other vendors began screaming bloody murder.

This is exactly what they were afraid of, they say. Microsoft, they insist, needs to be sensitive to its dominant position in the PC business and not add features to its operating systems that threaten vendors selling those functions as add-ons.

There is some logic here. If Microsoft replaced the amiably feeble Windows Write with Word for Windows, I think we could all agree that Redmond had gone too far. Or if it added a gray-scale editor to Windows, we’d think things were getting out of hand.

But E-mail? Of all the things that belong in an operating system — but aren’t in DOS or Windows yet — E-mail is the most obvious missing link, the most needed …

posted by Coder Carl in Uncategorized and have Comments (2)

Oh OSI… So Much Promise

Ten years ago, conventional wisdom said that the computing industry would be hurtling toward wholesale adoption of the OSI standards by now. TCP/IP protocols were positioned as a stepping-stone to OSI. Likewise, SNMP was merely a precursor to the OSI network-management standards that were expected to gain world dominance.

But if you stop and take a look around, it hasn’t happened. In fact, it hasn’t even started to happen. TCP/IP continues to gain market momentum, and products based on OSI (Open Systems Interconnection) standards are few and far between.

In fact, rumblings can be heard within the industry that OSI as a whole is dead, and that TCP/IP, rather than acting as a stepping-stone to standardization, will itself become the standard of choice. Indeed, many users and vendors no longer talk about compliance with the OSI standards, but rather discuss technology implementations such as …

posted by Coder Carl in Uncategorized and have Comments (2)

Borland Destroyed Itself, Frankly

Well, yes, $99 prices do get our attention, don’t they?

But an introductory price under a hundred bucks certainly isn’t going to be the key to whether the Redmond gang succeeds in its effort to muscle into the serious database market — one of only two applications-software areas where it has never been able to compete.

(The other area? Async communications, where Microsoft seven years ago briefly sold one of the worst programs ever shipped. It was named — eerily — Microsoft Access. I think I might have been a little more sensitive to history, Mr. Gates.)

Nope, $99 prices won’t do it. All Gates & Co. are doing with that teaser price is getting our attention and asking us to take a look — in effect, asking us to pick up the production and distribution costs, plus a pence or two for the …

posted by Coder Carl in Uncategorized and have Comment (1)

Data Warehousing Battles Are Harsh

James Curran recently found himself face to face with a data warehousing time bomb that would not be dismantled by any technology fix. The senior vice president of management information services at State Street Bank & Trust Co., in Boston, wanted to change the format for some key financial data, but the users in control were balking. Grounds for a skirmish? Not according to Curran, who instead opted for compromise to keep his data warehousing project from detonating.

After heated debate, Curran agreed to leave this particular data alone. In return, the users promised to show support for the warehouse in other ways that were important to Curran’s group. Specifically, they agreed to start using a new feature that lets users input annotative data, including notes and comments, on financial activity, helping Curran’s staff create a financial intelligence database as a core component of …

posted by Coder Carl in Uncategorized and have No Comments