Vendors such as Sybase Inc. and Oracle Corp. have made it possible to migrate database systems from their former sole purview — mainframe computers — to midrange-sized systems from most major hardware vendors.
And with the trend toward open systems, users who once would have been locked into a proprietary solution have the luxury of selecting application servers that can be networked into their current environment using standardized protocols. Price and performance, rather than interoperability issues, now can take precedence in the selection of an application server.
Although the systems are too disparate to be compared in a scoreboard, we grouped them in this review because, in concert with their own specific versions of Sybase System 10, they are all capable of solving the same business problem: the need for a high-performance midrange database server.
Although all of the non-Intel-based systems were RISC-based, they offer different advantages and disadvantages in processor design. And there was no direct correlation between the number of processors and overall comparative performance.
Our tests were designed not only to stress the CPU complex but to place heavy demands on each system’s I/O subsystem (memory and disk). We used our Ritesize IV benchmark, which measures the transaction throughput for different numbers of DOS client stations issuing SQL queries to the server (see Testing Methodology, Page 93).
Query 1 evaluated the role of cache, processors, and memory relative to performance (see benchmark chart, above). Because the data set is small, it is cacheable, and systems with good integration of cache, processor, and memory will show the greatest performance benefit. The two standout machines in this review — the HP and the Compaq — show the results of this careful attention.
To isolate the performance of the disk subsystem, we ran the read-intensive Query 2 by itself on each platform. In addition, Sybase maintains a large amount of control over the disk subsystems and, in fact, prefers to be installed on a raw partition.
The Mixed Query benchmark, which ran four queries to imitate a mixed workload, is the most representative of server performance in general use.
AT&T Global Information
Solutions System 3455
AT&T Global Information Solutions gained a strong entry into the Intel SMP (symmetric multiprocessing) Unix market when it inherited NCR’s line of multiprocessor Pentium servers. However, the $144,210 System 3455 was disappointing in tests.
It would be obvious to blame the AT&T system’s CISC architecture for its poor showing had the Compaq ProLiant not performed so well. More likely at fault is the System 3455’s comparatively weak design in the memory and I/O subsystems.
Compaq Computer Corp. ProLiant 2000
The most impressive performance of the review might well be that of the ProLiant 2000. The $55,052 ProLiant offers good price relative to performance, and its familiar hardware platform may make it the best choice for users moving up from the traditional PC world.
Compaq has done an excellent job integrating processor, cache, memory, and I/O. The ProLiant achieves performance competitive with high-end RISC technology with what some might call “commodity” hardware. But while some of the bits and pieces of the ProLiant can be found in any number of Intel-based server boxes, Compaq’s additional engineering makes the ProLiant more than just the sum of its parts.
With a blazingly fast disk I/O subsystem and by using Compaq’s TriFlex architecture to best advantage, the ProLiant’s performance was competitive with much more expensive and supposedly more advanced technologies.
Data General Corp. Aviion 8500
A lot of processors and a fast external disk array allowed the Aviion 8500 to virtually tie the ProLiant for second place in our Mixed Query test. Hardware RAID 0 striping in the Aviion external array helped the six 88110 processors, each with a 1M-byte shared instruction/data cache, achieve 156 tps.
The Aviion posted a respectable 95 tps in Query 1, and the machine’s external disk array was able to perform a steady 27 tps across the workload in Query 2.
The 8500 came equipped with 10 expansion slots, three of which were occupied by dual-CPU cards, two by the I/O controllers, and one with 256M bytes of memory. Five standard VME expansion slots are included in this mix.
Digital Equipment Corp. DEC 3000 Model 800S
The $42,450 DEC 3000 performed adequately in PC Week Labs’ tests, but we found nothing outstanding about the system. The DEC 3000 runs one of the few implementations of OSF/1 Unix, the Open Software Foundation’s open-system standard, which is built on the Mach kernel rather than on SVR4.
Hewlett-Packard Co. HP 9000 Model 800/G70
With excellent performance and scalability that reflects well upon the rest of the product line, the $102,000 HP 9000 offers good price relative to performance and a reliable, traditional approach to the midrange Unix market. The HP system showed good scaling when we ran comparison tests in one- and two-processor configurations.
Each of the HP 9000’s two PA-7100 RISC CPUs has a dedicated 1M-byte instruction cache and a 1M-byte data cache. Unlike most RISC designs, HP’s Precision Architecture is optimized for very fast integer operations, a common need in commercial applications. RISC is traditionally known for fast floating-point performance, which is useful in scientific environments but much less so in the business world.
The HP system is also equipped with a high-performance processor memory bus, which uses a 32-bit multiplexed address/data bus to handle the data transfer between processors and memory. Memory access is through a 72-bit wide channel capable of a maximum throughput of 384M bytes per second.
With only four I/O slots, the G Class HP server is in the middle of the 9000 product line. We could use only three network segments because the fourth channel was needed for the disk controller.
HP uses its own version of Unix — HP-UX. Currently at Version 9.04, HP-UX is based on the earlier SVR3 but also incorporates selected features from BSD Unix 4.3.
HP-UX provides SMP support and offers a Logical Volume Manager, which could have made it easier for us to distribute our database tables across multiple spindles. However, HP chose instead to use Sybase’s manual software striping, a testimony to the company’s confidence in the performance of its hard drives.
IBM RS/6000 POWERserver 590
The Power2 processor can dispatch six instructions simultaneously under ideal conditions. But its complex manufacturing process drives up system cost, and maintaining coordination between all the processor modules makes it necessary to limit clock speeds (currently to 66MHz).
In the Query 1 test, the IBM system was the most impressive performer, reaching 121 tps with a single processor. Only the dual-processor HP 9000 did better — 145 tps. IBM’s level of performance was significantly higher in this test than any of the other systems, a notable feat considering that the single Power2 system beat the six-processor Aviion, which reached 95 tps, and the dual-Pentium ProLiant, which peaked at 90 tps.
The database consisted of two tables: a Product table with 1.2 million rows and a Part table with 4.8 million rows. Each Product row had four corresponding Part rows. The database was set up in such a way that each workstation accessed its own subset of the database, and there was no contention for access to the rows of these tables. Both tables were indexed.
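The no-contention layout is easy to express in code. The sketch below is illustrative only (it is not the actual Ritesize harness); it computes each station's private slice of the two tables from the row counts given above:

```python
# Sketch of the benchmark's no-contention partitioning: 1.2 million Product
# rows and 4.8 million Part rows (four Part rows per Product row), divided
# so that each client station reads only its own slice of each table.
# The helper function is illustrative, not the actual benchmark code.

PRODUCT_ROWS = 1_200_000
PART_ROWS = 4_800_000
PARTS_PER_PRODUCT = PART_ROWS // PRODUCT_ROWS  # 4

def station_slice(station, n_stations, total_rows):
    """Return the (start, end) row range owned by one station (end exclusive)."""
    per_station = total_rows // n_stations
    start = station * per_station
    return start, start + per_station
```

With 60 stations, for example, each station owns 20,000 Product rows and 80,000 Part rows, and no two stations' ranges overlap — hence no contention for rows.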
On the multiprocessor machines we also ran tests to show the scaling effect of running the Mixed Query with different numbers of processors. Multiple processors were turned on and off using Sybase’s internal facilities for running on multiple engines.
The benchmark starts off with one station issuing the queries to the server for 10 minutes. The first 3 minutes, 45 seconds were not timed and were intended to allow the cache to stabilize itself. During the next 5 minutes we timed the number of transactions executed. This was followed by a non-timed 1 minute, 15 seconds. This procedure was then repeated with four stations issuing queries, then eight, then 12, and so on up to 60. There was a 30-second quiet period between the iterations.
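The run schedule above can be sketched as a simple loop. The times and station counts come from the description; the code itself is illustrative:

```python
# Each iteration is a 10-minute window (3:45 warm-up, 5:00 timed, 1:15
# cool-down) followed by a 30-second quiet period, with the station count
# stepping 1, then 4, 8, 12 ... up to 60.

WARMUP_S, TIMED_S, COOLDOWN_S, QUIET_S = 225, 300, 75, 30

def station_counts():
    return [1] + list(range(4, 61, 4))

def schedule():
    """Yield (stations, start_offset_seconds) for each timed measurement."""
    t = 0
    for n in station_counts():
        yield n, t + WARMUP_S  # the 5-minute timed count begins here
        t += WARMUP_S + TIMED_S + COOLDOWN_S + QUIET_S
```

Note that the three phases of each iteration sum to exactly the 10-minute window, and the full run comprises 16 iterations.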
In an attempt to maintain a level playing field for all the vendors, we tuned the machines in the same basic way. Sybase has parameters that are well-documented and configurable by the database administrator. Using the sp_configure stored procedure, Sybase was allocated 200M bytes of the available 256M bytes. We also lowered procedure cache from the default 20 percent to only 5 percent because a very small number of stored procedures were used in the test. Therefore, 95 percent of the 200M bytes could be allocated to data cache. Vendor-specific parameters were limited to those recommended as standard in the Sybase documentation.
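The arithmetic behind that configuration is straightforward. (sp_configure actually takes its memory value in 2K-byte pages; the page conversion below is shown only for illustration.)

```python
# Memory split used in the tests: Sybase gets 200M bytes of the 256M bytes
# installed; procedure cache is cut from the default 20 percent to 5
# percent, leaving 95 percent of Sybase's allocation for data cache.

TOTAL_MB = 256
SYBASE_MB = 200
PROC_CACHE_PCT = 5

data_cache_mb = SYBASE_MB * (100 - PROC_CACHE_PCT) / 100  # 190M bytes
sybase_2k_pages = SYBASE_MB * 1024 // 2                   # value sp_configure would take
```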
Installation involved examining the configuration sent to us by the vendor and deciding on the most efficient layout of the tables associated with our benchmark. In all cases the transaction log was left on a separate drive, as would be the case in any real database installation. This is necessary because the log is the sole means of recovering a database if some anomaly occurs in its structure or operation.
We limited vendors to spreading the Product and Part table over a maximum of four spindles each. The database tables were set out on raw partitions so that Sybase could itself control the consistency of the database. The tempdb database (workspace area) was allocated 60M bytes. Where possible, we put the indexes and tempdb on separate spindles.
Hardware striping (on the lines of RAID 0) was permitted for systems that offered battery backup to ensure that any write caches would be written to disk before power was no longer available to the system. If hardware striping was not available, we did a limited amount of manual striping to ensure that all four spindles associated with the four tables were being hit evenly before maximum throughput was achieved.
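Manual striping of this sort amounts to dealing database segments out round-robin across the spindles. A minimal sketch, with made-up segment names:

```python
# Round-robin assignment of segments to spindles so that reads hit all
# drives evenly. Segment names here are invented for illustration; they
# do not correspond to actual Sybase device names.

def stripe(segments, spindles=4):
    """Map each segment to a spindle number in round-robin order."""
    return {seg: i % spindles for i, seg in enumerate(segments)}

layout = stripe([f"part_seg{i}" for i in range(8)])
# Each of the four spindles ends up holding the same number of segments.
```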