Apple tricks the benchmarks

The first bit…
Jobs reiterated several times during Monday's keynote his claim that Apple's new G5 systems will be the fastest personal computers on the market.

Historically, Apple has validated such claims using a very small number of benchmarks, often just one: a now-ancient benchmark developed by BYTE Magazine called BYTEmark, one which even BYTE suggested could not, by itself, accurately describe a system's performance. Apple nevertheless used BYTEmark to test its own G3 processors, as this Google cache shows.

Apple seemed to break its habit on Monday morning, when Jobs trotted out numbers based on what is considered an industry-standard benchmark suite, SPEC, administered by the Standard Performance Evaluation Corporation. Apple's tests, a copy of which can be found in this PDF document, were performed under contract from Apple by Lionbridge Technologies' VeriTest labs, an independent testing agency.

Tweaks apparently made to optimize Apple’s CPU
Although no benchmark is perfect, SPEC posts a list of submitted scores on its web site, and each listing includes a detailed description of the system configuration. None of the Apple scores, however, has appeared there yet.

As is often the case, Lionbridge appears to have performed the tests to the letter. Those letters, however, are very telling.

In the system configuration for the G5 system, Apple appears to have asked Lionbridge to do quite a bit of tweaking. According to the section titled “Initial Power Mac G5 Configuration for all SPEC CPU2000 Testing,” the following steps were taken:

"Install the Computer Hardware Understanding Development kit (CHUD) version 3.0.0b19. This tool is designed to simplify performance studies of PowerPC Macintosh systems running Mac OS X by providing a set of tools for developers to analyze their applications. CHUD will be available for download after June 23, 2003…

* Using the "Reggie" tool available from CHUD, modify CPU registers to enable memory Read By-pass. As Read requests are speculatively sent to the memory controller, this eliminates the need to wait for the snoop response required in a multiprocessor configuration, thus reducing the time required for a read request.
* Used the command "hwprefetch -8" to enable the maximum of eight hardware pre-fetch streams and disable software-based pre-fetching.
* Installed a high-performance, single-threaded malloc library. This library implementation is geared for speed rather than memory efficiency, and is single-threaded, which makes it unsuitable for many uses. Special provisions are made for very small allocations (less than 4 bytes). This library is accessed through use of the -lstmalloc flag during program…"


And it goes on…
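The malloc tweak quoted above amounts to a link-time substitution. A minimal sketch of what such a build line might look like, assuming the -lstmalloc flag from the VeriTest report (the source and output file names here are hypothetical):

```shell
# Hypothetical build line: link the benchmark against the speed-tuned,
# single-threaded malloc library instead of the default system allocator.
# "-lstmalloc" is the flag named in the VeriTest report; file names are
# illustrative only.
gcc -O3 -o spec_bench spec_bench.c -lstmalloc
```

Because the linker resolves malloc/free from the first library that provides them, this swaps the allocator without any source changes, which is why it shows up only in the build configuration.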

For both the Dell Dimension 8300 and the Dell Precision 650, Apple/VeriTest ran the multi-processor "Rate" benchmarks with hyperthreading DISABLED, but left it ENABLED for the single-processor benchmarks. That is backwards: hyperthreading would have improved the multi-processor "Rate" scores while having little or no effect on the single-processor scores. In either case, this performance-enhancing feature of the Intel processors should not have been disabled.
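For what it's worth, whether hyperthreading is active is something a benchmark report should disclose. On a present-day Linux system it can be inspected from the running OS; this is a modern illustration only, as the 2003 Dell machines toggled hyperthreading in the BIOS:

```shell
# Present-day Linux illustration only -- the 2003 Dell machines toggled
# hyperthreading in the BIOS, not from the running OS.
cat /sys/devices/system/cpu/smt/active   # 1 = SMT/hyperthreading on, 0 = off
```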

And more…

Apple crippled the floating-point performance of the Pentium 4 by setting a compiler option incorrectly: Apple/VeriTest enabled the GCC "-mfpmath=sse" option. A number of people have e-mailed me to say that this option causes GCC to use SSE/SSE2 instructions for floating-point math, but that it is an experimental feature and actually DECREASES performance. They say the regular x87 instructions are faster and should be used for floating-point, not SSE/SSE2.
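The two code-generation paths the commenters are comparing can be selected explicitly with GCC's -mfpmath flag. A sketch of the two compile lines, with hypothetical file names (the flags themselves are real GCC x86 options):

```shell
# Illustrative compile lines only -- file names are hypothetical.
# The option Apple/VeriTest used, forcing scalar SSE/SSE2 floating point:
gcc -O3 -msse2 -mfpmath=sse -o bench_sse bench.c
# The default x87 FPU path that commenters say was faster on GCC of that era:
gcc -O3 -mfpmath=387 -o bench_x87 bench.c
```

Timing the two resulting binaries on the same workload is the straightforward way to check whose claim holds on a given GCC version and CPU.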


Another source for uncovering Apple's inferior system. I always get a chuckle after seeing what the people over at macrumors say about the Macs… poor bastards. Imagine what AMD system you could build w/ $3K, the cost of a dual 2GHz G5 system, and that's w/o the 23" Cinema Display, which will run you another $2K.

Actually, you'll find that if you're doing actual work with the computer (compiling source code, 3D work, video and audio post-production, motion graphics, or serious math), the price for the power you get with a dual G5 setup crushes any Intel or AMD chip or combination of them. There just aren't as many games, and that's all most PC owners care about.
Plus, you wouldn't be stuck using Windows. Although gcc (the compiler) is available for Linux, most people here run Windows, I'm sure.
Windows users can own the new Apple 30" display too, but there's no video card on the PC market that will drive that many pixels yet.

The point of this post was to show that despite many people (like yourself) claiming Apple computer systems were superior to comparably equipped x86 systems, Apple cannot prove it without messing with the test setup. Many benchmarks these days try to mirror real-world applications. Still no reason to drag up threads from over a year ago though.

Perhaps the best way to really compare is to render a huge graphic, say 2 gigs or so, in the same Adobe Photoshop software, once on an Apple and once on a PC, with a good ol' fashioned stopwatch as the benchmark… lol :wink:
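The stopwatch idea is just wall-clock benchmarking. A minimal shell sketch of the same approach, with a stand-in CPU-bound loop where the Photoshop render would go (the loop is illustrative, not the actual workload):

```shell
# Stopwatch-style benchmark: record wall-clock time around the job.
# The counting loop below is a stand-in for the real workload
# (e.g. the huge Photoshop render described above).
start=$(date +%s)
i=0
while [ "$i" -lt 100000 ]; do i=$((i + 1)); done
end=$(date +%s)
echo "elapsed: $((end - start)) seconds"
```

Running the identical job on each machine and comparing elapsed times sidesteps compiler flags and register tweaks entirely, which is exactly the appeal.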

Can't falsify a stopwatch result now, can ya…

I thought it was standard practice for manufacturers to play around with the test so their product comes out on top. Maybe xyacydima is right: let's go back to the good old stopwatch. On the other hand, they'd find a way around that too. I suppose you'll never know what it can really do until you get it home. Maybe rent first, buy later.

Why are people digging up threads of more than a year old? :confused:

Something to do? :confused:
Probably just newbies reading them for the first time who just have to stir up old controversies, not recognising that they're over a year old, and hence Cdfreaks CareFactor = 0 :stuck_out_tongue:

Looking at tonmeister's posts, he must have just searched for "mac" and "apple" and answered every thread they were mentioned in :bigsmile:

Macintosh publicist :wink: