Lab Testing
Product testing in the Lab can be a challenging process.
September 14, 1999
The adventure continues . . .
To bring our readers up-to-date information, we in the Windows NT Magazine Lab often must pry products out of vendors as early in the release cycle as we can. If we want to review early software versions, we must be willing to test beta products. Usually, doing so isn't a problem. If we get a feature-complete product, we can run it through its paces and examine the functionality that makes it unique. We won't performance-test beta software, however, because measuring the speed of a version that might still contain debugging code or isn't yet optimized for performance serves no purpose. Ideally, we start the testing process with the beta code and finish with the release-level code, which lets us double-check our results on the release product and run our performance tests. When the process runs smoothly, you get the product review as quickly as possible.
With software testing, a smooth-running process is essential. Because product-version life cycles run only about 6 months, and our monthly magazine has a 3- to 4-month lead time, we must pursue products aggressively so that we can complete reviews quickly. However, many of the products we try to review are unavailable when we need them or exhibit too many bugs during testing. Even so, we can usually review the product: software bug fixes are comparatively easy to produce, and vendors can release them quickly.
Unfortunately, our approach with software doesn't apply to hardware. Vendors don't want to send out pre-release-level hardware, and we're not interested in testing products that our readers can't purchase. You might think that although this situation would make our hardware reviews less timely, we'd at least be dealing with available and stable products.
If only that were the case. At least 20 to 30 percent of the hardware products we select for testing need some updating by the vendor. Some of the updating is simple; a fairly regular occurrence is that the vendor-supplied standard video card doesn't work correctly with our video benchmark. To solve this problem, we can often use another video card for our testing. Because our readers can buy the alternative configurations we use and we can easily document the changes we make for testing, these situations don't compromise the accuracy of our published reviews.
The real problems with hardware reviews crop up in two other situations. The first is when we and a vendor must ship a system back and forth several times. We don't always know what a vendor has done to address the problems we find, and often the attempted fix doesn't solve them. So, between reshipping a system, setting it up again, rerunning the failed tests, and occasionally discovering that the original problem persists, we find it downright aggravating, if not impossible, to test certain products and publish the results while the products are still shipping.
The second situation is more likely to concern our readers. When we swap hardware and fix systems before we test them, we have a paper trail that documents exactly what we're testing. But when a vendor supplies a fix in the form of a firmware upgrade, describing what we're testing becomes a problem. Unless the upgrade identifies itself in some fashion (e.g., as a BIOS revision number during the boot), describing to our readers exactly what it has changed can be difficult. All we can tell you is that we've had to flash the BIOS or update the firmware to fix a specific problem, and that you'll need to look out for such problems on your own.
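If you want to check which firmware revision your own system reports, the SMBIOS version string is one place to look. The short sketch below is purely illustrative and not part of our Lab procedure: it assumes a present-day environment in which Linux exposes DMI data under /sys/class/dmi/id and Windows provides the wmic utility, and the bios_version helper is a hypothetical name.

# Hypothetical sketch: report the BIOS/firmware revision a system identifies itself with.
# Assumes a modern OS: Linux exposes SMBIOS data under /sys, Windows ships the wmic tool.
import platform
import subprocess
from pathlib import Path

def bios_version() -> str:
    """Return the BIOS version string reported by the firmware, or 'unknown'."""
    if platform.system() == "Windows":
        # Win32_BIOS carries the SMBIOS version string the firmware reports at boot.
        out = subprocess.run(
            ["wmic", "bios", "get", "smbiosbiosversion"],
            capture_output=True, text=True, check=False,
        ).stdout
        lines = [line.strip() for line in out.splitlines() if line.strip()]
        return lines[1] if len(lines) > 1 else "unknown"
    # On Linux, the kernel publishes the same SMBIOS fields under sysfs.
    dmi = Path("/sys/class/dmi/id/bios_version")
    return dmi.read_text().strip() if dmi.exists() else "unknown"

if __name__ == "__main__":
    print("BIOS version:", bios_version())

On an NT-era machine, the equivalent check is simply noting the BIOS revision the system displays during the power-on self-test.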
These challenges make testing in the Lab an adventure. We hope our testing helps improve products before you purchase them. If that's not the case, you can read in our Lab reports about the problems you might encounter.