Certifications: Pass or Fail?
Vendor-controlled certifications have plenty of failings, and one big plus
March 23, 2004
I’ve never been a fan of vendor-controlled certifications, such as the Cisco Certified Network Associate (CCNA) or the MCSE, for a number of reasons. First and most heinous is these certifications’ strong marketing aspect. Remember Microsoft’s threat a few years ago to decertify Windows NT 4.0 MCSEs? Microsoft knew that people would replace NT 4.0 servers slowly (the price Redmond paid for having created a fairly good product in NT 4.0, I guess) and hoped to use certification to change that. Letting people call themselves “certified” only if they’d passed the test for the latest, but not most heavily used, server version would have created legions of “experts” who were actually inadvertent salespeople. (“Why are you using that old NT? I learned in MCSE class the 20 reasons why you’ve got to upgrade!”)

Second is the tests’ format: multiple choice. To create tests that can be administered and graded quickly and automatically, Microsoft provides simplistic answers that are either right or wrong, at least in the vendor’s eyes.
Vendor-specific certifications also tend to require successful applicants to adopt an unrealistic, single-vendor approach to solving problems. For example, the Windows 2000 infrastructure test presented me with a case study wherein a large multisite organization had built its wide-area IP infrastructure out of Win2K servers acting as IP routers, as if the sample organization existed in some parallel dimension in which Cisco Systems, Extreme Networks, Nortel Networks, and other vendors of dedicated router hardware didn’t exist. I quite honestly chuckled aloud at the thought, drawing curious looks from other people in the room.
Despite these shortcomings, finding out your score after being tested on a specific domain of knowledge can give you valuable feedback about how much you still have to learn. I’ve known many struggling would-be network experts who eschewed the exam-cram and braindump-style books and Web sites and instead tried to learn Microsoft networking by playing with a few old computers in a basement. After many hours of hands-on education, these folks risked $125 of their hard-earned money to find out whether they had what it took to be an MCSE. In some cases they failed, but knowing that they’d scored 620 points on a test that required 630 to pass, and that they were therefore so close to their goal, re-energized them to return to their test networks and zero in on the areas they’d felt unsure about during the test. On a second try, these people generally passed the test and came out with real knowledge in the process. If, however, such folks scored, say, 200 points on the first try, they at least had a good idea that the certification game wasn’t for them.
Back in 2002, Microsoft decided to switch to pass/fail score reports. Unfortunately, the company hasn’t reversed that decision. Test metrics are important: they’re a significant source of good feedback for anyone who wants to measure how well they’ve mastered the material. That goes for those who pass as well as those who fail. There’s a difference between what you still have to learn when you pass a test with a score of 700 and when you pass it with a score of 1,000.

Why would any company take that feedback away from its customers? I can come up with only a couple of possible reasons. First, a vendor might want to make it more difficult for people to post questions and answers on braindump Web sites. Someone who has taken a test a few times and walked out with a high score knows that he or she has correctly answered nearly all the questions and, given a good memory, can post a few dozen of them on a braindump site. With a simple passing grade, such a person might not feel so confident, and the answers on such sites might not be so reliable. The vendor could therefore create and recycle a pool of a few hundred questions through tens of thousands of test takers before having to revise them. Second, the vendor might simply not want people to have enough information to challenge the validity of some of the test answers. Whatever the reason, it’s clear that such decisions have nothing to do with making a test a more valuable learning and evaluation tool.
Fortunately, Microsoft’s new crop of tests (beginning with the Windows Server 2003 tests) concentrates on practical, how-to questions and testing methods. I can only hope that the company applies that laudable goal to the tests’ answer and feedback mechanism.