Neat Tricks in NT 4.0 and Benchmarks

The Lab Guys tell you about some great features in NT 4.0, and they introduce the Lab's new approach to benchmarking.

Joel Sloss

October 31, 1996

The new NT 4.0 Explorer interface has some neat features you'll want to learn. Table 1 lists these navigation tricks.

This capability is not new, but do you know that in User Manager, you can assign rights and permissions to groups of users by selecting all their names and then performing the administration functions? Beware, though! This action will overwrite any pre-existing permission setups (i.e., if certain users have certain rights, you will erase those rights in favor of the new attributes--this action is handy, though, for initial setups of numerous users).

Changing Security
NT File System (NTFS) is a secure file system and generally easy to administer. To grant or deny specific users and groups the permissions to read, write, execute, delete, change permissions, or take ownership of the selected objects, you simply highlight certain files or directories and select a few menu items.

But what if you already have different permissions set up throughout your file system and you want to remove access from a group or a user without mucking up the existing set? If you glance at the menu items, you might think you have to go through all the files, one at a time, looking at the permissions and removing the particular user or group. This procedure is time-consuming, susceptible to error, and just plain painful with thousands of files.

Lucky for us, Microsoft has provided a command-line utility, cacls.exe, that lives in your %systemroot%\system32 directory. cacls stands for Change Access Control Lists. The utility lets you change the user and group access permissions for files and directories.

You can tighten your system security by removing the Everyone group from all the files and directories without wiping out the permissions that are in place. First, issue the command cacls *.* /T /E /g Administrator:F to ensure that the Administrator account can access all files and directories, just in case you remove your ability to modify them further. Next, issue the command cacls *.* /T /E /r EVERYONE from the command prompt in the root directory to change the permission of every file and directory on your system.
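To recap, here is how the whole sequence looks at a command prompt sitting in the root directory (the C:\ prompt is just an example; adjust for your own drive):

   C:\> cacls *.* /T /E /g Administrator:F
   C:\> cacls *.* /T /E /r EVERYONE

Note that /E is what preserves the existing entries; without it, cacls replaces each file's entire Access Control List with only the entries you specify.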

To see the permissions in place, as in Screen 1, issue the command cacls *.* /T. Add | more to the end of the command line so the listing doesn't scroll by too fast.
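The listing pairs each file or directory with its ACL entries, along these lines (the path and accounts shown here are only illustrative, not output from our test system):

   C:\WINNT\system32\notepad.exe BUILTIN\Administrators:F
                                 NT AUTHORITY\SYSTEM:F
                                 Everyone:R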

As with most of the command-line utilities that Microsoft provides, online Help is available. However, as Screen 2 shows, the Help is a bit cryptic.
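For quick reference, here is a plain-English rundown of the switches the Help screen lists (our paraphrase; run cacls /? on your own system to confirm):

   /T            applies the change to matching files in the current directory and all subdirectories
   /E            edits the ACL instead of replacing it
   /C            continues on access-denied errors
   /G user:perm  grants the specified access (perm is R for Read, C for Change, or F for Full Control)
   /R user       revokes the user's access (valid only with /E)
   /P user:perm  replaces the user's access
   /D user       denies the user access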

Birth of a Benchmark
And now for something completely different: The Lab's mandate is to give readers criteria for judging and selecting software and hardware products. To develop those criteria, we Lab guys test products, just as the labs of other magazines do. However, we've become frustrated with the methods of most testing and the standards for benchmarking. So we've decided to develop meaningful and repeatable benchmarks.

A realistic way to define the word benchmark is as a distortion of reality. No one has discovered a way to accurately duplicate real-world user loads without sitting down at every company in the world and testing each network, server, and workstation in the application mix and their user load.

With that reality check in mind, you can further define a benchmark as a reasonable simulation of average user activity: Shoot for the median, and you stand a good chance of representing a useful cross-section of user load and configurations in the real world.

Some benchmarking strategies, such as TPC, ServerBench, NetBench, AIM, RPMarks, and RDBMarks, are synthetic: They test system performance by generating loads or transactions that do not occur in the natural IS world. These strategies don't always relate cleanly to real, end-user system performance and activity, so you don't get a feel for real environment scalability. You are best served by not using these numbers to extrapolate system performance for your corporate environment. Some published results can mislead you.

The currently fashionable method of reporting system performance is with one number, such as TPC-C. Vendors and magazines use this number to say, "This machine is the fastest computer in the world," or, "This system is the best price/performer."

The trouble is that you frequently see system comparisons in environments that are not even close to real user environments, and evaluations stack systems against each other that have no business competing in the same market. You still see comparisons of $70,000 symmetrical multiprocessing (SMP) platforms to $6000 clones, with the statement that the SMP box performed only marginally better. The implication is that the SMP box is therefore a bad purchase relative to the clone. But how does the SMP box scale at low to high user counts or different system configurations? What other features does it offer that add value for the customer? What is the test using the system for?

Lumping the entire system's performance characterization into one number doesn't really tell you anything, and if you base a buying decision on such a number, you may find that you went down the wrong path. Just because a system scored a high TPC or ServerBench score doesn't mean that when you plug that system into your network, it will scale infinitely. And you have no way of knowing where the new server's architectural bottlenecks are.

Looking at trends--using capacity planning tools and performance monitors--is a much better indicator than raw performance and maximum capacity. You use trends to analyze where potential problems are, where you can improve your system, where an upgrade is necessary, and so forth. Suppose you use the latest TPC-C result from IBM to buy a server with a specific configuration, and you expect it to outperform every other system because it carries this month's highest number. You'll probably find that, up to a certain point under a certain load, this expectation will be fulfilled.

However, any system will fall at certain points, be they at low loads, medium loads, or high loads. You'll be caught unaware, $50,000 in the hole, with a system not performing the way you hoped. And you'll be spending countless hours on the phone with tech support trying to figure out why the system isn't doing what you expected.

A New Way of Thinking
Where does this situation leave you when you're trying to decide what system to buy? We hope we've debunked the urban myths about benchmarking. So what's the alternative?

In the last week of July, several server manufacturers and software vendors gathered at the Windows NT Magazine offices to discuss the answer, and other key industry players contributed to the discussion: IBM, Compaq, Microsoft, Tricord Systems, HP, Digital Equipment, Bluecurve, SQA, and Great Plains Software. The consensus was that synthetic benchmarks are difficult and expensive to run and tough to understand if you try to dig past that single rosy number.

The question remained: How can we performance-test systems and software (the OS and applications) in a real-world fashion that users will accept, understand, and be able to duplicate without incurring high cost and unreasonable complexity? The answer is to use industry-standard tools and automate user activity that represents what users actually do, instead of what they ought to do (e.g., not to test with synthetic routines that move data blocks from one memory location to another). At the same time, we don't want to produce statistical nightmares that let us cook the numbers to say whatever we want. The key is to use simple metrics that answer what people want to know: How long do we have to wait for a process to run, and how many users can we support before system responsiveness disintegrates?

Right now, several tools are available for this kind of test, and more are under development: Microsoft Exchange via LoadSim, SQL Server 6.X via Bluecurve's Dynameasure, application serving with Citrix WinFrame, general accounting with SQA Suite and Great Plains Dynamics C/S+ SQL, and Internet applications with WebStone. To clarify the distinction between the existing industry benchmarks and what we propose to implement, the issue is a matter of what you want to learn: Do you want one performance number that seems easy to grasp but really doesn't give you any context for understanding its implications? Or do you want to know how the system will perform in your environment?

We will tell you exactly what the testing environment is, what we are testing for, where the potential bottlenecks are, how various aspects of the system perform under test, and where the areas of concern are. Then you can decide whether the transactions and applications we test represent what you do on your systems, and you can use this data to determine your direction of research. We can't tell you which system to buy, but we can suggest what to look for and consider and show you how a system performs under conditions that emulate your work environment.

Whether NT scales is a tough question, and Windows NT Magazine is going to work to answer it for you. In this issue, we begin the first round of performance analysis techniques and present test results of MS Exchange Server on Windows NT 4.0 Server (in "Optimizing Exchange to Scale on NT"). Perhaps we raise more questions than we answer, but future articles will address each issue and delve into the intricacies of client/server computing.
