Windows IT Pro Community Forum, November 2011

Readers respond to a Bill Stewart PowerShell article, Windows IT Pro’s increased cloud coverage, Nate McAlmond’s StorageCraft review, Brian K. Winstead’s real-world Exchange series, and Sean Deuby’s Enterprise Identity column.

ITPro Today Staff

October 26, 2011


PowerShell Help Appreciated

I want to thank Bill Stewart for his article “Auditing 32-Bit and 64-Bit Applications with PowerShell” and the included script. After hours and hours of searching Google and learning PowerShell, I had just about given up when I happened upon the article. Thanks for making this Canuck’s day!

—Karen
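
For readers tackling a similar audit on a single machine, the core technique is to read the Uninstall keys from both registry views. The sketch below is not the script from Bill Stewart’s article, just a minimal starting point, and it assumes a 64-bit PowerShell session on 64-bit Windows (a 32-bit session is redirected to the Wow6432Node view and will miss 64-bit applications).

# Enumerate installed applications from the 64-bit and 32-bit
# (Wow6432Node) registry views on the local machine.
$views = @{
    '64-bit' = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*'
    '32-bit' = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
}
foreach ($arch in $views.Keys) {
    # Skip registry keys that have no display name.
    Get-ItemProperty -Path $views[$arch] -ErrorAction SilentlyContinue |
        Where-Object { $_.DisplayName } |
        Select-Object @{Name='Architecture';Expression={$arch}},
            DisplayName, DisplayVersion, Publisher
}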


Meaningful Content

I’m emailing to share an IT pro’s feelings about your magazine. Once a month, I walk into the mail room and I see the little corner of Windows IT Pro peeking out of my mail bin, and I get excited. "What cool, detailed, and accurate technical information will I find in this excellent resource?" Sadly, that feeling of excitement and anticipation has been dwindling over the past four or five months. 

Don’t get me wrong: I still love the magazine. But can you please spend less effort selling me on the cloud and get back to teaching me about servers, applications, and administrative procedures? I know the cloud is out there, and my company uses hosted applications, but no one in authority at my company is ready to house our data outside of our own purview, and we’re just not interested in IT as a utility bill. 

Having said that, thanks for the excellent magazine. I still look forward to getting it each month. I just hope it centers on helping professionals manage their own assets and advance their abilities and careers.

—Stoney Heflin

I hear your concerns about the cloud. When new technologies and trends emerge, we find ourselves struggling to strike a balance between educating readers on what is coming and on the technology they use today. Regarding cloud coverage, many of our readers wonder what it means for their career if more services are moved to the cloud. And they want to know what skills and technologies they need to master to stay relevant. Then again, we have many readers like you who can’t or won’t move to the cloud anytime soon for various reasons. It's quite a challenge to meet everyone's needs. But we’ll keep trying! We appreciate the feedback and welcome other readers to share their thoughts. Drop us a line at [email protected].

—Amy Eisenberg, Editor in Chief


Backup and Recovery Review

I’m looking to move away from tape backup using Symantec Backup Exec 2010. I saw Nate McAlmond’s review of StorageCraft’s ShadowProtect Server and wanted to know whether you had any further input on how it compares with Symantec System Recovery Virtual Edition. My company is in the process of consolidating its physical servers onto one virtual powerhouse, with a smaller offsite server for backing up our virtual disks for disaster recovery. I evaluated StorageCraft, and it seemed nice, but I haven’t had a chance to evaluate Symantec’s product. Also, do you have any opinion about Microsoft Data Protection Manager 2010?

—Rick Rosenhagen

According to its specs, System Recovery Virtual Edition looks comparable to ShadowProtect Server. One major difference I see is price: StorageCraft’s product is a third the cost per server, but the company does offer a free trial. I’ve used Microsoft Data Protection Manager in the past and liked it at first, but we ran into problems on a pretty regular basis: It failed to complete backups. Data Protection Manager does advertise the ability to back up virtual machines (VMs) from the host, whereas StorageCraft needs to be installed on each VM (unless you shut down the VMs, but this isn’t supported). If I were you, I’d take a look at your Microsoft licensing and determine how much Data Protection Manager is going to cost fully installed, then decide whether it saves enough to be worth the trouble. StorageCraft was pretty trouble-free even with hundreds of gigs of data. I’ve been using Acronis at work for a few years now and have had good luck with it, but from what I’ve seen, the speed of backup and recovery with StorageCraft is quite a bit better.

—Nate McAlmond


Real-World Exchange 2010 Migration

I read, with keen interest, Brian Winstead’s “Real-World Exchange 2010 Migration” series (“Real-World Exchange 2010 Migration: Preparing for the Move,” “Real-World Exchange 2010 Migration: Staging the Move,” and “Real-World Exchange 2010 Migration: Implementing the New Stuff”), about Penton’s Exchange 2010 migration. At my office, we’re going through the same process. We’re a much smaller organization than Penton, with about 400 users in several locations. We’ve faced the same real-world budget issues as Penton, though. We aren’t on subscription for Microsoft Office and are “stuck” with Outlook 2007. We’ll have only two mailbox servers and will operate without a DAG. Instead, we’ve opted to virtualize Exchange under VMware, with the CAS, Hub Transport, and Mailbox roles all in one box.

One area that you didn’t cover in the article is backup. I understand some organizations keep multiple database copies in a DAG and so don’t even bother to back up anymore. We have an EqualLogic SAN and are using Veeam for daily disk-to-disk backups, with monthly backups transferred to tape. At some point, we hope to start replicating between two sites, but we don’t have the bandwidth for that just yet.

One area where I have to differ with the Penton IT staff is the importance of the new server-based archive feature. The Penton IT staff didn’t see the need to bother when they could simply offer oversized mailboxes. One key point for them to consider is bandwidth. I assume most of Penton’s sites will operate in Cached Exchange Mode. Let’s say everyone gets a multi-gigabyte mailbox with no archive. What happens when a person sits down at a new computer? Initializing the mailbox will really strain the WAN link.

My approach is to set up a default archive policy that aggressively moves messages to the online archive (say, after two or three months). Outlook and OWA can still see and search the archive, but the archive isn’t part of the Exchange cache. That way, when a user sits down in front of a new computer, only the modest mailbox data has to download. Again, thanks for the interesting article.

—Jonathan Shapiro
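
For readers who want to experiment with the approach Mr. Shapiro describes, a rough Exchange 2010 Management Shell sketch follows. The cmdlets are standard, but the 60-day age limit and the tag, policy, and mailbox names are illustrative; in practice, you’d apply the policy to mailboxes in bulk rather than one at a time.

# Default tag: move anything older than 60 days to the online archive.
New-RetentionPolicyTag -Name 'Move-To-Archive-60' -Type All `
    -AgeLimitForRetention 60 -RetentionAction MoveToArchive `
    -RetentionEnabled $true

# Wrap the tag in a policy, then apply it to an archive-enabled mailbox.
New-RetentionPolicy -Name 'Aggressive-Archive' `
    -RetentionPolicyTagLinks 'Move-To-Archive-60'
Enable-Mailbox -Identity 'jdoe' -Archive
Set-Mailbox -Identity 'jdoe' -RetentionPolicy 'Aggressive-Archive'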

Thanks so much for writing! I appreciate hearing about your migration. I’d like to address your specific points about backup and personal archives. The interview originally appeared on our website in three parts, beginning with “Real-World Exchange 2010 Migration: Preparing for the Move.” We didn’t have room for the entire interview in print. But we did talk a little bit about backup at the end of part 2 on the web ("Real-World Exchange 2010 Migration: Staging the Move"). We discussed using lagged copies and moving away from tape backup. In any case, I think it was clear that Penton hoped the implementation of DAGs would be a step away from running regular backups.

Regarding personal archives, when I look at this again, I think it’s kind of funny that Brent and Sean talked about the mailbox size as a reason not to use them. What we didn’t talk about in this interview is our email retention policy, which is probably a bigger reason why the personal archives wouldn’t be too beneficial. A couple years ago, Penton established an email retention policy based on managed folders in Exchange 2007 and Outlook 2007 such that all email is deleted after six months—unless there’s a specific business case for saving it and it’s moved into the appropriate managed folder. Believe me, that was a hard transition for a lot of people! The upside is that we’re not maintaining the volumes of old email on the system that we otherwise would be.

Thanks again for writing. I hope your migration goes well—and let me know how everything turns out.

—B.K. Winstead


My Calculations Concerning MaxTokenSize

In "The Care and Feeding of the Active Directory Security Access Token," Sean Deuby writes, "By default, MaxTokenSize is 12,000 bytes; if a user is a member of more than 120 groups, he or she might begin to experience slow logons and other erratic behavior." And then he also writes, "The MaxTokenSize value can be adjusted upward to accommodate more groups."

I think that instead of 120 groups, the number of groups mentioned should be 900, for two reasons. First, this is the number mentioned in “MaxTokenSize and Kerberos Token Bloat” at blogs.technet.com/b/shanecothran/archive/2010/07/16/maxtokensize-and-kerberos-token-bloat.aspx. Second, adjusting upward from 12,000 to 65,535 increases capacity by a factor of approximately 5.5 (65,535/12,000 = 5.46125). Starting from 120 groups, that allows 120 × 5.5 = 660 groups. Only 660 groups. This calculation doesn’t let us reach the 1,015-group limit.

—Dimitrios Kalemis

Your calculations are accurate. My figure of 120 groups is based on practical experience: We saw users begin to experience slow logons and occasional Kerberos-related issues at group counts as low as that, though things continued to function at higher group counts. My text, unfortunately, makes the number appear to be mathematically derived.

—Sean Deuby
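
For admins who do need to raise MaxTokenSize, it’s a REG_DWORD under the Kerberos Parameters key (the value typically isn’t present until someone sets it). The following is a minimal local-machine sketch; in practice you’d deploy the change through Group Policy, and a reboot is required for it to take effect.

$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters'

# Show the current value, if one has been set explicitly.
Get-ItemProperty -Path $key -Name MaxTokenSize -ErrorAction SilentlyContinue

# Raise the limit to the 65,535-byte maximum discussed above.
New-ItemProperty -Path $key -Name MaxTokenSize -Value 65535 `
    -PropertyType DWord -Force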
