My Summary of PASS Summit 2014

Well, it's over. It was fun. It was exciting. It was educational. But it's over. PASS Summit 2014 was a great success, as always.

Guy Glantser

November 18, 2014

This year I spent more of my time networking and socializing than attending sessions. I met so many old friends as well as new people, members of this big family called the "SQL Family." I spent some time in the community zone as well as in the exhibitors' hall, and, of course, in and between sessions. I also took advantage of the opportunity to consult with some of the Microsoft experts at the SQL Clinic. They helped me resolve some issues I encountered recently, which I will probably blog about soon.

As for the sessions, here is a recap of some of the more interesting ones I attended:

Performance Tuning Your Backups, Sean McCown

I didn't know what to expect from this session, and I was curious about what kind of tuning Sean was going to talk about. First of all, I would like to say that Sean is a great presenter. With a good sense of humor, he managed to present a relatively dry subject in a fun and engaging session.

Sean covered the basics, like using backup compression and striping the backup across multiple files (according to Sean, there is almost no difference between placing each file on a different volume and placing all of them on the same volume). He also talked about instant file initialization, and he demonstrated a few useful trace flags that produce a wealth of information about what's going on behind the scenes during a backup or restore operation.
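
Sean's exact flags aren't listed here, but trace flags 3213 (which reports backup/restore configuration details, such as buffer count and transfer size) and 3605 (which redirects that output to the error log) are the combination most often cited in the community for this purpose. A minimal sketch, assuming a database named MyDB and hypothetical backup paths; both flags are undocumented, so treat this as troubleshooting-only:

```sql
-- Enable backup internals output (3213) and send it to the error log (3605).
-- Both trace flags are undocumented; use them for troubleshooting only.
DBCC TRACEON (3213, 3605, -1);

-- Run a striped, compressed backup; buffer and transfer-size details
-- will appear in the SQL Server error log.
BACKUP DATABASE MyDB
TO  DISK = N'D:\Backup\MyDB_1.bak',
    DISK = N'D:\Backup\MyDB_2.bak'
WITH COMPRESSION, STATS = 10;

DBCC TRACEOFF (3213, 3605, -1);
```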

I learned two new things from this session, and it was well worth it. The first is the ability to back up to 'NUL' as the target. When you do that, SQL Server reads all the data from the data files and performs all other activities as if it were a regular backup, except for actually writing to a target. The data is not written anywhere, yet the backup operation completes. This is not useful as a backup method, of course, but it is very useful when troubleshooting and tuning your backups: it serves as a baseline for how long the read portion of a backup takes. You can then tune the write portion of the backup and come as close as possible to that baseline.
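
As a minimal sketch, assuming a database named MyDB (never rely on this as an actual backup):

```sql
-- Back up to the 'NUL' device: SQL Server reads all the data and goes
-- through the full backup code path, but writes nothing.
-- Useful only as a read-speed baseline.
BACKUP DATABASE MyDB
TO DISK = N'NUL'
WITH STATS = 10;  -- report progress every 10 percent
```

One caveat worth noting: a full backup to NUL still resets the differential base, just as a real full backup would, so use it with care on production systems.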

The second thing I learned from this session is that I should configure the buffers used during backup and restore operations rather than just using the default values. According to Sean, the backup operation is designed to run at low priority, and for that reason it uses a few relatively small buffers to move data around. This is good in some cases, but it might not be such a good idea in others. For example, if you plan a big upgrade, you have just shut down the application, and you are about to run a backup before the actual upgrade, you probably want the backup to complete as quickly as possible. In this case, you should increase the number of buffers and the buffer size in order to use more resources and speed up the operation.
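
These knobs are exposed as the BUFFERCOUNT and MAXTRANSFERSIZE options of the BACKUP command. A sketch with hypothetical values; the right numbers depend on your hardware, so test before relying on them:

```sql
-- Use more and larger buffers to push the backup harder.
-- MAXTRANSFERSIZE must be a multiple of 64 KB, up to 4 MB.
BACKUP DATABASE MyDB
TO DISK = N'D:\Backup\MyDB.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,            -- default is computed per backup, often much lower
     MAXTRANSFERSIZE = 4194304,   -- 4 MB per transfer
     STATS = 10;
```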

Thanks to Sean, now I have a few more tools in my toolbox for the next time I need to troubleshoot backup and restore operations.

Query Tuning Mastery: Manhandling Parallelism, 2014 Edition, Adam Machanic

If you're looking for a mind-blowing session, this is the one. Adam is the master of parallelism, and I always enjoy reading his posts and watching his presentations. He demonstrated some crazy techniques using T-SQL to fool the optimizer and force it to use parallel plans, and also to force on-demand row distribution among threads. As fascinating as it is, I don't see myself ever using one of these solutions in a production environment, for two reasons. First, these solutions are the result of long research into how the optimizer works, and they rely on very specific behaviors. These behaviors can change in the next service pack or even in the next cumulative update, not to mention in different versions. Second, the solutions are very complicated. I think that even with good documentation, they would be hard to follow. If someone else needs to maintain this code in the future, it's not going to be very pleasant.
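
To give a flavor of the genre, here is one widely cited example (from Adam's published blog work rather than necessarily this session): the undocumented trace flag 8649 removes the optimizer's cost penalty for parallel plans, which is exactly the kind of build-specific behavior described above. Table and column names are hypothetical:

```sql
-- Nudge the optimizer toward a parallel plan.
-- QUERYTRACEON 8649 is undocumented and may change or disappear in any
-- service pack, cumulative update, or version; do not ship this.
SELECT t.SomeColumn, COUNT(*) AS cnt
FROM dbo.BigTable AS t
GROUP BY t.SomeColumn
OPTION (QUERYTRACEON 8649);
```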

Nevertheless, what I took from this session is the creativity and the idea that there is always a way. If you encounter a performance issue and can't seem to force the optimizer to do what you want it to do, don't give up. There are so many ways to work around the optimizer; you just need to be creative and persistent. If it still doesn't work, call Adam…

This session is available on demand on PASStv.

Advanced Data Recovery Techniques, Paul Randal

Well, it's Paul Randal. Regardless of what he talks about, I just enjoy watching him on stage. Fortunately, Paul also has some really cool things to talk about. One of the things Paul loves to talk about (and does so well) is recovering from a corrupted database. I just had a customer with a corrupted database for which all the methods I knew had failed, so this session came just in time.

First, Paul demonstrated how to perform a hack attach: create a new database, take it offline, replace the new files with the corrupted files, and then bring it online again. This method can sometimes let you get into the database and either try to repair it or at least extract data out of it. He also demonstrated how to fix a corrupted boot page or file header pages by using a hex editor to replace the corrupted pages with healthy pages from an old backup. And finally, Paul demonstrated how to recover data from nonclustered indexes by creating a dummy table and updating system tables.
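
A minimal sketch of the hack attach, with hypothetical names and paths (the file swap itself happens at the operating system level, outside T-SQL):

```sql
-- 1. Create a shell database whose file layout matches the corrupted one.
CREATE DATABASE CorruptDB
ON (NAME = N'CorruptDB', FILENAME = N'D:\Data\CorruptDB.mdf')
LOG ON (NAME = N'CorruptDB_log', FILENAME = N'D:\Data\CorruptDB_log.ldf');

-- 2. Take it offline and, at the OS level, overwrite the new files
--    with the corrupted ones, then bring it back online.
ALTER DATABASE CorruptDB SET OFFLINE;
-- <swap the files in the file system here>
ALTER DATABASE CorruptDB SET ONLINE;

-- 3. If it won't come online cleanly, EMERGENCY mode may still let you in
--    to extract data, or to run repair as a last resort.
ALTER DATABASE CorruptDB SET EMERGENCY;
ALTER DATABASE CorruptDB SET SINGLE_USER;
DBCC CHECKDB (N'CorruptDB', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
```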

I'm not going to go into all the details here, but this was really cool stuff. Next time I encounter database corruption, I'll have several more methods to try before it's game over…

This session is available on demand on PASStv.

Working with Very Large Tables like a Pro in SQL Server 2014, Guy Glantser

Yeah, that's me. Believe it or not, I attended this session as well. They even let me get on stage. It was really exciting for me to present my session at PASS Summit, and I enjoyed it very much.

I talked about the challenges involved in working with very large tables, and demonstrated ways to overcome these challenges and manage such tables like professionals. I specifically demonstrated the ascending key problem, the WRITELOG problem, and the index rebuild problem. In order to solve these problems, I presented some of the new and exciting features in SQL Server 2014, such as the new cardinality estimator, incremental statistics, delayed durability, and online index enhancements.
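
As a quick taste of those features, here is a sketch using hypothetical object names (each statement stands alone; see the session recording for the full context):

```sql
-- Delayed durability (new in SQL Server 2014): allow transactions to commit
-- without waiting for the log flush, easing WRITELOG waits.
ALTER DATABASE MyDB SET DELAYED_DURABILITY = ALLOWED;

-- Incremental statistics: maintain statistics per partition instead of
-- rescanning the whole (very large) partitioned table.
CREATE STATISTICS stat_BigTable_OrderDate
ON dbo.BigTable (OrderDate) WITH INCREMENTAL = ON;

-- Online index enhancements: rebuild a single partition online,
-- waiting at low priority instead of blocking everyone.
ALTER INDEX IX_BigTable_OrderDate ON dbo.BigTable
REBUILD PARTITION = 5
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
        (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)));
```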

This session is also available on demand on PASStv.

There were two keynotes during the Summit. The first keynote wasn't very impressive. Microsoft announced that a major release of Azure SQL Database is expected soon, as well as a new free tier for Machine Learning. But apart from that, it was quite boring. The second keynote, presented by Rimma Nehme, a senior research engineer from the Microsoft Jim Gray Systems Lab, was really nice and very well presented. Rimma talked about cloud computing in general, but she didn't say anything new in terms of content.

And, of course, there were the parties. In addition to the community appreciation party organized by PASS, which took place at the EMP Museum and was really fun, there were also many parties organized by sponsors and exhibitors, so it was nice meeting everyone in the evenings too.

All in all, it was fun and educational as always, and I'm already looking forward to PASS Summit 2015…
