The Past, Present, and Future of C#

Plus practical tips for writing manageable C# code

Blair Greenwood

February 3, 2014


Related: "C# Book Reviews"

I recently had the chance to talk with Eric Lippert, an architect on the C# analysis team at Coverity, a company whose tools analyze C# source code for known defects. Before joining Coverity, Lippert was best known for his work on the Microsoft languages team, where he spent 16 years on the design and implementation of the C# language.

The Evolution of the C# Language

An interesting part of our conversation revolved around the idea that developers tend to fall into two camps when it comes to the evolution of programming languages. In one camp are those who believe that evolving a language provides improvements that ultimately make software better. In the other camp are those who believe a language should stay the way it is because it's already complex enough, and that holding off on new functionality lets developers concentrate on learning and mastering the intricacies of the language they already have.

When Anders Hejlsberg led the initiative to develop the C# language nearly 12 years ago, it was designed to be a simple object-oriented language that would feel familiar to C, C++, and Java programmers. A great deal of innovation has transpired over the past 12 years: a generic type system, query comprehensions (LINQ), interoperability with dynamic languages, and a host of other features have been added to the language over time.
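As a quick illustration of how far the language has moved from that original "simple" core, here is a minimal sketch exercising the three additions mentioned above; the class and variable names are purely illustrative, and it should compile with any recent C# compiler.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class EvolutionDemo
    {
        static void Main()
        {
            // C# 2.0: the generic type system
            var scores = new List<int> { 90, 72, 85 };

            // C# 3.0: query comprehensions (LINQ)
            var passing = from s in scores
                          where s >= 80
                          select s;

            // C# 4.0: interoperability with dynamic languages via 'dynamic'
            dynamic total = passing.Sum();
            Console.WriteLine(total); // 175
        }
    }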

With that said, the evolution of C# has brought new complexities to the table.

"Simple is no longer accurate. It's now a very complicated language," Lippert said.

Lippert went on to describe a tension that exists for most developers.

"Sometimes it's not even two camps, per se. I often get this from the same person in a conversation, where they say, 'I really want this new feature, and I would make really good use of that feature, but I worry I will have to train my coworkers on how to use it correctly,'" Lippert said.

This tension is on the minds of many C# developers. Lippert elaborated that future development of the language needs to balance the value of new, useful features against the complexity they add.

Lippert also outlined several questions that should be asked when considering whether to add a new feature to a language:

Is this new feature easy to use?

Does this new feature add enough benefit to include in the language?

Does this new feature do a good job of not introducing strange interactions with existing features?

Does this new feature make it easy for developers to accidentally introduce errors into their code?

Writing Quality Code with C#: Common Gotchas

Related: "Get Started Building Windows Store XAML/C# Apps"

"One of the architects on the .NET Framework, Rico Mariani, used to say that he wanted C# to be a pit of quality language. You throw users into the pit of quality, and everything they do is quality," Lippert said. In contrast, a pit of failure is where users have to climb their way out of the pit to write quality software.

Lippert reiterated that C# was designed to make writing quality software easy.

"C# was designed so that the obvious things to do and the right thing to do are the same thing. For the most part, that's true. But, no design is perfect," Lippert explained.

I asked Lippert if he could help describe common pain points that developers experience with the language.

One common problem concerns C#'s automatic memory management.

"If you allocate a bunch of memory, then the runtime automatically figures out when you're done using it and returns it back to the system. That's great. But it does not have automatic management of non-memory resources, so things like you open a file, or a database connections or any other scarce resource like that, the runtime doesn't know about them," Lippert said.

Eventually the runtime will clean up those non-memory resources, but it does so on its own schedule. Although that provides some level of automatic management, you're at the mercy of the garbage collector to free up those resources.

"Just for politeness, programs should be aggressively cleaning up non-memory resources in C#, and that's a very common mistake to not do so," Lippert explained.

Another common problem is value equality and representation in C#; in other words, when do two things count as equal?

Lippert explained that there are several different ways to check for equality in C#, and many of those methods are inconsistent with one another.

He illustrated with an example: Imagine writing the number two on two different pieces of paper. Are those two objects equal? The values are equal, but they're still two different pieces of paper.

"So there are often situations in C# where you think you're doing comparisons and values, but you're actually comparing references or vice versa," Lippert said.

Although you can painstakingly debug your code for these common C# programming problems, Coverity's development testing platform is designed to flag them automatically.

The Future of C#

Today, we're up to C# 5.0, and C# 6.0 is on the horizon. Although C# 6.0 has been announced, there's no public schedule for when the new version will be released. Most interesting is Microsoft's work on its Roslyn project, which aims to break down the traditional model of black-box compilers by giving developers a documented, publicly available compiler API. Currently, the project is in a Community Technology Preview (CTP) state.

"The point of the Roslyn API isn't just to improve the quality of the compiler itself and make it easier for the compiler team to add new features in the future, but also it's to make it easier for companies like Coverity to work on static analysis of those programming languages," Lippert said.

Once the Roslyn library is available for public use, companies will be able to provide more accurate analyses because their products can run on the same underlying library of tools as the Microsoft compiler.
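As a taste of what that looks like, here is a minimal sketch against the Roslyn API as it later shipped in the Microsoft.CodeAnalysis.CSharp package (the CTP-era namespaces differed): it parses a snippet of source and lists every object-creation expression, the kind of syntax-level query a static analyzer starts from.

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class RoslynSketch
    {
        static void Main()
        {
            // Parse a small piece of C# source into a syntax tree.
            var tree = CSharpSyntaxTree.ParseText(
                "class Demo { void M() { var r = new System.Random(); } }");

            // Walk the tree and report every object-creation expression.
            var creations = tree.GetRoot()
                                .DescendantNodes()
                                .OfType<ObjectCreationExpressionSyntax>();

            foreach (var creation in creations)
                Console.WriteLine("Object created: " + creation.Type);
        }
    }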

Talking about the Microsoft development ecosystem, Lippert emphasized that the company is very much in the business of investing time and research into producing quality languages, such as TypeScript, its up-and-coming typed variant of JavaScript.

"That's the kind of thing Microsoft does. They make productivity tools for professional developers. There's a lot of interesting things coming in the next cycle for the languages team at Microsoft," Lippert said.
