Could Microsoft's AI Strategy Tank Its Reputation?

Microsoft is making a high-stakes bet on AI. But will that bet tarnish the reputation the company has worked long and hard to rebuild, bringing a return to the "Microshaft" days?

More than any other major tech company, Microsoft has bet its future on AI. Financially, that makes sense. There's plenty of reason to expect businesses to invest heavily in AI technology in the coming years.

But Microsoft also faces a growing liability from its AI investments. Concerns over potential harm caused by AI technology could tarnish the shiny reputation that Microsoft has earned for itself in the world of tech.

Microsoft: The 'Good' Big Tech Company

To understand how AI could affect Microsoft's reputation and why that would be such a big deal, some historical context is in order.

If you had surveyed folks about their feelings toward different tech companies two or three decades ago, most would likely have described Microsoft negatively. At the time, there was a widespread sense that the company's software products were lackluster at best, and that it foisted them upon users through monopolistic business practices that crowded out superior alternatives.

This was the era when Microsoft killed OS/2, widely viewed (then and now) as a better operating system than Windows. It was a time when developers hated Microsoft software so much that they created a video game, "Microshaft Winblows 98," to parody the company's flagship products. It was a period when Microsoft spent millions of dollars trying to discredit upstart open source projects, and when the company's CEO denounced Linux in the media as a "cancer."

In short, Microsoft was the domineering bully of the tech world. Against this backdrop, Apple enjoyed a reputation as an embattled underdog and people could take seriously promises by Google, a newcomer, not to be evil.

Fast forward to 2024, however, and the tables have turned dramatically. Today, it's companies like Amazon and Google that are under fire, accused of monopolistic practices or of using their market dominance to push subpar products. You'd be hard-pressed to find anyone claiming that Microsoft is a paragon of antitrust virtue or that its products are the very best in the world. But if you had to pick a tech giant that looks "evil" today based on its business practices, you probably would not choose Microsoft.

Meanwhile, the company has bolstered its image in other notable ways. It has become a major supporter of popular open source projects. Its security researchers have helped find and investigate some of the most prominent threats of recent years. It owns platforms like GitHub, which is at the center of modern approaches to software development, and LinkedIn, which (boring though it may be) has become an inextricable part of modern work culture.

In short, Microsoft in 2024 looks like an innovative, forward-thinking tech company that is easier to love — or at least harder to hate — than many of its peers.

Microsoft's Unique AI Strategy

But there's a chance that Microsoft's bets on AI could change all of that.

To date, Microsoft's AI strategy has hinged on its close partnership with OpenAI, the company behind ChatGPT and other generative AI technology. This approach distinguishes Microsoft from other large tech companies, like Google and Amazon, which have invested more heavily in in-house AI development.

From a business perspective, Microsoft's decision to pursue AI technology via a partner makes sense. Working with OpenAI allows Microsoft to buffer itself a bit from lawsuits surrounding AI technologies. It will also limit Microsoft's losses in the event that OpenAI's tech turns out not to be as valuable as AI optimists believe and Microsoft has to write off the $10 billion it has invested in the company.

But from a reputational standpoint, Microsoft's approach to AI presents some risks, because negative perceptions of OpenAI will rub off on Microsoft even though they are separate companies. And increasingly, OpenAI seems to be viewed negatively.

That's due in particular to worries that OpenAI would rather maximize profit than develop safe AI, and that it is therefore pursuing advanced technology that could harm humanity. Such claims became widespread after the company temporarily ousted its CEO, Sam Altman, in late 2023.

To date, the true reasons for Altman's brief ouster remain murky, and there is no hard evidence that the company is actually creating unsafe technology. But when it comes to reputation, perception matters more than reality, and public perceptions of OpenAI seem to have taken a decidedly negative turn.

The fact that OpenAI's name is a mockery of openness doesn't help Microsoft's reputation, either, especially within tech communities that value transparency — although that risk is not as serious as claims about OpenAI's potential to develop apocalyptic software products.

What Does OpenAI Mean for Microsoft?

In theory, the fact that Microsoft doesn't own OpenAI should help distance the company from negative perceptions of OpenAI and its technology. But I'm not sure it actually will. Microsoft has become so closely aligned with OpenAI that whatever OpenAI does rubs off on Microsoft in an indelible way. OpenAI technology is now deeply embedded in Microsoft products like GitHub and Microsoft 365, and Microsoft gained one of only four seats on the OpenAI board following the Altman kerfuffle. (Microsoft's board seat is nonvoting, but given that Microsoft is the only external company with this type of relationship with OpenAI, it reflects the outsize influence Microsoft has over the company.)

At the same time, however, because OpenAI is still an independent entity, Microsoft doesn't have as much control over it as, for example, Google has over AI software it builds itself. It's a safe bet that Microsoft can significantly influence OpenAI's decisions, but it's not fully in charge — which means it would not be able to stop OpenAI if the company were to decide to pursue AI initiatives viewed as harmful or unethical.

On top of this, Microsoft has far more to lose reputationally from concerns about harmful AI technology than other tech companies do. All of the reputational rehabilitation that Microsoft has achieved over the past couple of decades could be reversed if tech analysts, regulators, and/or the public at large come to see Microsoft as being behind "evil" decisions made by OpenAI.

Back to the Future: The Return of Microshaft?

Admittedly, my take on the risk that Microsoft has assumed due to its AI strategy is quite speculative. It's entirely possible that OpenAI will produce only great, safe technology, Microsoft's close partnership with the company will yield enormous dividends without undercutting its reputation, and everyone will live happily ever after.

But if developments take a different direction — if OpenAI releases new AI services with a serious potential to cause harm, or if the company simply appears not to take safe AI seriously enough — we could find ourselves living in the 1990s all over again. Microsoft would come to be seen anew as a company prioritizing profit at the expense of everything else — except this time, the criticism wouldn't be limited to selling crappy software via monopolistic business practices. It would go deeper, centering on the company's failure to rein in technology that could end up having a profoundly negative impact on humanity.

About the author

Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, "For Fun and Profit: A History of the Free and Open Source Software Revolution," was published by MIT Press.