History of Cybersecurity: Key Changes Since the 1990s and Lessons for Today
Effective cybersecurity today requires a balanced approach that incorporates both modern innovations and fundamental best practices from the past.
September 16, 2024
Written by Danny Wheeler, Director of IT at Recast Software
In the realm of cybersecurity – which is a huge part of my job as the Director of IT (and Security) at Recast – most folks tend to focus on what's coming next.
But you can't excel at addressing future threats unless you also understand the past. Although the goals and activities of threat actors have changed tremendously over the past three decades, many traditional threats remain just as prevalent today as they ever were. As a result, a holistic approach to cybersecurity requires managing risks of all types – not just the latest, greatest threats that have emerged in the age of AI.
To that end, allow me to walk through what I see as key stages in the evolution of cybersecurity from the dawn of the Web in the 1990s through the present. I'll also explain how today's IT and cybersecurity leaders can leverage the lessons of the past to erect stronger cybersecurity defenses.
The 1990s and Early 2000s: The Dawn of Modern Hacking
Attacks against computer systems stretch back to the early days of computing in the mid-twentieth century. But to avoid writing too much of a tome, I'm going to begin my analysis of the history of IT security in the 1990s, when the internet first entered homes and businesses in a big way.
At this time, most folks were still trying to figure out what the internet really meant and what you could do with it – including hackers. That's probably why most early IT security attacks focused on causing relatively minor damage, like defacing webpages or disrupting internet services. Most cyber attackers hadn’t considered using the internet to pursue financial gain or cause serious harm to organizations.
To be sure, financial crimes based on computer hacking took place in the '90s and early 2000s. But they didn't dominate the news in an endless stream of cautionary tales, and most people thought the 1995 movie Hackers was a realistic depiction of how hacking worked (in truth, it's much, much more boring, but I digress).
I like to think of cyberattacks during this period as the equivalent of graffiti: They were annoying but rarely caused extensive damage, and it was easy enough to clean up the aftermath.
Mid-to-Late 2000s: Criminals Embrace the Internet
By the mid-2000s, however, internet-based attacks became more harmful and frequent. This was the era when threat actors realized they could build massive botnets and then use them to distribute spam or send scam emails.
These attacks caused real financial harm, but they weren't exactly original types of crime. Attackers were merely conducting traditional criminal activity, like scams, through a new medium: the internet.
Importantly, this era also saw the advent of the Dark Web and underground online marketplaces like Silk Road. These were a natural outgrowth of criminals' embrace of the internet since they provided a place to sell stolen digital goods.
The 2010s: A Golden Age of Cybercrime
Gradually, threat actors evolved their strategies. Instead of botnets, which blasted massive amounts of spam at targets that were typically chosen at random, they launched ransomware attacks focused on specific organizations.
(For the record, I should note that ransomware attacks date back at least to the 1980s. But the 2010s were the decade when ransomware truly took off and became a widespread threat for virtually every organization.)
The 2010s were also a time of massive technological change. The advent of cloud computing, widespread adoption of mobile devices, and rollout of Internet of Things (IoT) hardware meant businesses could no longer define clear network perimeters or ensure that sensitive data always remained in their data centers.
At the same time, this period saw the rise of state-sponsored threat actors alongside traditional hacking groups. State-sponsored actors enjoyed access to substantial resources, which increased their ability to carry out effective attacks.
The combination of these three factors – targeted ransomware attacks, increasing IT complexity, and state-sponsored attackers – made it harder than ever for organizations to keep their digital assets secure. Despite some investment in protections like firewalls and software patching tools, most companies struggled to keep up with cyber threats because basic solutions weren't enough to contend with the vast scale or complexity of cyber risks.
The Late 2010s: Enterprises Fight Back
Things began changing in a positive direction starting in the late 2010s. By then, the typical organization realized that basic, reactive cybersecurity protections were not enough. Companies learned that they also needed to invest in preventative measures.
Hence the widespread deployment of multi-factor authentication (MFA), which made it significantly harder for attackers to impersonate targets using stolen credentials. Companies also invested in more sophisticated cybersecurity solutions and techniques, such as threat intelligence and threat hunting.
To be sure, these solutions didn't eradicate cybercrime. But they were effective in reducing overall levels of risk.
The Recent Past: Covid and AI
Over the past few years, we've lived through two major new challenges that further complicated cybersecurity.
One was the Covid pandemic. When many white-collar employees began working from home, enterprise networks extended to include home networks. Businesses had to adopt more sophisticated cybersecurity protections, such as zero-trust network security policies.
Meanwhile, the AI boom of the past few years has placed new tools in the hands of threat actors. Today, the bad guys can turn to AI to assist with tasks like selecting targets and generating malware, introducing greater efficiency and scalability to hacking operations.
In response, we're currently witnessing another inflection point in cybersecurity strategies. Enterprises have realized that to keep up with attackers, they need to achieve greater levels of efficiency and up their game. To do this, they're investing in AI-enabled cybersecurity tools that provide advanced behavior analysis, automated response, and other capabilities.
Lessons From Cybersecurity History – Or, Why We Need to Go Back to the Basics
There are many potential takeaways from the cybersecurity history I've just laid out. However, I will focus on the one that I think is the most important for guiding modern cybersecurity strategies.
That takeaway is how far we've drifted from basic cybersecurity practices – and how important it is not to lose sight of standard best practices in today's age of highly sophisticated threats.
To be sure, solutions like MFA and AI-enabled cybersecurity tools are great things, and they're a necessary part of a modern defense strategy. However, these tools don't excel at addressing more basic security risks, like unpatched software or endpoints that an organization is not monitoring effectively.
That's why effective cybersecurity requires a defense-in-depth strategy that hinges on adopting tools and techniques from across all stages of cybersecurity history that I described above. Seemingly mundane solutions, like patch and vulnerability management software and least-privilege policies, remain as important as modern innovations, like fancy AI-powered threat-hunting and modeling tools.
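To make the point concrete, here is a minimal sketch of the kind of check that sits at the core of any patch and vulnerability management tool: compare what's installed against what's known to be vulnerable. The package names, versions, and advisory data below are entirely hypothetical – real tools draw on live software inventories and vulnerability feeds.

```python
# Hypothetical advisory data: for each package, the highest version
# still affected by a known vulnerability.
KNOWN_VULNERABLE = {
    "openssl": (3, 0, 7),
    "log-shipper": (1, 4, 2),
}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '3.0.5' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def find_unpatched(inventory: dict) -> list:
    """Return the packages whose installed version is at or below
    a known-vulnerable version."""
    findings = []
    for package, installed in inventory.items():
        affected_max = KNOWN_VULNERABLE.get(package)
        if affected_max and parse_version(installed) <= affected_max:
            findings.append(package)
    return findings

# A (hypothetical) endpoint's installed software:
endpoint = {"openssl": "3.0.5", "log-shipper": "1.5.0", "nginx": "1.25.3"}
print(find_unpatched(endpoint))  # openssl is behind; the others are fine
```

Nothing here is sophisticated – and that's the point. A check this mundane, run consistently across every endpoint, closes gaps that no AI-powered threat-hunting tool is designed to catch.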
Conclusion: To Thrive in the Present, Remember the Past
This brings me back to the observation with which I started: You can't excel at meeting today's cybersecurity threats without drawing, in part, on lessons from decades past.
There's no denying that some of today's threats are more sophisticated and complex than ever and require novel solutions. But an unpatched server or a user with unnecessary administrative privileges remains just as much of a risk today as it was 20 years ago, which is why cybersecurity lessons from the past still apply to the present.
About the Author
Danny Wheeler is the Director of IT at Recast Software. He has 20 years of experience building technology departments, motivating teams, and achieving business objectives, using technology as a force multiplier to solve problems. He is passionate about cloud technology and cybersecurity.