Advancements in IT cybersecurity lag all other areas of tech, which puts applications and data at unnecessary risk.

Christopher Tozzi, Contributing Editor

July 6, 2020

8 Min Read

By most measures, information technology has advanced by leaps and bounds during the past several decades. CPUs are orders of magnitude faster, less expensive and more efficient than they were around 2000. Software reliability is much improved, with Blue Screens of Death fading into memory. The cloud has made our data and applications accessible from anywhere, enabling a level of convenience that folks could barely fathom 20 years ago. Yet, amid all these improvements, one facet of IT stands apart: IT cybersecurity.

This article by Christopher Tozzi first appeared on Channel Futures’ sister site, IT Pro Today.

In most senses, we are no better at securing applications and data today than we were 10, 30 or even 50 years ago. Why is that? I don’t think there’s a single answer. But I do think it’s worth examining the myths surrounding IT security that are often cited as explanations for why systems and applications remain so insecure, even as they have improved markedly in other respects.

IT Cybersecurity Trends: Things Are Getting Worse


It’s hard to imagine why, in this day and age, only 5% of companies properly secure their data.

You don’t need to be an IT cybersecurity analyst to sense things aren’t great on the IT cybersecurity front. Anyone who reads the news knows that barely a month goes by without a major IT security breach.

This has been the case for years, and yet things only seem to be getting worse. Consider the following data points:

These trends have emerged despite the fact that the IT cybersecurity market grew by 3,500% between 2004 and 2017. Clearly, there is plenty of interest in trying to improve IT cybersecurity. There are few tangible results to show for it.

Myths About IT Cybersecurity Failures

It’s easy to point fingers to explain why we’ve done a poor job of improving IT security over the years. Unfortunately, most of the usual targets of that blame bear limited responsibility, if any, for our lack of security.

Following are common myths about why IT cybersecurity remains so weak.

  • Myth 1: Software has grown too complex to secure.

It’s hard to deny that software is more complex today than it was 10 or 20 years ago. The cloud, distributed infrastructure, microservices, containers and the like have led to software environments that change faster and involve more moving pieces.

It’s reasonable to argue (as people have been doing for years) that this added complexity has made modern environments more difficult to secure. There may be some truth to this. But, on the flip side, you have to remember that the complexity brings new security benefits, too. In theory, distributed architectures, microservices and other modern models make it easier to isolate or segment workloads in ways that should mitigate the impact of a breach.

Thus, I think it’s simplistic to say that IT cybersecurity remains so poor because software has grown more complex, or because security strategies and tools have not kept pace. You could argue just as plausibly that modern architectures should have improved security.
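To make that isolation point concrete, here is a minimal sketch using the official Kubernetes Python client. It creates a network policy that allows only pods labeled app=checkout to reach pods labeled app=payments; the namespace, labels and policy name are purely illustrative, and a cluster whose network plugin actually enforces NetworkPolicy is assumed.

```python
# Sketch: segment a "payments" workload so only the "checkout" service can reach it.
# Assumes local kubeconfig credentials and a CNI plugin that enforces NetworkPolicy;
# the namespace, labels and policy name are hypothetical.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="payments-allow-checkout-only"),
    spec=client.V1NetworkPolicySpec(
        # Select the pods this policy protects.
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        policy_types=["Ingress"],
        # Allow ingress only from pods labeled app=checkout; all other traffic is denied.
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "checkout"})
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="shop", body=policy)
```

In an environment segmented like this, a compromised front-end pod can’t simply pivot to the payments service, which is exactly the kind of blast-radius reduction a monolithic deployment struggles to offer. Whether teams actually apply that isolation is, of course, another question.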

  • Myth 2: Users are bad at security.

It has long been common to blame end users for security breaches. Yet, while it’s certainly true that some users – perhaps even most users – don’t always follow security best practices, this is a poor explanation for the types of breaches that grab headlines today.

Attacks that lead to the theft of millions of consumers’ personal data, or that hold billions of dollars’ worth of information for ransom, aren’t typically caused by low-level employees writing their workstation passwords on sticky notes because they don’t know any better. They’re the result of mistakes like unpatched server software or data accidentally exposed on the internet for all to see.

So, yes, ordinary end users could do more to help secure the systems and applications they use. But they are not the cause of the biggest problems in IT cybersecurity today.
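For perspective on what “accidentally exposed” looks like in practice, here is a minimal sketch, assuming AWS S3 and the boto3 library, of the kind of automated audit that catches world-readable storage buckets. The check is deliberately simple, and the account it inspects is whatever your local credentials point at.

```python
# Sketch: flag S3 buckets whose ACLs grant access to public groups.
# Assumes AWS credentials are already configured locally (e.g., via ~/.aws).
import boto3

PUBLIC_GROUP_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets():
    """Return (bucket, permission) pairs for buckets readable by public groups."""
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUP_URIS:
                exposed.append((name, grant["Permission"]))
    return exposed

if __name__ == "__main__":
    for name, permission in find_public_buckets():
        print(f"WARNING: bucket {name} grants {permission} to a public group")
```

Checks like this take minutes to run; the point is that the headline-grabbing exposures tend to stem from operational oversights of this sort, not from end users mishandling their passwords.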

  • Myth 3: The government doesn’t do enough to promote IT security.

Blaming the government for problems is always popular, and IT cybersecurity is no exception. Some folks contend that private companies can’t stop the never-ending stream of security breaches on their own, and that the government needs to do more to help.

Asking governments to do more on the IT cybersecurity front is reasonable. However, it’s not as if governments don’t already invest significantly in IT security initiatives. And given that government support for cybersecurity has only increased over the decades, you’d think we’d be getting better at security over time if government intervention were the key. But we’re not, which suggests that government investment in cybersecurity is not the make-or-break factor for achieving real progress.

  • Myth 4: Developers don’t care about (or understand) security.

Another scapegoat for the lack of improvement in IT cybersecurity is developers. Indeed, the idea that developers either don’t care about security, or simply don’t understand how to write consistently secure code, forms the root of the DevSecOps concept, which proposes that we’d finally improve software security if only developers and security engineers worked more closely together.

Here again, this is, at best, an exaggerated effort to explain the lack of progress in IT cybersecurity. There are surely some developers who don’t take security seriously, or who lack proper training in secure coding best practices. But to say that all developers fail to write secure code is a gross generalization that can’t be substantiated. More to the point, it also ignores the fact that many breaches are the result not of insecure code, but of insecure deployments or configurations.

  • Myth 5: Security is a cat-and-mouse game, and we’ll never get ahead.

Finally, you sometimes hear IT cybersecurity described as a sort of cat-and-mouse game. The implication is that it’s impossible to make real improvements to cybersecurity because the bad guys will always find ways to get around whichever new defenses you come up with.

The reality, though, is that few fundamental improvements to cybersecurity have been achieved in a long time. Buffer overflow exploits, injection attacks, denial-of-service attacks and the like have been widespread attack vectors for decades. You might think that the industry would have figured out ways of fundamentally plugging these sorts of holes by now. But it hasn’t, even though other realms of the IT industry have seen astounding leaps forward.
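To underline how well understood some of these holes are, here is a minimal Python sketch, using the standard-library sqlite3 module with an illustrative table, contrasting an injection-prone query with the parameterized form that has been the accepted fix for decades.

```python
# Sketch: the same user-supplied value run through an injectable query
# and through a parameterized one. The table and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query,
# so this returns every row in the table.
leaked = conn.execute(
    "SELECT name, secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", leaked)

# Safe: a parameterized query treats the input as data, not SQL,
# so the same payload matches nothing.
safe = conn.execute(
    "SELECT name, secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)
```

Parameterized queries have shipped in every mainstream database API for a generation, yet injection sat at the top of the OWASP Top 10 throughout the 2010s.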

We now have operating systems that virtually never crash, CPUs that can run on comparatively minuscule amounts of electricity and word processors that can predict what you want to write before you write it. Why, despite advances like these, are we still living with the same IT cybersecurity problems that have plagued us since the 1980s?

Making Real Improvements to Security

Now that we know what’s not to blame for the enduringly poor security record of IT organizations today, what’s the real cause?

That’s a very complicated question. The factors described above are each partly at play, even though no single one fully explains the problem. Some developers probably should work harder to write more secure code. The government could do more to support private enterprise in developing cybersecurity solutions. And so on.

However, I suspect the single largest factor in the lack of improvement to security is this: Companies that suffer major breaches face few meaningful consequences. Sure, organizations that report major security problems might suffer some backlash from consumers. But I’ve yet to hear of a major corporation going out of business for failing to take IT cybersecurity seriously. Equifax, for example, remains as profitable as ever despite the massive breach it suffered in 2017.

Nor do governments usually impose major fines on companies that suffer breaches due to negligence. To fine a company for a breach, you first have to prove that it violated a specific compliance requirement. Even if you do that, the fines are typically slaps on the wrist. To use Equifax as an example again, it paid $700 million in a settlement for its data breach. That may sound large, but it amounts to only about 20% of Equifax’s annual revenue in recent years.

In short, then, I think we’d finally see real improvements to IT security if companies faced more tangible consequences for their failure to keep systems and data secure. Twenty years ago, another writer on this site proposed allowing consumers to sue vendors for security flaws in the same way that auto manufacturers are held liable for defects in their products. Maybe that sort of solution would lead to the accountability that organizations need to take security truly seriously.


About the Author(s)

Christopher Tozzi

Contributing Editor

Christopher Tozzi started covering the channel for The VAR Guy on a freelance basis in 2008, with an emphasis on open source, Linux, virtualization, SDN, containers, data storage and related topics. He also teaches history at a major university in Washington, D.C. He occasionally combines these interests by writing about the history of software. His book on this topic, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” is forthcoming with MIT Press.
