Topic 2 | Cybersecurity

The Future of Cybersecurity

Chapter 07/07

Snapshot

Learn how cybersecurity technologies will grow and change as other emerging technologies, such as artificial intelligence, become more prevalent.

Key Terms:

  • Internet of things
  • Artificial intelligence
  • Bot
  • Ambient computing

The precept known as Moore’s law—formulated by Gordon Moore, co-founder of microchip manufacturer Intel Corporation—states that the number of transistors included on a microchip will double about every two years.

[Image: A hand positions a processor above a personal computer motherboard.]

This doubling happens because of ongoing innovations in the design and manufacture of microchip components, which make it possible to pack more and more transistors into the same amount of space.

This “law” (really just a prediction based on a set of observations) has held essentially true so far, which is why computers keep getting smaller, faster, cheaper and more powerful. The principle expressed by Moore’s law explains why the computer in your pocket is 100,000 times more powerful [1] than the computer on the Apollo 11 lunar landing module.
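
To see the arithmetic behind that doubling, here is a minimal sketch in Python. The 1971 Intel 4004 starting point (about 2,300 transistors) is a commonly cited reference and an assumption added here, not a figure from the text above.

    # Moore's law as arithmetic: a quantity that doubles every two
    # years grows by a factor of 2 ** (years / 2).
    def transistors(start_count, years, doubling_period=2.0):
        """Project a transistor count forward under Moore's law."""
        return start_count * 2 ** (years / doubling_period)

    # Fifty years out from the Intel 4004 (1971, ~2,300 transistors):
    # 2,300 * 2**25 is roughly 77 billion, on the order of the largest
    # chips actually shipping in the early 2020s.
    print(f"{transistors(2_300, 50):,.0f}")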

Moore’s law is expected to hit a wall sometime in the 2020s as computers reach the physical limits of transistor manufacture and operation, but IT innovation shows no signs of slowing down—and that’s just as true for the black hats as it is for the rest of us.

Rise of the Machines: AI vs. AI

As with so many other cybersecurity threats, the latest—and one of the scariest—developed from innocent beginnings.

A bot (short for “robot”) is a piece of software that performs simple, automated tasks over the internet, such as “crawling” websites far faster than any human could to fetch and analyze a predetermined type of information. For instance, Google uses bots to index web content and return search engine results. In fact, bots account for more than half of all web traffic.
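
As an illustration of how such a bot works, here is a minimal, benign crawler sketch in Python using only the standard library. The seed URL is a placeholder and the page cap keeps the example polite; this is a sketch of the crawling idea, not Google’s actual indexing pipeline.

    # A tiny crawler-style bot: fetch a page, collect its links,
    # and repeat, breadth-first, up to a fixed page limit.
    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        """Collect the href attribute of every anchor tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_url, max_pages=10):
        """Breadth-first crawl from seed_url, visiting at most max_pages pages."""
        seen, queue = set(), [seed_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
            except (OSError, ValueError):
                continue  # skip unreachable or malformed URLs
            parser = LinkCollector()
            parser.feed(html)
            queue.extend(urljoin(url, link) for link in parser.links)
        return seen

    print(crawl("https://example.com"))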

Because bots do what they’re told and operate autonomously, they make great tools for hackers—especially when you lash a bunch of them together into an impromptu collective called a botnet. Many of the most damaging hacks of the past 20 years, such as 2008’s Conficker worm, used botnets to do their dirty work. Conficker spread through the internet, turning each infected machine into a host for another bot that would further disseminate the infection. At its height, Conficker commanded a botnet 10 million machines strong, capable of sending up to 10 billion pieces of spam per day.

Experts eventually got Conficker under control, but hackers keep using botnets, and now they’ve set their sights on a new target: the internet of things (IoT), that plethora of internet-connected devices such as smart doorbells with cameras you can look through on your phone, smart refrigerators whose temperature you can monitor remotely and internet-connected automobiles. The number of IoT devices in existence is expected to reach 30 billion in 2020—a number that makes hackers glad they got out of bed in the morning, because those 30 billion devices have notoriously weak security, making them perfect conscripts for the next botnet army.

As if that wasn’t bad enough, hackers are beginning to use artificial intelligence (AI) in their attacks. AI refers to a computer system that can interpret external data, learn from it and use what it learns to perform tasks and achieve goals. AI is useful to a hacker deploying a botnet because, in the past, a human had to handle key steps in launching the attacks that distributed the bot code to targeted machines. Humans also had to monitor the attack to evaluate its effectiveness. If cybersecurity measures at a targeted location successfully counteracted the attack, the hacker would then change the bots’ code, hoping to land on an “attack signature” that the defenders wouldn’t recognize as hostile.

AI changes the equation by taking humans out of the picture, which has two advantages: It allows hackers to launch more attacks more frequently, and it makes the attacks more effective because the AI can detect successful defenses and respond to them much more quickly than a human can. In response, cybersecurity professionals are developing AI defenses to keep up with an AI botnet, but right now such technologies are still in their infancy.
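
To make the defensive side concrete, here is a toy sketch of one simple form such a defense can take: unsupervised anomaly detection over traffic features, using scikit-learn’s IsolationForest. The feature set and every number below are illustrative assumptions, not real telemetry or any particular vendor’s method.

    # A toy anomaly detector for network traffic. The three features
    # (packets/sec, bytes/packet, distinct destination ports) and all
    # numbers are illustrative assumptions, not real telemetry.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Simulated "normal" traffic: moderate rates, few destination ports.
    normal = np.column_stack([
        rng.normal(50, 10, 1000),    # packets per second
        rng.normal(500, 100, 1000),  # bytes per packet
        rng.integers(1, 5, 1000),    # distinct destination ports
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A botnet-style burst (high rate, tiny packets, many ports) next
    # to an ordinary-looking sample; predict() returns -1 for anomalies.
    suspect = np.array([[900, 60, 40], [55, 480, 2]])
    print(model.predict(suspect))

A real deployment would retrain continuously as traffic patterns drift, which is exactly the cat-and-mouse dynamic described above.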

[Image: An aerial view of a busy city at dusk.]

Ambient Computing: From the Internet of Things to the Internet of Everything

Imagine a world where the alarm that wakes you up in the morning, the switches that turn on your lights, the thermostats that regulate the air temperature in your house and the water temperature in your shower, the machine that makes your coffee, the tablet that provides your breakfast reading material, the locks that secure your door and even the car you drive to work are all part of the same harmonious computing environment.

That’s the goal of companies like Google, which want us to adopt ambient computing as the wave of the future [2]. The potential benefits are obvious: You familiarize yourself with one operating system, learn one set of commands, set up a single account, use one password and get one bill at the end of the month. All your hardware and software works together because it’s all provided by the same company.

The potential disadvantages are equally obvious: What about security for all those internet-connected devices? How reliable will this hardware be? If one of the devices or one node of the network gets hacked, does that put the entire suite at risk? So far there are more questions about ambient computing than answers, but one thing is certain: Google and other major tech players are doing their best to get us there so we can find out just how great such a future would (hopefully) be.

[Image: A young professional sits at a computer terminal.]

Who’s Responsible for the Future of Cybersecurity?

One sad constant in the history of cybersecurity is that protection measures often lag behind the need for them. We didn’t develop antivirus software until we started discovering viruses. We didn’t start writing and passing cybercrime laws until people started using computers for theft, fraud and vandalism. Part of the reason for that lag is typical organizational inertia; it’s hard to justify devoting resources to a threat that doesn’t exist yet when you’re running just to keep up with everything else you already have to do.

In addition, very few among us have a crystal ball—that is to say, a useful combination of knowledge, wisdom and intuition—that allows us to look into the future and predict which threats are coming our way. Add to that the fact that the people responsible for protecting society from threats, such as law enforcement officers and legislators, often lack the field-specific expertise to make those kinds of prognostications.

The future of cybersecurity will probably proceed much as the past and present have: Technological innovation will enable new, more severe security threats, and when a sufficiently distressing pain point is reached, those in a position to do something about it—tech companies, law enforcement agencies and legislators—will take steps to address the new problem.

That’s why each of us should take more responsibility for our own cybersecurity—and why, even with that, there will still be a need for cybersecurity professionals.

References

  1. “Your Mobile Phone vs. Apollo 11’s Guidance Computer.” RealClearScience. July 2019.

  2. “Google’s Aggressive ‘Ambient Computing’ Strategy Means It Wants to Be Everywhere.” CNBC. October 2019.