
Cybersecurity in 2020

Posted February 5, 2020, in Cutter Business Technology Journal, Vol. 33, No. 1

Cybersecurity urgently needs attention from businesses and government, according to Cutter Consortium Senior Consultant Paul Clermont. He highlights how several colliding trends — complexity, AI, and interconnectedness — are compounding long-standing risks. Clermont discusses the tactics necessary to address them but cautions that “compounding the difficulty of these tasks is the need to be able to execute algorithms and procedures in nanoseconds — a tall order that should inspire a bit of conservatism about how much functionality and connectivity we might want versus what we truly need.”

Over the past few years, I have avoided predicting specific technological innovations, focusing instead on issues around privacy and security and the evolving perceptions of technology and big tech in the public sphere. I believed these issues would become increasingly critical in the emerging era of artificial intelligence (AI). They have. One prominent effect has been the speed with which tech’s long honeymoon with the public has come to a sobering end, with lots of tough questions from governments and pundits that have not been very well answered by industry leaders.

It’s hard to imagine 2020 not intensifying this trend, with almost every large tech company and even some lesser ones in the crosshairs of one or another government agency in both the US and the European Union (EU). Indeed, in the US, the notion of reining in big tech looms large in some political campaigns. The form that “reining in” may take is still hazy: is it merely about size? About scope of activities? About the appropriateness and societal value (or lack thereof) of some extraordinarily profitable business models? We don’t know. Europe is taking the lead, but the actions of the US (or lack thereof) will be critical.

I’ll skip the obvious about the rapid progress in AI and the fears that it will continue to engender. Most of the threats — like the loss of jobs and privacy, the “surveillance state,” intentionally or unintentionally biased algorithms entrusted with too much power over our lives, and, not least, the ability to microtarget lies and disinformation that have already cast shadows over elections — are not going away. Dealing with these justified fears requires attention from the best and brightest, not just technical people but the full range of social scientists: sociologists, political scientists, economists, psychologists, educators, and so on. We should expect 2020 to bring at least a bit of progress in this area.

However, there is, in my opinion, one dragon that can be slain: the fear of imminent artificial general intelligence (AGI) — capabilities pretty much like human intelligence, which of course we really don’t understand — lurking just around the corner1 and then morphing rapidly into artificial superintelligence that could, like HAL in Stanley Kubrick’s 2001: A Space Odyssey, decide that some or all of us humans are expendable. We need to note that it took 54 years for a computer to go from mastering checkers to mastering Go, even though the games differ only in scale and complexity. Both games offer the advantages of 100% available information and unambiguous goals and rules (just like real life, eh?) — so any worry about the quantum leap to AGI is a counterproductive distraction from the very real threats mentioned above.

One concern that businesses and governments need to address with much more urgency (and we hope they will do so in 2020) is cybersecurity. Attacks can come from hostile governments, criminals, and mischief-makers. Their targets can be companies, governments at all levels, and critical infrastructure. Several trends are colliding in a way that compounds long-standing risks:

  • Complexity. The more versatile a computer system is, the more complex it must inevitably be. Persuading a computer to do what it’s supposed to do and only that (i.e., debugging) is mind-bogglingly difficult. (If it were easy, we would not be getting the frequent mid-release updates delivering “bug fixes” and “security hole” patches from such software powerhouses as Apple and Microsoft.) This suggests that adding functionality just because we can is not a good design approach.

  • Artificial intelligence. A debugged traditional computer system doesn’t do anything we didn’t program it to do. AI is different; it learns to do things we didn’t tell it how to do. Sometimes it’s brilliant, like the unorthodox but legal and devastating move that felled the world Go champion. But that’s a game; the overarching rules — the guardrails — are built in. Without explicit guardrails, a machine could learn to do something that defies common sense. Are we confident that our teaching software will be flawless enough to prevent its electronic “students” from coming up with harebrained ideas that even Dilbert’s dimmest colleagues would have more sense than to devise? As the users of AI become ever less conversant with what’s inside the black boxes, how well will they recognize something crazy — in time?

  • Interconnectedness. The Internet, particularly the Internet of Things, keeps opening up ever more surface area for penetration by thieves and mischief-makers. It is hard enough — probably impossible in an absolute sense — to secure even an isolated network from intrusion. Now connect it to thousands or millions of other networks, remembering that a chain is only as strong as its weakest link. For a sobering overview of the kind of mischief that has already occurred, see “The Drums of Cyberwar,” Sue Halpern’s review of Andy Greenberg’s recently published Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers.2 The blithe assumption that more interconnection is per se beneficial is not only unjustified, it’s downright dangerous.

It will take some very clever technical work to address these issues; the required disciplines and techniques need at least as much priority as basic testing, which assumes that spurious input is simply inadvertent rather than deliberately crafted by very smart people to create problems. We need to give priority to:

  • Recognizing intrusions. Recognizing suspicious inputs will require subtle and sophisticated algorithms. These algorithms are not easy to design and build (AI can help), and they must be continuously improved to stay a step ahead of the would-be intruders. It is not easy to find the sweet spot between excessive and insufficient caution; the first sketch after this list illustrates the trade-off.

  • Isolating intrusions. Systems need the equivalent of white blood cells to isolate and neutralize spurious input. In cases where the intrusion has gone beyond the entry point and compromised the system, we need mechanisms to provide the equivalent of a containment vessel to isolate the compromised system from all its connections; the second sketch after this list shows the idea in miniature.
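
To make the recognition problem concrete, here is a minimal sketch in Python. It is a deliberately simplified illustration, not a production design; the class name, window size, and threshold are all hypothetical. It flags an input when a simple numeric feature, such as payload size, strays far from a rolling baseline, and its z_threshold knob is precisely the “sweet spot” trade-off: set it too low and false alarms proliferate; set it too high and intrusions slip through.

from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Flag inputs whose numeric feature (e.g., payload size)
    drifts far from a rolling baseline of recent traffic."""

    def __init__(self, window: int = 500, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # recent "normal" values
        self.z_threshold = z_threshold       # the "sweet spot" knob

    def is_suspicious(self, value: float) -> bool:
        if len(self.history) >= 30:  # wait until a baseline exists
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                return True  # flag it and keep it out of the baseline
        self.history.append(value)  # normal traffic updates the baseline
        return False

detector = AnomalyDetector()
for size in [512, 480, 530, 495, 510] * 10 + [98_304]:
    if detector.is_suspicious(size):
        print(f"suspicious payload size: {size}")  # fires on 98,304

Real attackers, of course, craft inputs that look statistically normal, which is why baselines like this must be layered with other signals and continuously retuned.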
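
The containment idea can be sketched just as simply. Again, the names and structure below are hypothetical illustrations, not a real product’s API; the principle is that the moment a node is judged compromised, every one of its connections is severed so it can be inspected and rebuilt offline.

class Network:
    """Toy model of interconnected systems, used only to illustrate
    cutting a compromised node out of the mesh."""

    def __init__(self):
        self.links = {}  # node name -> set of connected peer names

    def connect(self, a: str, b: str) -> None:
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def quarantine(self, node: str) -> set:
        """The 'containment vessel': drop every link to the node and
        return the severed peers for follow-up inspection."""
        peers = self.links.pop(node, set())
        for peer in peers:
            self.links[peer].discard(node)
        return peers

net = Network()
net.connect("billing", "inventory")
net.connect("billing", "iot-thermostat")
print("severed:", net.quarantine("iot-thermostat"))  # -> {'billing'}

In practice, the same effect comes from automatically revoking credentials and rewriting firewall or routing rules, and, as noted below, it has to happen in something close to real time.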

Compounding the difficulty of these tasks is the need to be able to execute algorithms and procedures in nanoseconds — a tall order that should inspire a bit of conservatism about how much functionality and connectivity we might want versus what we truly need.

A piece of possibly encouraging news, if much more interconnectedness is inevitable, is that Apple, Amazon, and Google have agreed to join other companies in a working group that will (they hope) settle on a set of standards for Internet-connected home products, making them compatible with each other and ensuring a certain level of security.3 Let’s hope for more such news in 2020.

On the discouraging side, a recent op-ed column by Josephine Wolff, a Tufts University Fletcher School professor of cybersecurity policy, catalogued the wholesale departure of US government experts concentrating on election security.4

As ever, a mixed bag.

References

1. I recently read a report from a scholarly symposium that blithely assumed AGI would arrive in the 2030s; see: Casey, Kevin. “5 AI Fears and How to Address Them.” The Enterprisers Project, 30 September 2019.

2. Halpern, Sue. “The Drums of Cyberwar.” The New York Review of Books, 19 December 2019.

3. “Amazon, Apple, and Google Joining Forces Could Be What Makes Smart Homes Happen.” MIT Technology Review, 19 December 2019.

4. Wolff, Josephine. “Cybersecurity Experts Are Leaving the Federal Government. That’s a Problem.” The New York Times, 19 December 2019.

About the Author
Paul Clermont
Paul Clermont is a Senior Consultant with Cutter Consortium’s Business Technology & Digital Transformation Strategies practice. He has been a consultant in IT strategy, governance, and management for 30 years. His clients have been primarily in the financial and manufacturing industries, as well as the US government. Mr. Clermont takes a clear, practical view of how information technology can transform organizations and what it takes to…