Privacy Issues and AI Forecast

Posted February 13, 2019

CUTTER BUSINESS TECHNOLOGY JOURNAL, VOL. 32, NO. 1

Cutter Consortium Senior Consultant Paul Clermont weighs in on the relentless privacy and security issues resulting from technological advancement. He emphasizes social networks’ misuse of private data and the need to address privacy-related basics, along the lines of the European Union’s General Data Protection Regulation. Clermont predicts that, in 2019, attention will be paid to the real and present threats posed by artificial intelligence (AI), such as AI replacing large numbers of the current workforce, the overuse of AI algorithms and associated ethical considerations, and the malicious use of AI by authoritarian governments.

Predictions about IT have typically focused on new offerings and what they can do for us, along with improvements in infrastructure that will make even better things possible. Over the past few years, I have avoided predicting specific technologies and focused instead on issues around privacy and security, suggesting they would become more prominent. Last year I called out the following points:

  • IT’s long honeymoon with the public, the press, and government has been threatened. Indeed, too many revelations of inadequately protected and inappropriately distributed and misused personal data have surfaced. The general public is wising up to the fact that the delightful “free services” from companies like Google and Facebook come at a cost: our privacy. The cavalier attitudes and tone-deaf responses from some tech executives have not aided their cause.

  • Rapid developments in artificial intelligence (AI) will take concerns about privacy and security to a whole new level.

Now, you can view the simultaneity of these developments as either fortunate or unfortunate, depending on your viewpoint. Why? Because people widely perceive AI in its totality — algorithms, machine learning, robots — as a potential threat to society far greater than any earlier form of IT. So at a time when the industry critically needs people to view it as a credible force for good (or at least not for evil), its once-good reputation has been severely tarnished.

It has become clear that most new technologies have serious downsides. Examples of unintended, supposedly unforeseen, and undesirable consequences — all of which could have been foreseen and even were foreseen (just not by the people who mattered) — include:

  • Social networks that only a few years ago seemed to empower people against despots have become tools of repression used by despots (e.g., fomenting genocide in Myanmar).

  • Democratizing the ability to mass-distribute content worldwide over the Internet so that anyone’s voice can be heard provides an unprecedented megaphone for cranks, bigots, mischief makers, and certifiable lunatics to spread not just fake news but even bizarre conspiracy theories and poisonous lies.

  • Algorithms that help connect people to music and fashions they like also help seal them inside informational echo chambers about current affairs, amplifying political divisiveness.

  • The amazingly handy access to online information is also access to time wasters, and for large numbers of people, screens seem to be addictive.

  • Access to educational material for children also means access to junk programming that ultimately turns kids into passive couch potatoes and, worse yet, offers them the opportunity to buy junk [1] that they can order on the spot. The news that many tech executives are strictly limiting their children’s access and screen time should be instructive.

  • Social networks that help people enrich their connections have, in the hands of adolescents, hugely amplified the voices of the nasty bullies we all remember from our own school years, who now harass their prey even to the point of suicide.

The Front Lines of Privacy

The poster child for this cavalier attitude is Facebook, whose largely unpublicized privacy-related issues with the US Federal Trade Commission go back to 2007, whose promises to change have gone largely unkept, and whose recent attempts to discredit critics have backfired. The once-plausible notion that the tech industry could and would effectively regulate itself now seems hopelessly naive.

The European Union (EU) is leading the way on the privacy front with its implementation of the General Data Protection Regulation (GDPR) last year. EU governments have also levied massive (10-figure) fines on much of Big Tech and beefed up their scrutiny of business models and practices. Whether and when the US will join in is a big question; public outcry will bump up against the pro-business slant of most parts of its government. Unfortunately, the lack of even a basic understanding of today’s technologies that lawmakers (not just the octogenarians!) have exhibited in hearings does not bode well for effective legislation.

It would be easy but wrong to think that all of Big Tech is in the same boat; Apple and Microsoft, for example, are not in the business of gathering and selling data about their users, and their leaders have spoken out on privacy. Apple CEO Tim Cook warned attendees at a conference in Brussels of a “data-industrial complex” in a call for comprehensive US privacy laws, adding that “our own information is being weaponized against us with military efficiency.”

Just days later, Microsoft CEO Satya Nadella characterized privacy as a human right in a similar London speech. This month, Shoshana Zuboff of the Harvard Business School published “The Age of Surveillance Capitalism,” which paints both Google’s and Facebook’s hoovering up and packaging of our data crumbs for lucrative sale as threats to a free society. [2] Moreover, the World Economic Forum’s annual meeting in Davos, Switzerland, included speeches from several national leaders, including German Chancellor Angela Merkel and Japanese Prime Minister Shinzō Abe, calling for increased governmental scrutiny and regulation of IT innovations. This drumbeat will only get louder, but it would be foolish to predict serious regulatory action in the US in 2019, though one could hope for a bipartisan push to address at least a few privacy-related basics, including:

  • Requiring opt-in consent for anything that might send unwanted advertising our way

  • Making it easy to drop out of a social network without leaving a trace

  • Making it easy to shield subsets of personal information, with substantial penalties for releasing it; some of these protections are already part of GDPR

Companies that conceived of their platforms as common carriers with no responsibility for the content they carried — a model appropriate for a phone conversation between two people but not for a medium that broadcasts any individual’s musings to the whole world — have had to scramble to weed out misinformation, outright lies, and hate-mongering. Facebook has corralled thousands of employees and contractors around the world to try to screen out bad and inappropriate material, but not surprisingly, it has proven difficult to promulgate effective guidelines. [3] Substantial progress in getting rid of most of the chaff while losing only a minimum of wheat will need to be demonstrated, or governments, particularly in Europe, may take drastic action. I will go out on a limb and suggest that before the year is out, some thought leaders or institutions may start questioning whether social networks need to fundamentally change their business (and profitability) models.

Coming to Terms with AI

Developments in AI in 2019 are more likely to center on debates about its threats than on specific technical innovations, though we should expect a strong push from Amazon and Google to extend the use of their intelligent assistants into the Internet of Things. Rather than killer robots and mind control, attention in 2019 will be paid to AI’s real and present threats:

  • AI will replace a large share of the current workforce if and when it can do the jobs better than people can, or at least as well and more cheaply. To suggest otherwise is a fantasy, a conclusion confirmed by private conversations with business leaders at Davos who look forward to massive staff reductions. Clearly, some form of lifelong education is called for, but 2019 may not bring much more than further admiration of the problem.

  • AI algorithms can be (and already are) overused where nuanced human judgment would add real value. AI can do very well at informing decisions, but making them is a different matter when those decisions can have a huge impact on people’s lives, such as hiring and extending loans for homes and small businesses. Recent attention to ethics is appropriate; it is extremely difficult to create algorithms that don’t reflect the personal, institutional, and cultural biases of their designers. We should expect to see broader acknowledgement of ethical issues [4] and early efforts to address them (e.g., Google’s AI principles and possibly laws and regulations demanding transparency [5]).

  • AI offers rich opportunities for malicious use [6]; for example, by authoritarian governments analyzing the copious data we generate without even thinking about it, such as our location. It is chilling to think of what the East German Stasi would have done with that type of data, and as more information exists only in electronic form, 1984 scenarios of governments literally rewriting inconvenient history and creating “deep fake” photos as “proof” become practical. However, possible 2019 developments in this area, other than awareness raising, would probably not go beyond restricting import and export of specific technologies, if that.

In sum, AI amplifies the demand for something much different from the laissez-faire approach that governments, particularly in the US, have taken to date.

One final thought: 2019 is likely to shape up as a year in which the news is less about what technology can do for us than about what it can do to us.

 

[1] This point echoes critics from the early years of television. In 1961, US Federal Communications Commission Chair Newton Minow characterized TV in the US as a “vast wasteland,” a quote that has never stopped being repeated — or relevant.

[2] A review, along with a Q&A with the author, can be found in “‘The Goal Is to Automate Us’: Welcome to the Age of Surveillance Capitalism.”

[3] Their efforts were described in a 27 December 2018 New York Times piece by Max Fisher, “Inside Facebook’s Secret Rulebook for Global Political Speech,” with the subhead “Under fire for stirring up distrust and violence, the social network has vowed to police its users. But leaked documents raise serious questions about its approach.”

[4] New York University is home to the AI Now Institute, which examines the social implications of AI. The Massachusetts Institute of Technology offers a course entitled “The Ethics and Governance of Artificial Intelligence,” and its Media Lab spawned the delightfully named Algorithmic Justice League.

[5] Zeynep Tufekci, a University of North Carolina professor and New York Times op-ed columnist, has an excellent TED Talk on this topic: “We’re Building a Dystopia Just to Make People Click on Ads.”

[6] An international consortium of institutions, including Oxford and Cambridge Universities, issued a comprehensive report in February 2018 entitled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

About The Author
Paul Clermont
Paul Clermont is a Cutter Expert. He has been a consultant in IT strategy, governance, and management for 40 years and is a founding member of Prometheus Endeavor, an informal group of veteran consultants in that field. His clients have been primarily in the financial and manufacturing industries, as well as the US government. Mr. Clermont takes a clear, practical view of how information technology can transform organizations and what it takes…