
Trustworthiness: A Mouthful That Shouldn’t Leave a Bad Taste

Posted February 6, 2020 in Business Technology & Digital Transformation Strategies, Data Analytics & Digital Technologies, Cutter Business Technology Journal

CUTTER BUSINESS TECHNOLOGY JOURNAL  VOL. 33, NO. 1
 

Cutter Consortium Senior Consultant Claude Baudoin addresses the issue of trust, or mistrust, in the information we rely on to stay informed or to make decisions. He writes, “This article is not a definitive proposal to achieve the elusive goal of knowing what we can trust, but rather a set of perspectives and considerations to justify the urgency of addressing this issue.” Some reasons for our untrusting mindsets include “deep fakes,” voting system breaches, bias in decision algorithms, unknown sources of email, insufficiently secured IoT systems, and robocalls.

“What can you believe?” has recently become a visible social issue, one that is critical to the proper functioning of our societies. Establishing a degree of confidence in information has also become a complex technical issue, one that may drive a number of changes in the fabric of IT. This article is not a definitive proposal to achieve the elusive goal of knowing what we can trust, but rather a set of perspectives and considerations to justify the urgency of addressing this issue.

Trustworthiness is defined as “the ability to be relied on as honest or truthful.”1 The word appeared in English in 1820, reached its first peak of popularity in 1890, declined until 1960, then went back up the charts and is still increasing in frequency,2 for reasons with which readers should be all too familiar:

  • Technology allows the almost undetectable falsification of information, including “deep fakes” that place people and things within a textual, graphical, or video context of which they were never part.

  • The most pessimistic among us fear that there are essentially no longer any channels of information that can be trusted for their honesty and objectivity. Most mass media outlets are accused by one side or another of having become biased or virtual instruments of propaganda, even in democratic countries.

  • Social media outlets have replaced newspapers, radio, and television as the main sources of news for many of us, but this further weakens the notion of trust. Some of us live in echo chambers that thrive on the “confirmation bias”3 well known to psychologists.

  • We are losing faith in the idea of election results representing the true will of the majority “thanks” to a combination of cyberattacks, the influence of fake news on the electorate, instances of voter suppression, and antiquated voting systems.

  • We often do not know the source of the data used to make momentous decisions. In the medical world, for example, we know that certain populations are underrepresented in research studies, including women and ethnic minorities, which risks leading to standards of care that are inappropriate for those populations.

  • We do not know if artificial intelligence algorithms have been trained using unbiased data. We know, for example, that when using facial recognition to identify crime suspects, the rate of false positives is much higher for people of color than for white people; one simple way to measure such a gap is sketched just after this list.

  • We can no longer trust the origin of an email, since the Internet protocols, which date back 50 years, do not ensure the authenticity of that information. Hence the plagues of spamming and phishing, and the occasional overreaction that causes us to ignore legitimate messages that were incorrectly filtered out as spam or junk.

  • Many of us no longer answer phone calls from unknown numbers, since robocalls often represent over 50% of the calls we receive. Some are just annoying, while others are criminal attempts to extort money from naive recipients, including the elderly or simply uninformed people.
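
To make the algorithmic-bias concern concrete, here is a minimal sketch in Python of one common way to quantify such a gap: comparing false-positive rates across demographic groups. The group labels and numbers are invented for illustration; they do not come from any real benchmark.

    from collections import defaultdict

    def false_positive_rates(records):
        """Compute the false-positive rate per demographic group.

        Each record is (group, predicted_match, actual_match).
        """
        fp = defaultdict(int)   # predicted a match where there was none
        neg = defaultdict(int)  # all records with no actual match
        for group, predicted, actual in records:
            if not actual:
                neg[group] += 1
                if predicted:
                    fp[group] += 1
        return {g: fp[g] / neg[g] for g in neg}

    # Invented data: a trustworthy system would show similar rates across
    # groups; a large gap is a red flag that calls for investigation.
    sample = ([("group_a", True, False)] * 8 + [("group_a", False, False)] * 92
              + [("group_b", True, False)] * 1 + [("group_b", False, False)] * 99)
    print(false_positive_rates(sample))  # {'group_a': 0.08, 'group_b': 0.01}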

The latest scary scenarios come from the proliferation of insufficiently secured Internet of Things (IoT) systems. According to the director of the US Cybersecurity and Infrastructure Security Agency (CISA), “CISA is currently aware of a system that controls water pumps, one controlling an oil and natural gas facility, and one controlling emergency management equipment that can be accessed without a password and modified by anyone with an internet connection.”4 Now, consider this:

  • The agency is unable to locate the owners of those systems without issuing a subpoena to the ISPs responsible for the IP addresses where those systems were detected.

  • Open source software development is a great thing; it has made millions of lines of code, largely developed and donated by organizations and improved by volunteer experts, available to others for free. But it is also a huge opportunity to inject malware into hundreds of critical systems. This vulnerability led the National Telecommunications and Information Administration (NTIA), a branch of the US Department of Commerce, to launch, in 2019, an effort to develop an approach to document the pedigree of software — a Software Bill of Materials (SBOM) that would allow for the traceability of a security vulnerability up and down the genealogy tree for the code.
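
The SBOM concept is easier to picture with a toy example. The sketch below is purely illustrative (the component names and versions are invented): it models a bill of materials as plain data and walks the dependency tree so that a vulnerability disclosed in one component can be traced to every product that embeds it.

    # Toy model of a Software Bill of Materials (SBOM); names and versions
    # are invented for illustration, not taken from any real advisory.
    SBOM = {
        "pump-controller 5.1": ["field-gateway-fw 2.3", "ui-kit 3.0"],
        "field-gateway-fw 2.3": ["netstack 1.4", "tls-lib 0.9"],
        "netstack 1.4": [],
        "tls-lib 0.9": [],
        "ui-kit 3.0": [],
    }

    def affected_products(sbom, vulnerable):
        """Return every entry whose dependency tree contains the component."""
        def depends_on(name):
            deps = sbom.get(name, [])
            return vulnerable in deps or any(depends_on(d) for d in deps)
        return {p for p in sbom if p == vulnerable or depends_on(p)}

    # A flaw disclosed in tls-lib 0.9 propagates up the genealogy tree:
    print(affected_products(SBOM, "tls-lib 0.9"))
    # {'tls-lib 0.9', 'field-gateway-fw 2.3', 'pump-controller 5.1'} (order may vary)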

By now, you should get the idea that trustworthiness is a key challenge to the integrity of just about everything that happens around us and to us — of the systems we use, of our electricity supply, of our forms of government, of our financial security, of the decisions that lead to war or peace, and of our reactions to worldwide existential threats such as climate change or epidemics.

So, what’s next? Unfortunately, if history repeats itself, the situation will get worse before it gets better. But that isn’t reassuring.

Part of the answer must lie in the traceability of information as it propagates, as hinted at in the proposed SBOM, discussed above. Let’s now imagine a world in which:

  • When the phone rings, I can be certain of who is calling me. No more “Michael” with a thick South Asian accent calling me from the “Windows Security Center” to demand access to my laptop. Telecom companies could make it happen; we need to force them to do so.

  • When an email arrives, I can be certain of the identity of the author. Goodbye, Nigerian customs officials. Now, this would pose a risk for people in countries where freedom of expression is not guaranteed. So, there also needs to be a reliable and optional anonymization method that can only be used among consenting parties and cannot be (easily) subverted by scammers. Tor5 (the anonymity network project) on steroids, so to speak.

  • When I see an image or video, I can query its provenance or pedigree. When was the image taken or the video made? Was it edited, and by whom? What was the chain of custody between the camera and me? Goodbye, fake news about world leaders cozying up with enemies. Again, in some cases we need to protect anonymous sources, and therein will lie a key technical difficulty.

  • When I see a quote attributed to a person, I can click on it and find the original source as well as corroborating evidence.

  • When I vote, I can trust that my ballot is correctly counted, as well as those of my fellow citizens, perhaps using a blockchain-enabled voting system. Again, we need to ensure that this doesn’t allow people to buy votes. If a voter’s choices can be verified after the fact, the linking of voter and vote enables bribery. That’s another contradiction to be addressed; the tamper-evidence half of the idea is sketched just after this list.

  • Standards are developed to test machine learning algorithms for the absence of bias, especially gender, racial, and other demographic or sociological bias.

  • Identification, authorization, and access control mechanisms are systematically used to know where IoT data comes from, including measurements from sensors and commands sent to actuators.

  • All software is delivered with a bill of materials, so that when a vulnerability is identified, the owners and users of all affected systems can be alerted. Not all vulnerabilities are executable in a given system, but they are, at minimum, a time bomb that must be defused.
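
Returning to the voting item above: the following minimal sketch, illustrative only and deliberately ignoring the voter-anonymity contradiction, shows the tamper-evidence property a blockchain-style ledger would bring. Each ballot record commits to all earlier ones through SHA-256, so altering any record breaks verification.

    import hashlib

    def chain(records):
        """Chain records so each entry commits to everything before it."""
        ledger, prev = [], "0" * 64
        for rec in records:
            digest = hashlib.sha256((prev + rec).encode()).hexdigest()
            ledger.append((rec, digest))
            prev = digest
        return ledger

    def verify(ledger):
        """Recompute the chain; any altered record makes this fail."""
        prev = "0" * 64
        for rec, digest in ledger:
            if hashlib.sha256((prev + rec).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

    ledger = chain(["ballot:A", "ballot:B", "ballot:A"])  # invented ballots
    print(verify(ledger))                   # True
    ledger[0] = ("ballot:B", ledger[0][1])  # tamper with the first record
    print(verify(ledger))                   # False: the chain no longer verifies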

What will it take to get to this ideal world? Probably a multipronged approach, which will require several years, if not the entire decade that is now opening:

  • Suppliers of systems must be more responsible about their products. Market forces as well as regulation may be needed to apply the necessary pressure.

  • Buyers need to ask tough questions before something blows up in their hands (figuratively and, in the worst case, literally).

  • Legislators must educate themselves about security technology.

  • Democratic governments need to take the rebuilding of public trust in information seriously.

  • The “fourth estate” — the media — must self-regulate or face regulation.

Technology will play a role — including public key infrastructure, blockchain, a redesign of Internet protocols, and more. But this is not just a technology challenge. It is a fundamental reexamination of an age-old question: “Can I trust you?”
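
To ground the public key infrastructure point, here is a minimal sketch of the signing half of that idea, assuming the third-party Python cryptography package purely as an illustration: the sender signs a message, and anyone holding the sender’s public key can detect tampering. A full PKI adds certificates that bind that public key to a verified identity.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The sender generates a key pair and signs a message.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    message = b"Quarterly results attached."
    signature = private_key.sign(message)

    # Any recipient holding the public key can check origin and integrity.
    try:
        public_key.verify(signature, message)
        print("Signature valid: the message is authentic and unmodified.")
    except InvalidSignature:
        print("Signature check failed.")

    # Changing even one character breaks verification.
    try:
        public_key.verify(signature, b"Quarterly results attached!")
    except InvalidSignature:
        print("Tampering detected.")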

References

1 “Trustworthiness.” Lexico, 2020.

2 “Trustworthiness, 1800-2008.” Google Books Ngram Viewer, 2013.

3 Cherry, Kendra. “How Confirmation Bias Works.” Verywell Mind, 27 November 2019.

4 “Senate Bill Would Give CISA Limited Subpoena Power.” MeriTalk, 17 December 2019.

5 “Tor Project: History.” Tor Project, 2020.

 

About The Author
Claude Baudoin
Claude Baudoin is a Senior Consultant with Cutter Consortium's Business & Enterprise Architecture and Data Analytics & Digital Technologies practices. He is a proven leader and visionary in IT and knowledge management (KM) with extensive experience working in a global environment. Mr. Baudoin has 35 years' experience and is passionate about quality, knowledge sharing, and providing honest and complete advice.