Trust: Is IT the Problem or the Solution? — Opening Statement

Posted June 5, 2020 | Leadership | Technology | Amplify

We are living through complex and worrying times. Much of the world, as I write this, is dealing with a pandemic that has upset our daily lives in a way that is unparalleled in recent history, except in countries that were literally, not just metaphorically, at war or undergoing a revolution. At the same time, old demons of our societies, never completely slain, have reemerged to haunt us: populism, racism, intolerance….

As we fight through this and eventually come out on the other side to whatever will be the “new normal,” a word has increasingly made its way into the daily discourse of business and technology leaders, as well as of politicians (at least those who know how to spell it): trustworthiness. Some things are noticed only when we miss them; trustworthiness is one of those. It is interesting to delve into how we got to this situation; the role that information technology has been playing in the erosion of trustworthiness; and how it, like many double-edged tools, might help solve the very problems it has helped create.

Our notion of trustworthiness has been evolving for several centuries. This evolution has accelerated recently, and this is clearly attributable to technology. “Trust” initially referred only to trust in people. Then we had to trust certain institutions — rulers, governments, banks — to do what they had promised to do for us. What is paper money if not a testament to the trustworthiness of a central bank? The technology of the First Industrial Revolution added to that meaning, when the trustworthiness of the machine designer and supplier became important. Still, in a world of tangible objects, inspecting a machine before putting it into service was relatively easy. When the automobile appeared, it was a whimsical, inconsistent machine whose driver needed to have some of the skills of a mechanic to be able to trust that driver and machine would reach the intended destination. Yet although an untrustworthy car could leave you stranded by the roadside, it was unlikely to start a conflict or change the result of an election.

Interestingly, early forms of communication were also fraught with trustworthiness issues. Much of that was rooted in the ambiguities of human languages, and in their sheer multiplicity. The trustworthiness of a translation was a huge issue, and at least one war started in that manner.1

Enter IT. For several decades, all was well, because IT did not touch us personally all that much. It was easy enough to verify that our computer-generated paycheck showed the right numbers, and that was about it. Isaac Asimov certainly brought up the trustworthiness of robots in his writings and, in 1942, even formulated his Three Laws of Robotics. But despite the philosophical import of his prose, his ideas remained simply fiction. It is only when email, the World Wide Web, and the computerization and automation of a growing number of activities became part of our lives that the issue of whom and what we can trust exploded. That shift into uncertainty has only grown since then.

To a large extent, we naturally mistrust what we do not understand. The fact that software developers use voluminous and arcane code to automate things — and that even other professionals have a hard time deciphering what the code means and validating that it cannot produce ill effects — is enough to explain the loss of trustworthiness. This is well illustrated by the old joke of the real-time software engineers who, upon boarding an early Airbus plane, hear the captain announce, “Today, this will be an entirely fly-by-wire experience.” Upon which they hastily disembark. So even without considering any malicious intent, complex systems already stretch our ability to trust them. Is the code bug-free? Certainly, we know that beyond a few hundred or perhaps a few thousand lines, no code is completely bug-free, and that real-world systems contain millions of lines. Has the supplier performed all the necessary tests? This is an almost impossible feat, as so many combinations of conditions would have to be tested, and the requirements against which a system is tested can be riddled with ambiguity. Was this measurement supposed to be in imperial or metric units (as was the cause of the Mars Climate Orbiter failure2)? Losing a spacecraft is one thing, but what if an entire country was plunged into a blackout because of untrustworthy data in the control systems of the national electric grid?
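The Mars Climate Orbiter failure is worth a brief technical aside: one piece of software reported thruster impulse in pound-force-seconds while another expected newton-seconds, and nothing in the interface flagged the mismatch. A minimal sketch of a defensive remedy — tagging every value with its unit so that silent mixing becomes impossible — might look like this (the class and unit labels are hypothetical, purely for illustration):

```python
# Sketch: unit-tagged values that refuse to mix unit systems silently.
LBF_S_TO_N_S = 4.448222  # 1 pound-force-second expressed in newton-seconds

class Impulse:
    """A thruster impulse normalized internally to newton-seconds."""

    def __init__(self, value: float, unit: str):
        if unit == "N*s":
            self.n_s = value
        elif unit == "lbf*s":
            self.n_s = value * LBF_S_TO_N_S  # convert at the boundary
        else:
            raise ValueError(f"unknown unit: {unit}")  # no silent guessing

    def __add__(self, other: "Impulse") -> "Impulse":
        return Impulse(self.n_s + other.n_s, "N*s")

# Ground software reports imperial units; flight software uses metric.
ground = Impulse(1.0, "lbf*s")
flight = Impulse(4.448222, "N*s")
total = ground + flight  # both converted, so the sum is meaningful
```

The point is not this particular class but the design choice: conversions happen once, at a declared boundary, and an unrecognized unit fails loudly instead of corrupting a trajectory.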

In the 20 years since the Mars Climate Orbiter crashed, we have become, individually and collectively, dependent on information systems at a much deeper level. Instead of a few reasonably balanced news channels, we now have a choice of hundreds of electronic “echo chambers” in which we only receive news selected according to the biases of like-minded people. We do not know how many of the posts we read on Facebook or Twitter were generated by Russian bots. The majority of us who live in democratic countries are no longer sure that the often abysmally insecure technology of electronic voting machines has not been subverted. We know that this stash of money from an unknown Nigerian official is fake, but what about this offer for a shopping coupon at my pharmacy? Did this politician really say these things, or is it a deep fake? And speaking of “fake,” how do I know what is real or not, now that some people label as “fake news” anything that does not agree with their opinions, while what they praise as newsworthy (mostly because it flatters them) is almost surely false?

Artificial intelligence is now adding yet another twist to this story: How do we know why a neural network denied a loan application or confused the face of a person with that of a terrorist? Do we know whether such misidentification occurs at the same frequency for people of different ethnicities? Who wrote this software, and which data set did they use to train it? Is placing a human in the loop likely to improve or degrade the trustworthiness of the system? A loan officer might look at a strange rejection recommendation, question it, and redo some calculations by hand, but he might also be prejudiced against certain applicants. A human driver might override the controls of an autonomous car that is going to run over a misclassified pedestrian, but we also know that many airplane crashes were caused by pilots ignoring the warnings of their cockpit instruments. Who (or what) should be trusted more?

I addressed some of the above points at greater length a few months ago in a Cutter Business Technology Journal (CBTJ) article I called “Trustworthiness: A Mouthful That Shouldn’t Leave a Bad Taste.”3 But it seemed too important a subject, in these times of uncertainty and confusion, to leave it at that without seeking the opinion of a broader panel of experts. Hence this issue of CBTJ, for which we asked the question: “Is IT the problem or the solution?” In other words, while IT has created the conditions, the products, and the insecure protocols that permit the problems listed above, can IT also be used to counter these threats? For example:

  • Internet protocols and telephone caller ID could be updated to prevent spoofing.

  • Voting systems could be designed with redundancies and paper trails to permit verification of the counts, without creating a risk of vote buying (which is one of the unintended consequences of primitive paper trail systems).

  • “Provenance and pedigree” standards might be used to create a tamper-proof trail of where news items, photographs, videos, data sets, or software come from.

  • Internet of Things (IoT) sensors could be required to “sign in” to their networks using cryptographic methods and to send data only in encrypted form, to protect critical infrastructure and assets from industrial spying or hacker attacks.
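To make the last bullet concrete, here is a minimal sketch of a sensor authenticating its readings to a gateway. For brevity it assumes a symmetric pre-shared key and message authentication (HMAC) rather than the asymmetric keys and certificates a real deployment would typically use; the device name and key are hypothetical:

```python
import hashlib
import hmac
import json

# Assumption: a per-device secret provisioned at manufacture time.
SHARED_KEY = b"provisioned-at-manufacture"

def sign_reading(reading: dict) -> dict:
    """Attach an authentication tag so the gateway can detect spoofing."""
    payload = json.dumps(reading, sort_keys=True).encode()  # canonical form
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "mac": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag; constant-time comparison resists timing attacks."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign_reading({"sensor": "temp-01", "celsius": 21.5})
assert verify_reading(msg)          # authentic reading is accepted
msg["payload"]["celsius"] = 99.9    # an attacker alters the data in transit
assert not verify_reading(msg)      # tampering is detected and rejected
```

This shows authentication only; confidentiality (the “encrypted form” in the bullet) would layer encryption on top, and key distribution is the hard part that PKI-style approaches address.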

Clearly, technology solutions are not the only things we need to restore the needed sense of trust in systems and information, especially when it comes to news and social media. At a minimum, it seems that we require a new regime of checks and balances that covers a whole range of qualities, such as reliability, resiliency, visibility of provenance, safety, security, privacy, and absence of bias. These checks and balances must be put in place by well-trained and ethically minded humans, working for organizations that respect and protect their independence. The all-too-common practice of dismissing the warnings of someone who says “we haven’t tested this system enough, so we shouldn’t release it” should be banned. We need the guarantors of correct systems design to attain the same level of training and professional certification as other experts, to maintain the same unblemished records and accountability as other esteemed professionals, and to hold the same exalted level in our society as judges hold — and usually deserve.

In This Issue

This issue’s contributors have addressed the question of trustworthiness from a variety of angles. Each article offers a significant contribution to the challenge of restoring and maintaining trust.

In our first article, Cutter Consortium Senior Consultant and frequent contributor Paul Clermont uses his well-known “straight talking” style to paint a clear picture of the “broad scope of threats” we are facing. He uses anthropological analogies to explain the “circles of trust” we use in deciding what to believe. Clermont doesn’t shy from the potential conflicts among information transparency, privacy, and intellectual property. He then looks at the proper role of governments in creating the frameworks and standards that can help improve trustworthiness.

Next, Philippe Flichy tells us that there are three complementary facets we need to consider, particularly in an industrial environment: trusting the data, a challenge made more difficult by the emergence of IoT, digital transformation, and cyberattacks; trusting the tools, for example, the machine learning algorithms whose innards are, almost by design, largely inscrutable; and trusting the people, given the new work practices of the pandemic era.

David Tayouri then brings us the perspective of the Israeli defense environment, justly famous for its leadership in cybersecurity. For Tayouri, the combination of biometrics, asymmetric cryptography (think “PKI”), and blockchain can help construct a strong authentication and authorization environment, which is crucial to, in his words, “reconstruct virtual trust.”

Following along the same “technology as the solution” line of thought but with the added twist of putting a human in the loop, a team of eight coauthors led by Greek academic Panagiotis Monachelis proposes to combine peer-to-peer decentralized networks and blockchain technology to address the challenge of misinformation in social media. The authors provide a detailed description of an architecture, embodied in their research project called EUNOMIA, that allows end users to review posts and feed a secure voting system.

Finally, Robert A. Martin addresses the complete ecosystem involved in the procurement of products and services. What does it mean to trust that what you buy, and the organizations that sell to you, meet all the conditions required to merit your trust? Martin describes the elements of a system of trust for supply chain security that is currently under development and is based on collecting information from a wide community of procurement departments and standards organizations.

Even if all the ideas presented by this issue’s authors are implemented, serious challenges will remain. One is the tension between trust and anonymity, when the latter is required — in particular, to protect whistleblowers or opponents of authoritarian regimes. The other is the fact that society and its actors (politicians, media, product or service suppliers, and consumers) do not change as quickly as the technology. Levels of trust that have been destroyed in just a few years may take decades to rebuild. But we can be thankful to our authors for pointing us toward several useful building blocks of the solution.


1The “Ems Dispatch” started the Franco-Prussian War of 1870-1871, at least in part, because the translation of the German word “Adjutant” into the French “adjudant” reinforced a sense of insult among the French government and populace; see Wikipedia’s “Ems Dispatch.”

2See Wikipedia’s “Mars Climate Orbiter: Cause of Failure.”

3Baudoin, Claude. “Trustworthiness: A Mouthful That Shouldn’t Leave a Bad Taste.” Cutter Business Technology Journal, Vol. 33, No. 1, 2020.


About The Author
Claude Baudoin
Claude Baudoin is a Cutter Consortium Expert and a member of Arthur D. Little's AMP open consulting network. He is a proven leader and visionary in IT and knowledge management (KM) with extensive experience working in a global environment. Mr. Baudoin has 35 years' experience and is passionate about quality, knowledge sharing, and providing honest and complete advice.