Thoughts on AI’s Potential for Social and Economic Disruption
The ultimate success of artificial intelligence (AI) will depend on how well society is prepared for its widespread application. This, of course, raises a slew of social, economic, legal, privacy, and ethical issues and challenges that must be seriously studied and addressed. Indeed, according to a survey we conducted on the adoption of AI, the majority (62%) of participants believed, to some degree, that greater use of AI will cause, or could cause, significant social and economic disruption.
Consider loss of employment. The AI industry and its proponents have tended to gloss over the potential for widespread job loss due to greater AI adoption. Proponents say that AI will create many more jobs than it eliminates. But even if that eventually proves to be the case, the threat of workforce disruption due to intelligent automation remains.
Certainly, AI will lead to the creation of new companies based on new business models and new types of employment, new job categories, and new ways of working — as the industry touts. But does anyone believe that the bulk of the workers whose jobs today involve performing more menial tasks will get hired to fill these new jobs? And how fast will the old jobs be replaced as more organizations implement AI automation?
I have doubts as to whether AI automation will proceed as slowly as the adoption of technologies has in the past. Certainly, the old workforce will need to be retrained to acquire the skills these new jobs require. Moreover, AI will dramatically increase the demand for job candidates with “knowledge worker” skills, and such training takes considerable time and a serious commitment that, due to budgetary and other constraints, I’m not sure many organizations will be able to make.
It’s important to consider the types of jobs AI will automate over the long term. Organizations currently are using AI to automate jobs built around repetitive tasks — the so-called “low-hanging fruit.” But as the technology advances, AI applications will inevitably have an increasingly greater impact on more skilled forms of employment. For example, lab technicians (medical imaging systems), project managers (AI-powered project management tools), programmers (ML-based software development tools), and even doctors will eventually see their fields affected by AI-based diagnostic and advisory systems.
Please understand that I am not saying that AI will replace all lab technicians, project managers, programmers, or physicians. But it is not hard to foresee a time when organizations will not need to have as many of these positions on staff because such work will be supplemented by intelligent machines.
There are also issues of transparency and fairness to consider as AI systems become more widely deployed. Neural networks and other machine learning (ML) models are trained on large amounts of historical data. So how do you ensure that AI models and algorithms do not end up encapsulating existing biases pertaining to race, sex, gender, sexuality, and other characteristics?
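As one illustration of how such encoded bias can be surfaced, the sketch below audits the decisions of a hypothetical hiring model against the “four-fifths rule” used in US employment law, which flags a selection-rate ratio below 0.8 as potentially discriminatory. The data, group labels, and decisions are invented for the example; a real audit would use the model’s actual outputs and protected-attribute data.

```python
# Hypothetical bias audit: compare a model's approval rates across groups.
# All data below is invented purely for illustration.

def selection_rates(decisions, groups):
    """Approval rate (fraction of 1s) per demographic group."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def disparate_impact(decisions, groups):
    """Ratio of the lowest to the highest group selection rate.
    The 'four-fifths rule' treats ratios below 0.8 as a red flag."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy outputs from a hypothetical hiring model: 1 = advance, 0 = reject.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

print(selection_rates(decisions, groups))   # group "a" ~0.67, group "b" ~0.33
print(disparate_impact(decisions, groups))  # 0.5 — well below the 0.8 threshold
```

A check like this only detects one narrow form of unfairness (unequal selection rates); it says nothing about why the disparity arises or whether error rates also differ across groups, which is why fairness auditing in practice involves multiple metrics.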
These are serious considerations for a broad range of AI applications across many industries and domains, including face recognition, voice recognition, and other ID systems; prison sentencing applications; HR hiring platforms; college admissions; and credit and lending platforms. (For more on these issues, see “Transparency and Fairness in AI Systems, Part I: The Problem.”)
Considerations of accuracy and privacy have taken on increasing significance with the widespread use of AI. This is especially concerning when it comes to facial recognition. Most people do not have a problem with using facial recognition to identify criminals. But facial recognition systems can be highly inaccurate — particularly when identifying women and people of color — resulting in citizens being accused of, or arrested for, crimes they did not commit.
Privacy issues also run rampant with facial recognition. This is because the technology offers governments or corporations the ability to identify and track individuals on a widespread basis. Consequently, we are seeing considerable backlash against the technology in the US and other Western countries.
Over the last six months or so, cities including San Francisco and Oakland, California, and Somerville, Massachusetts, have enacted legislation banning or restricting the use of facial recognition. In August, the Swedish Data Protection Authority fined a municipality in northern Sweden approximately €19,000 for the unapproved use of facial recognition technology to monitor the attendance of students in school. And just last month, California passed legislation imposing a three-year ban on police use of facial recognition embedded in officer-worn body cameras. As the saying has it: as California goes, so goes the nation.
Concerns about privacy and facial recognition have even spawned software designed to mask the identities of individuals appearing in photos and videos from the online facial detection systems used by social media platforms such as Facebook. For example, D-ID offers a solution that protects an organization’s photos and videos from face recognition while leaving them looking the same to the human eye. D-ID’s approach applies digital manipulations that render images unreadable by the ML tools used to identify individuals, yet are imperceptible to human viewers.
I’ve focused on facial recognition in this Advisor because the market for such technology is taking off and because it is currently being singled out for the potential harm it could bring to society in general. However, other recognition technologies, such as voice recognition, gait detection, and emotion detection, have similar issues associated with their deployment.
I think emotion detection, in particular, will become the next AI technology to be singled out by AI opponents. Already we are seeing companies use such technology to gauge the emotional responses of prospective job candidates during online job interviews. Eventually, emotion detection algorithms will become part of other, non-HR applications, such as law enforcement and financial lending systems.
Finally, the big question that I believe isn’t being asked enough, or answered satisfactorily, is this: what constraints should be placed on all the data generated or accumulated through the use of these technologies? The images and associated data from these various recognition technologies are stored in databases. When you ponder what happens as this recognition data is combined with data from other applications, including data gleaned from our social media, mobile (including location), shopping, Internet browsing, customer loyalty, frequent flyer, auto usage, financial, and census activity, as well as from Alexa devices, smart TVs, and so on, it is easy to see why privacy advocates are alarmed.
The potential for social and economic disruption from the widespread adoption of AI will increasingly become a hot-button issue for AI developers, end-user organizations, governments, and politicians over the coming years.
No doubt, widespread adoption of AI will lead to the demise of old industries and the rise of new ones. But the extent of this impact is difficult to determine, and therein lies the problem. New regulations governing AI’s usage are almost certain to arise to reduce the fear that some have regarding its possible negative effects on employment and society in general. This will be a tricky balance, however: getting the rules right without stifling innovation requires good vision, good leadership at both the organizational and governmental levels, and restraint.
I’d like to get your opinion on the potential for AI to cause serious social and economic disruption. Do you think these issues are valid concerns or overblown? You can comment at the link below, email me at email@example.com, or call +1 510 356 7299 with your comments.