A number of the respondents wrote about cross-cutting themes, introduced novel ideas or shared thoughts that were not widely mentioned by others. This section features a selection of that material.
Questions about AI figure into the grandest challenges humans face
Michael G. Dyer, professor emeritus of computer science at UCLA, expert in natural language processing, argued, “The greatest scientific questions are:
- Nature of Matter/Energy
- Nature of Life
- Nature of Mind
“Developing technology in each of these areas brings about great progress but also existential threats. Nature of Matter/Energy: progress in lasers, computers, materials, etc., but hydrogen bombs with missile delivery systems. Nature of Life: progress in genetics, neuroscience, health care, etc., but the possibility of man-made deadly artificial viruses. Nature of Mind: intelligent software to perform tasks in many areas but the possibility of the creation of a general AI that could eliminate and replace humankind.
“We can’t stop our exploration into these three areas, because then others will continue without us. The world is running on open, and the best we can do is to try to establish fair, democratic and noncorrupt governments. Hopefully in the U.S., government corruption, which is currently at the highest levels (with nepotism, narcissism, alternate ‘facts,’ racism, etc.), will see a new direction in 2021.”
Was the internet mostly used in ethical or questionable ways in the past decade?
Seth Finkelstein, programmer, consultant and winner of the Electronic Frontier Foundation’s Pioneer Award, noted, “Just substitute ‘the internet’ for ‘AI’ here – ‘Was the internet mostly used in ethical or questionable ways in the last decade?’ It was/will be used in many ways, and the net result ends up with both good and bad, according to various social forces. I believe technological advances are positive overall, but that shouldn’t be used to ignore and dismiss dealing with associated negative effects. There’s an AI ‘moral panic’ percolating now, as always happens with new technologies. A little while ago, there was a fear-mongering fad about theoretical ‘trolley problems’ (choosing actions in a car-accident scenario). This was largely written about by people who apparently had no interest in the extensive topic of engineering safety trade-offs.
“Since discussion of, for example, structural racism or sexism pervading society is more a humanities field of study than a technological one, there’s been a somewhat better grasp by many writers that the development of AI isn’t going to take place outside existing social structures.
“As always, follow the money. Take the old aphorism ‘It is difficult to get a man to understand something when his salary depends upon his not understanding it.’ We can adapt it to ‘It is difficult to get an AI to understand something when the developer’s salary depends upon the AI not understanding it.’
“Is there going to be a fortune in funding AI that can make connections between different academic papers or an AI that can make impulse purchases more likely? Will an AI assistant tell you that you’re spending too much time on social media and should cut down for your mental health (‘log off now, okay?’), or that there’s a new controversy brewing and you’d better get clicking or you may be missing out (‘read this bleat, okay?’)?”
The future of work is a central issue in this debate
Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” observed, “Even if most major players in AI abide by ethical rules, bad actors using AI can have outsized effects on society. The ability to use deepfakes to influence political outcomes will be tested.
“What worries me the most is that the substitution of AI (and robotics) for human work will accelerate post-COVID-19. The political class, with the notable exception of Andrew Yang, is in total denial about this. And the substitution will affect radiologists just as much as meat cutters. The job losses will cut across classes.”
Intellectual product is insufficient to protect us from dystopic outcomes
Frank Kaufmann, president of the Twelve Gates Foundation, noted, “Will AI mostly be used in ethical or questionable ways in the next decade? And why? This is a complete and utter toss-up. I believe there is no way to predict which will be the case.
“It is a great relief that, in recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence. They cover a host of issues, including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and nonmaleficence, trust, sustainability and dignity. But, then again, there have also been the Treaty of Versailles and the literal tons of paper the United Nations has produced talking about peace, the Declaration of Human Rights and so forth.
“I am glad people meet sincerely and in earnest to examine vital ethical concerns related to the development of AI. The problem is that intellectual product is insufficient to protect us from dystopic outcomes. The hope and opportunity to enhance, support and grow human freedom, dignity, creativity and compassion through AI systems excite me. The chance to enslave, oppress and exploit human beings through AI systems concerns me.”
Technological determinism should be ‘challenged by critical research’
Bill Dutton, professor of media and information policy at Michigan State University, said, “AI is not new and has generally been supportive of the public good, such as in supporting online search engines. The fact that many people are discovering AI as some new development during a dystopian period of digital discourse has fostered a narrative about evil corporations challenged by ethical principles. This technologically deterministic good versus evil narrative needs to be challenged by critical research.”
Is it possible to design cross-cultural ethics systems?
Michael Muller, a researcher for a top global technology company focused on human aspects of data science and ethics and values in applications of artificial intelligence, wrote, “I am concerned about what might be called ‘traveling AI’ – i.e., AI solutions that cross cultural boundaries.
“Most AI systems are likely to be designed and developed in the individualistic EuroWestern cultures. These systems may be ill-suited – and in fact harmful – to collectivist cultures. The risk is particularly severe for indigenous cultures in, e.g., the Americas, Africa and Australia.
“How can we design systems that are ethical in the cultural worlds of their users – whose ethics are based on very different values from the individualistic EuroWestern traditions?”
Most people have no idea how limited and brittle these capabilities are
Steven Miller, professor emeritus of information systems at Singapore Management University, responded, “We have to move beyond the current mindset of AI being this special thing – an almost mystical thing. I wish we would stop using the term AI (though I use it a lot myself) and just refer to it for what it is – pattern-recognition systems, statistical analysis systems that learn from data, logical reasoning systems, goal-seeking systems. Just look at the table of contents of an AI textbook (such as ‘Artificial Intelligence: A Modern Approach,’ Stuart Russell and Peter Norvig, 4th edition, published 2020). Each item in the table of contents is a subarea of AI, and there are a lot of subareas. …
“There are ethical issues associated with any deployment of any engineering and technology system, any automation system, any science effort (especially the application of the science) and/or any policy analysis effort. So, there is nothing special about the fact that we are going to have ethical issues associated with the use of AI-enabled systems. As soon as we stop thinking of AI as ‘special,’ and to some extent magical (at least to the layman who does not understand how these things work, as machines and tools), and start looking at each of these applications, and families of applications, as deployments of tools and machines – covering both physical realms of automation and/or augmentation and cognitive and decision-making realms of automation and/or augmentation – then we can have real discussions.
“Years back, invariably, there had to have been many questions raised about ‘the ethics of using computers,’ especially in the 1950s, 1960s and 1970s, when our civilization was still experiencing the possibilities of computerizing many tasks for the very first time. AI is an extension of this, though taking us into a much wider range of tasks and tasks of increasing cognitive sophistication. …
“Now, of course, our ability to create machines that can sense, predict, respond and adapt has vastly improved. Even so, most laypeople have no idea of just how limited and brittle these capabilities are – even though they are remarkable and far above human capability in certain specific subdomains, under certain circumstances. What is happening is that most laypeople are jumping to the conclusion that, ‘Because it is an AI-based system, it must be right, and therefore, I should not question the output of the machine, for I am just a mere human.’ So now the pendulum has swung to the other extreme of the layperson assuming AI-enabled algorithms and machines are more capable (or more robust and more context-aware) than they actually are. And this will lead to accidents, mistakes and problems. …
“Just like there will be all types of people with all types of motives pursuing their interests in all realms of human activity, the same will be true of people making use of AI-enabled systems for automation, augmentation or related human support. And some of these people will have noble goals and want to help others. And some will be nefarious and want to gain advantage in ways others might not understand, and there will even be the extreme of some who purposely want to bring harm to others. We saw this with social media. In years, decades and centuries past, we saw this with every technological innovation that appeared, going back to the printing press and even earlier. …
“Let’s start getting specific about use cases and situations. One cannot talk in the abstract as to whether an automobile will be used ethically. Or whether a computer will be used ethically. Or whether biology as an entire field will be used ethically. One has to get much more specific about classes of issues or types of problems that are related to the usage of these big categories of ‘things.’”
AI ‘must not be used to make any decision that has direct impact on people’s lives’
Fernando Barrio, a lecturer in business law at Queen Mary University of London and an expert in AI and human rights, responded, “If ethical codes for AI are in place for the majority of cases by 2030, they will purport to be in the public good (which would seem to imply a ‘yes’ to the question as it was asked), but they will not result in public good. The problem is that the question assumes that the mere existence and use of ethical codes would be in the public good.
“AI, not as the singularity but as machine learning or even deep learning, has an array of positive potential applications, but it must not be used to make any decision that has a direct impact on people’s lives. In certain sectors, like the criminal justice system, it must not be used even in case management, since its inherent and unavoidable bias (either from the data or the algorithmic bias resulting from the system’s own selection or discovery of patterns) means that individuals and their cases are judged or managed not according to the characteristics that make every human unique but according to those characteristics that make each person ‘computable.’
“Those who propose the use of AI to avoid human bias, such as the bias judges and jurors might introduce, tend to overlook – let’s assume naively – that human biases can be challenged through appeals and can be made explicit and transparent. The bias inherent in AI cannot be challenged because of, among other things, the lack of transparency and, especially, the insistence of its proponents that the technology can be unbiased.”
Just as with drugs, without rigorous study prior to release, AI side effects can be dangerous
An internet pioneer and principal architect at a major technology company said, “AI is a lot like new drug development – without rigorous studies and regulations, there will always be the potential for unexpected side effects.
“Bias is an inherent risk in any AI system that can have major effects on people’s lives. While there is more of an understanding of the challenges of ethical AI, implicit bias is very difficult to avoid because it is hard to detect. For example, you may not discover that a facial-recognition system has excessively high false-recognition rates with some racial or ethnic groups until it has been released – the data to test for all the potential problems may not have been available before the product was released.
“The alternative is to move to a drug-development model for AI, where very extensive trials with increasingly large populations are required prior to release, with government agencies monitoring progress at each stage. I don’t see that happening, though, because it will slow innovation, and tech companies will make the campaign contributions necessary to prevent regulation from becoming that intrusive.”
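The disaggregated testing this respondent describes is simple to run; what is scarce is representative pre-release evaluation data. Below is a minimal sketch (in Python, with entirely hypothetical group labels and records, not drawn from any real system) of computing a facial-recognition system’s false-match rate separately for each demographic group from labeled verification pairs:

```python
from collections import defaultdict

def false_match_rate_by_group(results):
    """results: iterable of (group, system_said_match, truly_same_person).

    Returns, per group, the share of different-person pairs that the
    system wrongly declared a match (the false-match rate).
    """
    false_matches = defaultdict(int)
    impostor_pairs = defaultdict(int)  # pairs of genuinely different people
    for group, said_match, same_person in results:
        if not same_person:  # only different-person pairs can produce false matches
            impostor_pairs[group] += 1
            if said_match:
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in impostor_pairs.items()}

# Hypothetical evaluation records: (group, system said "match", truly same person)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
for group, rate in sorted(false_match_rate_by_group(records).items()):
    print(f"{group}: false-match rate {rate:.0%}")
```

A gap like the one this toy data produces (33% for one group vs. 67% for the other) is exactly what goes unnoticed when some groups are underrepresented in the evaluation set before release.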
Has AI been shifting the nature of thought and discourse and, if so, how?
A professor of urban planning noted, “Already the general ‘co-evolution’ of humanity and technology suggests that humans are not nearly as in control as they think they are of technology’s operations, much less its trajectory. While I am not an enthusiast of singularity speculations, there does seem to be a move toward AI stepping in to save the planet, with humans remaining useful to that project for a while, maybe in bright perpetuity.
“With wondrous exceptions, of course, humans themselves seem ever less inclined to dwell on the question of what is good, generally, or, more specifically, what is good about mindful reflectivity in the face of rampant distraction engineering.
“While one could worry that humans will unleash AI problems simply because it would be technically possible, perhaps the greater worry is that AI, and lesser technological projects, too, have already been shifting the nature of thought and discourse toward conditions where cultural deliberations on the more timeless and perennial questions of philosophy have no place. Google is already better at answers. Humans had best cultivate their advantage at questions. But if you are just asking about everyday AI assistance, look at how much AI underlies the autocomplete of a simple search query. Or, gosh, watch the speed and agility of the snippets and IntelliSense amid the keyboard experience of coding. Too bad.”
Can any type of honor code – ‘AI omerta’ – really keep developers in line?
Anthony Judge, editor of the Encyclopedia of World Problems and Human Potential, observed, “The interesting issue for me is how one could navigate either conclusion to the questions and thereby subvert any intention.
“We can’t assume that the ‘bad guys’ will not be developing AI assiduously to their own ends (as could already be argued to be the case), according to their own standards of ethics. AI omerta? Appropriate retribution for failing to remain loyal to the family? Eradicate those who oppose the mainstream consensus? What is to check against these processes? What will the hackers do?”
When there is no trust in others, people focus on self-interests to the detriment of others
Rebecca Theobald, assistant research professor at the University of Colorado-Colorado Springs, predicted, “AI will mostly be used in questionable ways in the next decade because people do not trust the motives of others. Articulate people willing to speak up give me the most hope. People who are scared about their and their families’ well-being worry me the most because they feel there is no other choice but to scramble to support themselves and their dependents.
“Without some confidence in the climate, economy, health system and societal interaction processes, people will become focused on their own issues and have less time and capital to focus on others. AI applications in health and transportation will make a difference in the lives of most people. Although the world is no longer playing as many geopolitical games over territory, corporations and governments still seek power and influence. AI will play a large role in that. Still, over time, science will win out over ignorance.”
To ensure it serves the public good, perhaps AI could be regulated like a utility
A director with a strategy firm commented, “AI, and the creators of AI, are most likely to be used by those in power to keep power. Whether to wage war or financial war or manage predicted outcomes, most AIs are there to do complex tasks. Unless there is some mechanism to make them serve the public benefit, they will further encourage winner-take-all outcomes.
“Regarding lives, let’s take the Tesla example. Its claim is that it will soon have Level 5 autonomy in its vehicles. Let’s assume that it takes a couple of years beyond that. The markets are already betting that: 1) It will happen, and 2) No one else is in any position to follow. If so, rapid scaling of production would enable fleets of robo-taxis; it could destroy the current car industry, as the costs are radically lower, and the same tech will impact most public transport, too, within five years.
“Technology-wise, I love the above scenario. It does mean, however, that only the elite will drive or have a desire to own their own vehicle. Thus, for the majority, this is a utility. Utilities are traditionally for the public good. It’s why in most countries the telephone system or the postal system was originally owned by the government. It’s why public transport is a local government service. We will not be well served by a winner-take-all transportation play! Amazon seems to be doing pretty well with AI. They can predict your purchases. They can see their resellers’ success and, at scale, simply replace them. Their delivery network is at scale and expected to also go autonomous. I can’t live without it; however, each purchase kills another small supplier, because economics eliminate choice – one has to feed oneself.
“As long as AI can be owned, those who have it or access to it have an advantage. Those who don’t are going to suffer and be disadvantaged.”
The biggest concerns involve ill-considered AI systems, large and small
Three respondents’ views:
Joshua Hatch, a journalist who covers technology issues, commented, “While I think most AI will be used ethically, that’s probably irrelevant. This strikes me as an issue where it’s not so much about what ‘most’ AI applications do but about the behavior of even just a few applications. It just takes one Facebook to cause misinformation nightmares, even if ‘most’ social networks do a better job with misinformation (not saying they do; just providing an example that it only takes one bad actor). Furthermore, even ethical uses can have problematic outcomes. You can already see this in algorithms that help judges determine sentences. A flawed algorithm leads to flawed outcomes – even if the intent behind the system was pure. So, you can count on misuse or problematic AI just as you can with any new technology. And even if most uses are benign, the scale of problem AI could quickly create a calamity. That said, probably the best potential for AI is for use in medical situations to help doctors diagnose illnesses and possibly develop treatments. What concerns me the most is the use of AI for policing and spying.”
A research scientist who works at Google commented, “I’m involved in some AI work, and I know that we will do the right thing. It will be tedious, expensive and difficult, but we’ll do the right thing. The problem will be that it’s very cheap and easy for a small company to not do the right thing (see the recent example of Clearview AI, which scraped billions of facial images in violation of websites’ terms of service and created a global facial-recognition dataset). This kind of thing will continue. Large companies have incentives to do the right thing, but smaller ones do not (see, e.g., Martin Shkreli and his abuse of pharma patents).”
A research scientist working on AI innovation with Google commented, “There will be a mix; it won’t be wholly questionable or wholly ethical. Mostly, I worry about people pushing ahead on AI advancements without thinking about testing, evaluation, verification and validation of those systems. They will deploy them without requiring the types of assurance we require in other software. Regarding global competition, I worry that U.S. tech companies and workers do not appreciate the national security implications.”