This section covers three themes that emerged in the answers from respondents who expect that the level of human agency – individuals’ control of their tech-abetted activities – will improve by 2035:

  • Humans and tech always positively evolve: The natural evolution of humanity and its tools and systems has always worked out to benefit most people most of the time. Regulation of AI and tech companies, refined design ethics, newly developed social norms and a deepening of digital literacy will emerge.
  • Businesses will protect human agency because the marketplace demands it: Tech firms will develop tools and systems in ways that will enhance human agency in order to stay useful to customers, to stay ahead of competitors and to assist the public and retain its trust.
  • The future will feature both more and less human agency, and some advantages will be clear: The reality is that there will always be a varying degree of human agency allowed by tech, depending upon its ownership, setting, uses and goals. Some digital tech will be built to allow for more agency to easily be exercised by some people by 2035; some will not.

Humans and tech always positively evolve

Many of the experts who have hope about the future of human agency noted that throughout history, humans and technology have always overcome significant hurdles. They said societies make adjustments through better regulation, improved design, updating of societal norms and a revamping of education. People tend to adapt to and/or come to accept both the good and the worrisome aspects of technological change. These experts predict this will also be the case as rapidly advancing autonomous systems become more widespread.

Ulf-Dietrich Reips, professor and chair for psychological methods at the University of Konstanz, Germany, wrote, “Many current issues with control of important decision-making will in the year 2035 have been worked out, precisely because we are raising the question now. Fundamental issues with autonomous and artificial intelligence will have come to light, and ‘we’ will know much better if they can be overcome or not. Among that ‘we’ may actually be some autonomous and artificial intelligence systems, as societies (and ultimately the world) will have to adapt to a more hybrid human-machine mix of decision-making. Decision-making will need to be guided by principles of protection of humans and their rights and values, and by proper risk assessment. Any risky decision should require direct human input, although not necessarily only human input, and most certainly procedures for human decision-making based on machine input need to be developed and adapted. A major issue will be the trade-off between individual and society. But that in itself is nothing new.”

Willie Curry, a longtime global communications policy expert based in Africa, said, “My assumption is that over the medium term, two things will happen: greater regulation of tech and a greater understanding of human autonomy in relation to machines. The combination of these factors will hopefully prevent the dystopian outcome from occurring, or at least mitigate any negative effects. Two factors will operate as forces pushing toward dystopian outcomes: the amorality of the tech leaders and the direction of travel of autocratic regimes.”

Frank Kaufmann, president of the Twelve Gates Foundation, commented, “Humans will use as many tech aids as possible; these will be as internal and as ‘bionic’ as possible, and these machines will have the ability to learn, and virtually all will be powered by autonomous and artificial intelligence. No key decisions will be automated. All key decisions will require human input. There is no such thing as genuine autonomous decision-making. Mechanical and digital decision-making will characterize machines. These will help human society greatly. The only negatives of this will be those perpetrated by bad or evil people. Apart from augmented capacity, machines will merely serve and enhance what humans want and want to do.”

All key decisions will require human input. There is no such thing as genuine autonomous decision-making. Mechanical and digital decision-making will characterize machines. These will help human society greatly.

Frank Kaufmann, president of the Twelve Gates Foundation

Anthony Patt, professor of policy at the Institute for Environmental Decisions at ETH Zürich, a Swiss public research university, said, “I am generally optimistic that when there is a problem, people eventually come together to solve it, even if the progress is uneven and slow. Since having agency over one’s life is absolutely important to life satisfaction, we will figure out a way to hold onto this agency, even as AI becomes ever more prevalent.”

Jane Gould, founder of DearSmartphone, said, “The next generation, born, say, from 2012 onward, will be coming of age as scientists and engineers by 2035. Evolving these tools to serve human interests well will seem very natural and intuitive to them. I can imagine that the core questions of what problems we want to solve, what we want to do in life, where we want to live, and whom we want to have relationships with will be maintained within our own agency. The means to accomplish these things will become increasingly automated and guided by AI. For example, my family and I recently decided that we wanted to go to England this summer for a holiday. We are going to drive there from our home in Switzerland. These choices will stay within our control. Until recently, I would have had to figure out the best route to take to get there. Now I hand this over to AI, the navigation system in our car. That navigation system even tells me where I need to stop to charge the battery, and how long I need to charge it for. That’s all fine with me. But I wouldn’t want AI to tell me where to go for holiday, so that’s not going to happen. OK, I know, some people will want this. They will ask Google, ‘Where should I go on holiday?’ and get an answer and do this. But even for them, there are important choices that they will maintain control over.”

R Ray Wang, founder, chairman and principal analyst at Constellation Research, wrote, “In almost every business process, journey or workflow, we have to ask four questions: 1) When do we fully intelligently automate? 2) When do we augment the machine with a human? 3) When do we augment the human with a machine? 4) When do we insert a human in the process? And these questions must also work with a framework that addresses five levels of AI Ethics: 1) Transparent. 2) Explainable. 3) Reversible. 4) Trainable. 5) Human-led.”
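
Wang’s four questions describe a routing choice that has to be made for every step of a workflow, not a single property of a whole system. As a purely illustrative sketch – the thresholds, names and policy below are invented for this report, not drawn from Constellation Research – such a per-step routing policy might look like this in Python:

```python
from enum import Enum, auto

class Mode(Enum):
    FULL_AUTOMATION = auto()         # 1) fully, intelligently automate
    HUMAN_AUGMENTS_MACHINE = auto()  # 2) augment the machine with a human
    MACHINE_AUGMENTS_HUMAN = auto()  # 3) augment the human with a machine
    HUMAN_IN_PROCESS = auto()        # 4) insert a human in the process

def route_step(stakes: float, model_confidence: float, reversible: bool) -> Mode:
    """Hypothetical policy: higher stakes and lower confidence pull a
    workflow step toward human control. All cutoffs are made up."""
    if stakes > 0.8 or not reversible:
        return Mode.HUMAN_IN_PROCESS          # human-led decision
    if model_confidence < 0.6:
        return Mode.MACHINE_AUGMENTS_HUMAN    # machine advises, human decides
    if stakes > 0.4:
        return Mode.HUMAN_AUGMENTS_MACHINE    # human reviews machine output
    return Mode.FULL_AUTOMATION               # low stakes, high confidence

print(route_step(stakes=0.9, model_confidence=0.95, reversible=True))
# Mode.HUMAN_IN_PROCESS
```

Wang’s five ethics levels can then be read as constraints on which of the four modes is acceptable for a given class of decision – a “human-led” requirement simply forbids the fully automated mode.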

Mark Henderson, professor emeritus of engineering at Arizona State University, wrote, “Science fiction has predicted that technology will surreptitiously take charge of decisions. I see that as a fear-based prediction. I have confidence in human intelligence and humane anticipatory prevention of takeover by either technology or those who want to cause harm. I think most humans would be very troubled by the prospect of machines making decisions over vital human interests such as how health care or other societal goods are allocated. There will undoubtedly be pressure to grant greater decision-making responsibility to machines under the theory that machines are more objective, accurate and efficient. I hope that humans can resist this pressure from commercial and other sources, so that privacy, autonomy and other values are not eroded or supplanted.”

Grace Chomba of DotConnectAfrica, based in Kenya, wrote, “AI will help people to manage the increasingly complex world we are forced to navigate. It will empower individuals to not be overwhelmed.”

Neil McLachlan, consultant and partner at Co Serve Consulting, predicted, “Highly tailored decision-support systems will be ubiquitous, but I expect that a great deal of decision-making – especially regarding ‘life-and-death’ matters – will remain largely the domain of humans.

“From an individual human perspective there may continue to be scope for some ‘fully’ automated decision-making in lower stakes areas such as when to service your car. Greater degrees of automation may be possible in highly controlled but technology-rich environments such as the higher-level implementations of rail traffic management utilising the European Train Control System. Machines and other systems, whether utilising artificial intelligence or not, will remain in decision-support roles.”

Laura Stockwell, executive VP for strategy at Wunderman Thompson, wrote, “When you look at the generation of people designing this technology – primarily Gen Z and Millennials – I do believe they have both an awareness of the implications of technology on society and the political leaning required to implement human-first design. I also believe that those in decision-making positions – primarily Gen X – will support these decisions. That said, I do believe legislation will be required to ensure that large companies take into account user autonomy and agency.”

Jim Fenton, an independent network privacy and security consultant and researcher who previously worked at OneID and Cisco, responded, “I’m somewhat optimistic about our ability to retain human agency over important decisions in 2035. We are currently in the learning stage of how best to apply artificial intelligence and machine learning. We’re learning what AI/ML is good at (e.g., picking up patterns that we humans may not notice) and its limitations (primarily the inability to explain the basis for a decision made by AI/ML). Currently, AI is often presented as a magic solution to decision problems affecting people, such as whether to deny the ability to do something seen as fraudulent. But errors in these decisions can have profound effects on people, and the ability to appeal them is limited because the algorithms don’t provide a basis for why they were made. By 2035, there should be enough time for lawsuits about these practices to have been adjudicated and for us as a society to figure out the appropriate and inappropriate uses of AI/ML.”

Erhardt Graeff, a researcher at Olin College of Engineering and expert in the design and use of technology for civic and political engagement, wrote, “Though the vast majority of decisions will be made by machines, the most important ones will still require humans to play critical decision-making roles. What I hope and believe is that we will continue to expand our definition and comprehension of important decisions demanding human compassion and insight. As Virginia Eubanks chronicled in her book ‘Automating Inequality,’ the use of machines to automate social service provision has made less humane the important and complex decision of whether to help someone at their most vulnerable. Through advocacy, awareness and more-sophisticated and careful training of technologists to see the limits of pure machine logic, we will roll back the undemocratic and oppressive dimensions of tech-aided decision-making and limit their application to such questions.”

Some expect that regulation that encourages human-centered design and the application of codes of ethics will emerge

A share of these experts said it is possible that new laws and regulations may be passed in order to protect vulnerable populations from being exploited and allow individuals to exercise at least some control of their data. Some expect that governing bodies and industry organizations will agree upon suggested ethical and design standards and codes of conduct that will influence the degree of individual agency in future tools, platforms and systems.

Stephen D. McDowell, professor of communication and assistant provost at Florida State University, said, “There have to be standards in AI systems to highlight information sources and automated processes that are designed into systems or being used, so we understand the information presented to us, our perceptions of our own values and preferences and our choice environments more fully. The challenge is figuring out how we can think about or conceptualize individual decisions when our information sources, online relationships and media environments are curated for us in feedback loops based upon demonstrated preferences and intended to enhance time engaged online with specific services. To serve to enhance the quality of individuals’ and citizens’ decision-making, there will need to be some underlying model in our systems of what the individual person, citizen, family member, worker should have within their scope of choice and decision. It would need to go beyond the generalized image of a consumer or a member of the public.”

Cathy Cavanaugh, chief technology officer at the University of Florida Lastinger Center for Learning, predicted, “The next 12 years will be a test period for IT policy. In countries and jurisdictions where governments exert more influence, limitations and requirements on technology providers, humans will have greater agency because they will be relieved of the individual burden of understanding algorithms, data risks and other implications of agreeing to use a technology; governments will take on that responsibility on behalf of the public, just as they do in other sectors where safety and expert assessment of safety are essential, such as building construction and restaurants. In these places, people will feel more comfortable using technology in more aspects of their lives and will be able to allocate more-repetitive tasks such as writing, task planning and basic project management to technology. People with this technology will be able to spend more time in interactions with each other about strategic issues and leisure pursuits. Because technology oversight by governments will become another divide among societies, limitations upon with whom and in what ways a person uses an application may follow geographic borders.”

Adam Nagy, senior research coordinator at the Berkman Klein Center for Internet and Society, Harvard University, predicted, “Under the upcoming European AI Act, higher-risk use cases of these technologies will demand more robust monitoring and auditing. I am cautiously optimistic that Europe is paving the way for other jurisdictions to adopt similar rules and that companies may find it easier and beneficial to adhere to European regulations in other markets. Algorithmic tools can add a layer of complexity and opacity for a layperson, but with the right oversight conditions, they can also enable less arbitrariness in decision-making and more data-informed decisions. It is often the case that an automated system augments or otherwise informs a human decision-maker. This does come with a host of potential problems and risks. For example, a human might simply serve as a rubber stamp or decide when to adhere to an automated recommendation in a manner that reinforces their own personal biases. It is crucial to recognize that these risks are not unique to ‘automated systems’ or somehow abated by human-led systems. The true risk is in any system that is unaccountable and does not monitor its impacts on substantive rights.”
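
One way to make the “rubber stamp” risk Nagy describes auditable is to log the automated recommendation alongside the final human decision and monitor how often the two diverge. The following is a minimal sketch of that idea; the record fields and the “rubber-stamp rate” heuristic are hypothetical, not taken from the AI Act or from Nagy:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    case_id: str
    machine_recommendation: str  # what the automated system suggested
    human_decision: str          # what the human reviewer actually chose
    rationale: str = ""          # free-text basis, useful for appeals

def rubber_stamp_rate(records: list[DecisionRecord]) -> float:
    """Share of cases where the human simply adopted the machine's output.
    A rate persistently near 1.0 suggests oversight in name only."""
    if not records:
        return 0.0
    agreed = sum(r.machine_recommendation == r.human_decision for r in records)
    return agreed / len(records)

log = [
    DecisionRecord("a1", "deny", "deny"),
    DecisionRecord("a2", "deny", "approve", rationale="documents verified manually"),
]
print(f"rubber-stamp rate: {rubber_stamp_rate(log):.2f}")  # 0.50
```

A divergence log of this kind supports Nagy’s broader point: accountability comes from monitoring a system’s impacts, not from whether a human or a machine nominally holds the pen.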

Lillie Coney, chief of staff and policy director for a member of the U.S. House of Representatives and former associate director of the Electronic Privacy Information Center, said, “Agency and autonomy for one person may deny agency and autonomy to others. There will need to be norms, values and customs that align to transition to this state. There will likely be the ‘four walls rule’ that in one’s dwelling the person has full rights to exercise autonomy over technology, but even this will rely on Supreme Court decisions that uphold or strike down laws governing such matters.”

Agency and autonomy for one person may deny agency and autonomy to others. There will need to be norms, values and customs that align to transition to this state.

Lillie Coney, chief of staff and policy director for a member of the U.S. House of Representatives and former associate director of the Electronic Privacy Information Center

Marija Slavkovik, professor of information science at the University of Bergen, Norway, commented, “Legislation and regulation are globally moving toward greater governance of automated decision-making. The goal of that legislation is protecting human agency and values. There is no reason why this trend would stop. Automation has always been used to do away with work that people do not want to do. This type of work is typically low-paid, difficult or tedious. In this respect, automation supports human agency. In some settings we automate some parts of a job in order to augment the activities of the human expert. This requires that the human is left in control.”

Tom Wolzein, inventor, analyst and media executive, wrote, “Without legislation and real enforcement, the logical cost-savings evolution will be to remove the human even from systems built with a ‘human intervention’ button. Note ‘with a human decision in the loop’ in this headline from a 6/29/2022 press release from defense contractor BAE Systems: ‘BAE Systems’ Robotic Technology Demonstrator successfully fired laser-guided rockets at multiple ground targets, with a human decision in the loop, during the U.S. Army’s tactical scenario at the EDGE 22 exercise at Dugway Proving Ground.’ Think about how slippery the slope is in just that headline. There is a more fundamental question, however. Even if there is human intervention to make a final decision, if all the information presented to the human has been developed through AI, then even a logical and ethical decision by a human based on the information presented could be flawed.”

An anonymous respondent predicted there will be regulation, but it will actually reinforce the current power structure, writing, “In the next 10-15 years, we are likely to see a resurgence of regulation. In some cases, it will be the byproduct of an authoritarian government that wants control of technology and media. In other cases (especially in Europe), it will be the byproduct of governments being increasingly anxious about the rise of authoritarianism (and thus wanting to control technology and media). This regulation will, among other things, take the form of AI and related algorithms that produce predictable (although constrained) results. Humans will be in control, though in a way that skews the algorithms toward preferred results rather than what ‘the data’ would otherwise yield. Key decisions that will be automated would thus include news feeds, spam filters and content moderation (each with some opportunity for human intervention).

“Other decisions that would be automated (as they often are today) include credit decisioning, commercial measurement of fraud risks, and targeted advertising. Some of these decisions should require direct human input, e.g., in order to correct anomalous or discriminatory results. That input will be applied inconsistently, though with regulators taking enforcement action to incent more breadth and rigor to such corrections. The effects on society will include less change, in some ways: Existing power structures would be reinforced, and power could even be consolidated. In other ways, the effects will be to shift value from those who analyze data to those who collect and monetize data (including those who have collected and monetized the most). European efforts to dethrone or break up large U.S. platform companies will fail, because the best algorithms will be those with the best data.”

Some do not believe that truly effective regulation or industry codes or standards will be successfully agreed upon, applied or achieved by 2035

A tech entrepreneur whose work is to create open-source knowledge platforms commented, “I suspect that we are unlikely to have legal frameworks in place which are sufficient to support evolving and emerging case law in the context of robotic decision makers. I base that, in part, on the well-documented polarization of our political and social systems. On the theory that we are more likely to muddle along and not face complex and urgent problems in appropriate ways, we will not be ready for fully autonomous decision makers by 2035. I do expect gains in the capabilities of autonomous agents, as, for instance, in the self-driving transportation field; we have come a very long way since the early DARPA-funded experiments, but still, we see spectacular failures. The fact that an autopilot will be tasked to make moral decisions in the face of terrible alternatives in emergency situations remains a hot topic; legal frameworks to support that? By 2035? Perhaps, but it seems unlikely. Surgeons use robots to assist in surgery; robots are beginning to outperform radiologists in reading diagnostic images, and so the progress goes. By 2035, will hospitals be ready to surrender liability-laden decisions to robots? Will surgeons turn over a complex surgical procedure to a robot? Consider the notion of stents; which surgeon would give up a six-figure surgery for a four-figure operation? Not until students in med schools were trained to use them did they penetrate surgery suites. The dimensionality of this question is far too high for any single mortal to see all of them; it’s good that this question is being posed to a wide audience.”

Wendell Wallach, bioethicist and director of the Artificial Intelligence and Equality Initiative at the Carnegie Council for Ethics in International Affairs, commented, “I do not believe that AI systems are likely by 2035 to have the forms of intelligence necessary to make critical decisions that affect human and environmental well-being. Unfortunately, the hype in the development of AI systems focuses on how they are emulating more and more sophisticated forms of intelligence, and furthermore, why people are flawed decision makers. This will lead to a whittling away of human agency in the design of systems, unless or until corporations and other entities are held fully responsible for harms that the systems are implicated in.

The hype in the development of AI systems focuses on how they are emulating more and more sophisticated forms of intelligence, and furthermore, why people are flawed decision makers. This will lead to a whittling away of human agency in the design of systems, unless or until corporations and other entities are held fully responsible for harms that the systems are implicated in.

Wendell Wallach, bioethicist and director of the Artificial Intelligence and Equality Initiative at the Carnegie Council for Ethics in International Affairs

“This response is largely predicated on how the systems are likely to be designed, which will also be a measure, uncertain at this time, as to how effective the AI ethics community and the standards it promulgates are upon the design process. In the U.S. at the moment, we are at a stalemate in getting such accountability and liability beyond what has already been codified in tort law. However, this is not the case in Europe and in other international jurisdictions. Simultaneously, standards-setting bodies, such as the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization, are making it clear that maintaining human agency should be central.

“Nevertheless, we are seeing the development and deployment of autonomous weapons systems and other autonomous artifacts in spite of the fact that meaningful human control is often either an illusion or near-impossible to implement. We probably will need a disaster before we can create sufficient popular pressure that focuses on upgrading of our laws and regulatory bodies to reinforce the importance of human agency when deploying AI systems.”

A senior research scientist at Google predicted, “It’s unclear to me how we can rely on full autonomy in any systems that lack commonsense knowledge. As you know, commonsense knowledge is the hardest nut to crack. I see no reason that we’ll have it in the next 10 years. Until then, letting robot systems have full autonomy will be a disaster. My prediction: There will be a few disasters when people release autonomous systems, which will then be banned.”

Nicholas CL Beale, futurist and consultant at Sciteb, said, “The more-positive outcome will happen if and only if the people responsible for developing, commercialising and regulating these technologies are determined to make it so. I’m optimistic – perhaps too much so. Based upon present trends I might be much less sanguine, but the tech giants have to adapt or die.”

A well-known internet pioneer now working as a principal architect at one of the world’s leading software companies said, “The relationship between humans and machines will be largely guided by law. Just as autonomous vehicles have not progressed to widespread deployment as quickly as was initially thought, so will many other uses of machine learning be delayed. The basic problem is that making decisions brings with it liability and in most cases the software developers are not adequately compensated for that liability, which today cannot be insured against.”

Some say societal norms, education and digital literacy will positively evolve

A share of these respondents suggested that the public will or should become better educated about digital tools and systems and more digitally literate by 2035, with some saying that societal norms will form around tech-abetted decision-making that will help people more deeply develop their ability to augment their lives with these tools.

Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, Georgetown University, responded, “We can’t become literate about our data and information if we don’t even know exactly what they look like! At the end of the day, it’s important that we know how the machines think, so that we never come to see them as inscrutable or irreproachable. When it comes to the public making data-based decisions, a challenge is that some of the biggest are made by people who are not especially data-literate. They’re going to rely on machines to analyze whatever data they have and either follow the machine’s advice or disregard it and go with their gut. The best collaborations between man and machine on decision-making will always revolve around humans who could analyze the data manually but know how to program machines to do [it] for them. In such a case, if there’s some kind of error or suspicious output, those humans know how to recognize it and investigate.

“Many automated decisions will be based on which data to capture (so it can be mined for some kind of preferencing algorithm), what suggestions to then offer consumers and, of course, what ads to show them. When it comes to issues involving health and legal sentencing and other high-risk matters, I do expect there to be a human in the mix, but again, they’ll need to be data-literate so they can understand what characteristics about a person’s data led the machine to make that decision. Europe’s AI Act, which puts restrictions on different types of AI systems according to their risk, will hopefully become the de facto standard in this regard, as people will come to understand that machines can always be second-guessed.

“Then again, I’m concerned that much of the technical information and detail – which is what determines any given decision a machine will make – will remain largely masked from users. Already, on smartphones, there is no way to determine the memory allocation of devices or examine their network traffic without the use of third-party, often closed-source apps. With more and more out-of-the-box standalone IoT devices that have sleek smartphone interfaces, it will be extremely difficult to actually know what many of our devices are doing. This is only more true for centralized Internet and social media services, which are entirely opaque when it comes to the use of consumer data. Even the cookie menu options required as a result of GDPR only describe cookies in broad terms, like ‘necessary cookies’ and ‘cookies for analytics.’”

A professor of computer science at Carnegie Mellon University wrote, “I believe that the current work in AI and ethics will accelerate, such that important ethical considerations, such as human autonomy and transparency, will be incorporated into critical decision-making AI software, enabling humans to stay in control of the AI systems.”

Eileen Rudden, co-founder of LearnLaunch, said, “Workflows will be more automated. Translations and conversions will be automated. Information summarization will be automated. Many decisions requiring complex integration of information may be staged for human input, such as the potential for prescribed drugs to interact. Other complex decisions may be automated, such as what learning material might be presented to a learner next, based on her previous learning or future objective and the ability to scan immense databases for legal precedents. In general, any process that yields significant data will be analyzed for patterns that can be applied to other participants in that process. This will include hiring and promotion systems. If professionals get comfortable with the new systems, they will be expanded. What sorts of worries fall into view?

  • “Tech-savviness will become even more important as more-advanced systems become more prevalent. There will be a risk of social breakdown if the inequality that has resulted from the last 40 years of the information age is allowed to accelerate.
  • “We need to understand the power and dignity of work and make sure all people are prepared for change and feel they have value in society.
  • “It is also important for society to be able to understand the real sources of information in order to maintain democracy.”

Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the City University of New York, wrote, “It is critical to get past the simplistic media-framed perspective about AI and machine learning to assure that people understand what these systems can and cannot do. They use large data sets to make predictions: about responses to queries, about human behavior and so on. That is what they are good at; little else. They will need to be tied with databases of reliable information. They will need to be monitored for quality and bias on input and output. They will be helpful. In the words of David Weinberger in his book ‘Everyday Chaos,’ ‘Deep learning’s algorithms work because they capture better than any human can the complexity, fluidity and even beauty of a universe in which everything affects everything else, all at once.’ The interesting issues are that these systems will not be able to deliver a ‘why.’ They are complex A/B tests. They do not have reasoning or reasons for their decisions. As I wrote in a blog post, I can imagine a crisis of cognition in which humans – particularly media – panic over not being able to explain the systems’ outcomes.”

They use large data sets to make predictions: about responses to queries, about human behavior and so on. That is what they are good at; little else. They will need to be tied with databases of reliable information. They will need to be monitored for quality and bias on input and output. They will be helpful.

Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the City University of New York

John McNutt, professor emeritus of public policy and administration at the University of Delaware, wrote, “I have little doubt that we have the ability to create such machines [allowing agency]. Whether we will use our creations with agency will depend on culture, social structure and organization, and public policy. We have a long history of resistance to tools that will make our lives better. The lost opportunities are often depressing.”

Irina Raicu, director of the internet ethics program of the Markkula Center for Applied Ethics, Santa Clara University, said, “Whether the broad rollout of tech-abetted, often autonomous decision-making will continue is up to us. It depends on the laws we support, the products we buy, the services we access or refuse to use, the way in which we educate people about the technology and the way in which we educate future technologists and lawmakers in particular.”

A share of those who don’t believe human agency will have better support in tech-abetted decision-making by 2035 expressed doubts that society can accomplish such change

Lauren Wagner, a post-disciplinary social scientist and expert in linguistic anthropology, predicted, “Based on where we are today – at a time in which there is limited or no algorithmic transparency and most of the AI that impacts our day-to-day lives is created inside large technology platforms – I do not believe that by 2035 we will be in a place where end users are in control of important decision-making regarding how AI works for them. To accomplish this would require up-leveling of user education around AI (how it works and why users should care about controlling it), advanced thinking around user experience, and likely government-mandated regulation that requires understandable algorithmic transparency and user controls.”

Nrupesh Soni, founder and owner of Facilit8, a digital agency located in Namibia, commented, “I fear that we have a whole generation of youth that is used to instant gratification, quick solutions, and we do not have enough people who can think long-term and work on solutions. I do not think humans will be in charge of the bots/AI decision-making, mainly because we are seeing a huge gap between the people who grew up with some understanding of programming and the basic motivations behind our digital technologies, and the next-gen that is used to using APIs provided to them without knowing the backend coding required to work on something new. There will be a time in the next 10 years when most of those who developed the core of these bots/AI will be aging out of the creative force, in their late 50s or 60s, and the younger generation will not know how to really innovate as they are used to plug-and-play systems.”

Frank Odasz, director at Lone Eagle Consulting, expressed little faith in the public gaining broad-based digital literacy, writing, “Increasing AI manipulation of beliefs or media (such as deepfake videos) can be expected in the future. I see a two-tiered society as 1) those who learn to smartly use AI tools without allowing themselves to be manipulated, and 2) those who allow themselves to believe that they can justify ‘believing anything they want.’ The big question is, in the future, which tier will be dominant in most global societies? My 39-year history as an early adopter and promoter of smart online activities such as e-learning, remote work and building community collaborative capacity began in 1983, and I’ve presented nationally and internationally. Top leaders in D.C. in the early days didn’t have a clue what would be possible ‘being online’ – at any speed. Americans’ misuse of social media and persuasive design at all levels has increasingly resulted in artificial intelligence-spread political manipulation promoted as factual truth by radicals lacking any level of conscience or ethics. Automated data via Facebook and persuasive design caused autocratic winners to take major leadership positions in more than 50 national elections in the past few years, sharing misinformation designed to sway voters and automatically deleting the convincing false messages to hide them after they had been read.

“Various people have proposed that there are seven main intelligences (though a Google search will show different listings of the seven). ‘Intelligence’ and ‘agency’ are related to basing decisions smartly on factual truths, yet many people do not base decisions on proven facts, and many say, ‘I can believe whatever I want.’ Hence the global growth of the Flat Earth Society, refuting the most obvious of facts, that the Earth is a round planet. Many people choose to believe ideas shared by those of a particular religious faith, trusting them despite proven facts. There are those who are anti-literacy and there are deniers who refute proven facts; they represent a growing demographic of followers who are not thinkers. We also have those who will routinely check facts and have a moral compass dedicated to seeking out facts and truth. Eric Fromm said, ‘In times of change, learners inherit the Earth.’”

Roger K. Moore, editor of Computer Speech and Language and professor at the University of Sheffield, England, responded, “In some sense the genie was released from the bottle during the industrial revolution, and human society is on a track where control is simply constantly reducing. Unless there is massive investment in understanding this, the only way out will be that we hit a global crisis that halts or reverses technological development (with severe societal implications). I am basing my decision on the history of automation thus far. Already, very few individuals are capable of exerting control over much of the technology in their everyday environment, and I see no reason for this trend to be reversed. Even accessing core technologies (such as mending a watch or fixing a car engine) is either impossible or highly specialised. This situation has not come about by careful societal planning, it has simply been an emergent outcome from evolving technology – and this will continue into many areas of decision-making.”

Barry Chudakov, founder and principal, Sertain Research, also expects that widespread digital literacy will not have been achieved by 2035. He predicted, “It will still be unclear to most by 2035 that humans are now sharing their intelligence, their intentions, their motivations with these technological entities. Why? Because we have not built, nor do we have plans to build, awareness and teaching tools that retrain our populace or make people aware of the consequences of using newer technologies; and because in 13 years the social structures of educational systems – ground zero for any culture’s values – will not have been revamped, rethought, reimagined enough to enable humans to use these new entities wisely.

“Humans must come to understand and appreciate the complexity of the tools and technologies they have created and then teach one another how to engage with and embrace that complexity. It is now supremely important to understand the dynamics and logic of smart machines, bots and systems powered mostly by autonomous and artificial intelligence. This is the new foundation of learning. But most are at a disadvantage when it comes to today’s most critical skill: learning to think through, question, test and probe the moral and ethical dimensions of these new tools.”

Businesses will protect human agency because the marketplace demands it

Some experts said they expect that businesses will begin to develop digital tools and systems in ways that allow for human agency in order to stay relevant, to stay ahead of competitors and to assist the public and retain its trust.

Gary M. Grossman, associate professor in the School for the Future of Innovation at Arizona State University, said, “Market conditions will drive accessibility in AI applications. In order to be marketable, they will have to be easy enough for mass engagement. Moreover, they will have to be broadly perceived to be useful. AI will be used as much as possible in routine activities, such as driving, and in situations in which minimizing human efforts is seen to be beneficial. All of this will change society profoundly, as it has in every major occurrence of widespread technological change. The key question is whether that change is ‘better.’ This depends on one’s perspective and interests.”

Peter Suber, director of the Harvard University Open Access Project, responded, “The main reason to think that AI tools will help humans make important decisions is that there will be big money in it. Companies will want to sell tools providing this service and people will want to buy them. The deeper question is how far these tools will go toward actually helping us make better decisions or how far they will help us pursue our own interests. There’s good reason to think the tools will be distorted in at least two ways.

“First, even with all good will, developers will not be able to take every relevant variable into account. The tools will have to oversimplify the situations in which we make decisions, even if they are able to take more variables into account than unaided humans can. Second, not all tools and tool providers will have this sort of good will. Their purpose will be to steer human decisions in certain directions or to foster the political and economic interests the developers want to foster. This may be deceptive and cynical, as with Cambridge Analytica. Or it may be paternalistic. A tool may intend to foster the user’s interests, but in practice this will mean fostering what the developer believes to be the user’s interests or what the tool crudely constructs to be the user’s interests.”

Geoff Livingston, a digital marketing pioneer who is now marketing VP at Evalueserve, wrote, “Where AI is working well in the business world is via domain-specific use cases. This is when AI is guided by humans – subject matter experts, data scientists and technologists – to address a specific use case. In these instances, AI becomes highly effective in informing users, making recommendations, providing key intelligence related to a market decision, identifying an object and suggesting what it is. These domain-specific AI experiences with human guidance are the ones that are becoming widespread.

“So, when a business unleashes autonomous decision-making via a domain-specific AI on its customers and that experience is not awesome, you can rest assured 1) customers will leave and 2) competitors with a more user-friendly experience will welcome them. When a business suggests a customer use AI to better the experience, gives them the ability to opt-in and later opt-out at their discretion, successes will occur. In fact, more revenue opportunities may come by providing more and more manual human control.”

A distinguished researcher at IBM said, “Any key decision that should have a human in the loop can and will be designed to do that.”

Tori Miller Liu, chief information officer for the American Speech-Language-Hearing Association, commented, “Investment in innovation may be driven by a desire for increased revenue, but the most sustainable solutions will also be ethical and equitable. There is less tolerance amongst users for unethical behavior, inaccessibility and lack of interoperability. The backlash experienced by Meta is an example of this consumer trend. Until someone can program empathy, human agency will always have a role in controlling decision-making. While AI may assist in decision-making, algorithms and datasets lack empathy and are inherently biased. Human control is required to balance and make sense of AI-based decisions. Human control is also required to ensure ethical technology innovation. Companies are already investing in meaningful standardization efforts. For example, the Metaverse Standards Forum and the Microsoft Responsible AI Standard are focused on improving customer experience by promoting interoperability, transparency and equity.”

Fred Baker, internet pioneer, longtime Internet Engineering Task Force leader and Cisco Systems Fellow, wrote, “I think people will remain in ultimate control of such decisions as people have a history of making their opinions known, if only in arrears. If a ‘service’ makes errors, it will find itself replaced or ignored.”

A telecommunications policy expert wrote, “In 2035 humans will often be in control because people will want the option of being in control, and thus products that offer the option of control will dominate in the market. That said, many routine tasks that can be automated will be. Key decisions that will be fully automated are 1) those that require rapid action and 2) those that are routine and boring or difficult to perform well. Indeed, we have such systems today. Many automobiles have collision avoidance and road departure mitigation systems. We have long had anti-lock brake systems (ABS) on automobiles. I believe that ABS usually cannot be turned off. In contrast, vehicle stability assist (VSA) can often be turned off. Automobiles used to have chokes and manual transmissions. Chokes have been replaced by computers controlling fuel-injection systems. In the U.S., most cars have automatic transmissions. But many automatic transmissions now have an override that allows manual or semi-manual control of the transmission. This is an example of the market valuing the ability to override the automated function. The expansion of automated decision-making will improve efficiency and safety at the expense of creating occasional hard-to-use interfaces like automated telephone attendant trees.”
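
The ABS/VSA contrast this expert draws maps onto a simple design pattern: each automated function carries an explicit, design-time flag saying whether the user may switch it off. The toy sketch below, with invented names, shows how a product team might encode that market preference; it is an illustration of the pattern, not any particular vendor’s implementation:

```python
class AutomatedFunction:
    """An automated feature that may or may not expose a user override."""
    def __init__(self, name: str, user_can_disable: bool):
        self.name = name
        self.user_can_disable = user_can_disable  # safety-critical functions hard-wire False
        self.enabled = True

    def disable(self) -> bool:
        """Honor the user's override only where the design permits it.
        Returns True if the function is now off."""
        if self.user_can_disable:
            self.enabled = False
        return not self.enabled

abs_brakes = AutomatedFunction("anti-lock braking", user_can_disable=False)
vsa = AutomatedFunction("vehicle stability assist", user_can_disable=True)
print(abs_brakes.disable())  # False: ABS stays on regardless of the request
print(vsa.disable())         # True: VSA honors the override
```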

An expert who has won honors as a distinguished AI researcher commented, “There is a lot of research on humans and AI, and it will produce results in a few years. Tech companies are interested in making products that people will buy, so there is more attention than ever in making software that interacts with humans.”

Jenny L. Davis, senior lecturer in sociology at the Australian National University and author of “How Artifacts Afford: The Power and Politics of Everyday Things,” commented, “The general retention of human decision-making will eventuate because the public will resist otherwise, as will the high-status professionals who occupy decision-making positions. I don’t think there will be a linear or uniform outcome in regard to who maintains control over decision-making in 2035. In some domains – such as consumer markets, low- and mid-level management tasks (e.g., résumé sorting) and operation of driverless vehicles – the decisions will lean heavily toward full automation. However, in the domains accepted as subjective, high stakes and dependent on expert knowledge, such as medicine, judicial sentencing and essay grading, for example, human control will remain in place, albeit influenced or augmented in various capacities by algorithmic systems and the outputs those systems produce.”

Peter Rothman, lecturer in computational futurology at the University of California, Santa Cruz, pointed out that lack of demand in the marketplace can stifle innovation that may support more agency, writing, “As we can see with existing systems such as GPS navigation, despite the evidence that using these impairs users’ natural navigation abilities and there is a possibility of a better design that wouldn’t have these effects, no new products exist because users are satisfied using current systems. As Marshall McLuhan stated, every extension is also an amputation.”

Some suggested ways that tech businesses might improve designs

A share of these respondents suggested potential approaches that businesses might implement, or are just now beginning to implement, to improve human agency in tech-abetted decision-making.

Jim Spohrer, board member of the International Society of Service Innovation Professionals, previously a longtime IBM leader and distinguished technologist at Apple, predicted, “People will likely have a form of personal-private cognitive mediator by 2035 that they rely on for certain decisions in certain realms. The key to decision-making in our own lives is not so much individual control as it is a process of interaction with trusted others. Are people today in control of important decisions? The short answer is ‘no.’ Instead, they rely on trusted mediators: trusted organizations, trusted experts or trusted friends and family members. Those trusted mediators help make our decisions today and they will continue to do so in 2035. The trusted mediators will likely be augmented by AI.”

Vint Cerf, pioneer innovator, co-inventor of the Internet Protocol and vice president at Google, wrote, “My thought, perhaps only hazily formed, is that we will have figured out how to take intuitive input from users and turn that into configuration information for many software-driven systems. You might imagine questionnaires that gather preference information (e.g., pick ‘this’ or ‘that’) and, from the resulting data, select a configuration that most closely approximates what the user wishes. Think about the Clifton StrengthsFinder questionnaire, a tool developed by the Gallup Organization that asks many questions that reveal preferences or strengths – sometimes multiple questions are asked in different ways to tease out real preferences/strengths. It’s also possible that users might select ‘popular’ constellations of settings based on ‘trend setters’ or ‘influencers’ – that sounds somewhat less attractive (how do you know what behavior you will actually get?). Machine learning systems seem to be good at mapping multi-dimensional information to choices.”
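
Cerf’s questionnaire-to-configuration idea can be read as a nearest-neighbor problem: score the user’s answers as a preference vector and pick the preset whose profile lies closest. Here is a minimal sketch in which the presets, the three preference dimensions and the plain Euclidean distance are all assumptions made for illustration:

```python
import math

# Hypothetical presets: each maps to a preference vector of
# (privacy, convenience, autonomy), each dimension scored 0.0-1.0.
PRESETS = {
    "privacy-first": (0.9, 0.3, 0.8),
    "hands-off":     (0.3, 0.9, 0.2),
    "balanced":      (0.6, 0.6, 0.6),
}

def closest_preset(answers: tuple[float, float, float]) -> str:
    """Pick the configuration whose preference vector is nearest to the
    user's questionnaire responses."""
    return min(PRESETS, key=lambda name: math.dist(PRESETS[name], answers))

print(closest_preset((0.8, 0.4, 0.7)))  # privacy-first
```

A machine-learned model could replace the hand-built distance function – which is Cerf’s closing point – but the user-facing contract stays the same: intuitive answers in, a concrete configuration out.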

Monique Jeanne Morrow, senior distinguished architect for emerging technologies at Syniverse, a global telecommunications company, said, “The digital version of ‘do no harm’ translates to valuing human safety. Understanding the potential for harm and mitigation is a starting point. Perhaps a new metric should be created that measures a tech development’s likely benefits to society that also indicates that some degree of human agency must always be in the loop. An example of perceived misuse, though cultural and geopolitical in nature, can be found in the recently reported news that ‘Scientists in China Claim They Developed AI to Measure People’s Loyalty to the Chinese Communist Party.’ There should be embedded ethics and attention to environmental, social and governance concerns as part of the tech development process. Automation is needed to remove friction; however, this tech should have ‘smart governance’ capability, with defined and agreed-upon ethics (understanding that the latter is highly contextual).”

Barry Chudakov, founder and principal, Sertain Research, wrote, “We will need a new menu of actions and reactions which we collectively agree do not compromise agency if we turn them over to AI. We can then, cautiously, automate these actions. I am not prepared to list which key decisions should or should not be automated (beyond simple actions like answering a phone) until we have fully examined agency in an historical context; only then are we prepared to consider tool logic and how humans have previously entrained with that logic while not acknowledging our shared consciousness with our tools; and only then are we ready to examine how to consider which decisions could be mostly automated.

“We need a global convention of agency. We are heading toward a world where digital twinning and the metaverse are creating entities which will function both in concert with humans and apart from them. We have no scope or ground rules for these new entities. Agency is poised to become nuanced with a host of ethical issues. The threat of deepfakes is the threat of stolen agency; if AI in the hands of a deepfaker can impersonate you – to the degree that people think the deepfake is you – your agency has vanished. The cultural backdrop of techno agency reveals other ethical quandaries which we have not properly addressed.”

Steven Miller, former professor of information systems at Singapore Management University, responded, “New efforts are generating a growing following for designing and deploying AI-enabled systems for both augmentation and automation that are human-centered and that adhere to principles of ‘responsibility.’ There is a growing recognition of the need for ‘human-centered AI’ as per the principles enunciated in Ben Shneiderman’s 2022 book on this [“Human-Centered AI”], as illustrated by the advocacy and research of Stanford’s Institute for Human-Centered AI and as demonstrated by growing participation in AI Fairness, Accountability and Transparency (FAccT) communities and efforts, and many other initiatives addressing this topic.”

Gary Arlen, principal at Arlen Communications, commented, “In 2035, AI – especially AI designed by earlier AI implementations – may include an opt-out feature that enables humans to override computer controls. Regulations may be established in various categories that prioritize human vs. machine decisions. Primary categories will be financial, medical/health, education, maybe transportation. Human input will be needed for moral/ethical decisions, but (depending on the political situation) such choices may be restricted. What change might this bring in human society? That all depends on which humans you mean. Geezers may reject such machine interference by 2035. Younger citizens may not know anything different than machine-controlled decisions. In tech, everything will become political.”

Kurt Erik Lindqvist, CEO and executive director of the London Internet Exchange, wrote, “Absent breakthroughs in the underlying math supporting AI and ML, we will continue to gain from the advances in storage and computing, but we will still have narrow individual applications. We will see parallel AI/ML functions cooperating to create a seamless user experience where the human interaction will be with the system, assisted by guidance from each individual automated decision-making function. Through this type of automated decision-making, many routine tasks will disappear from our lives.”

We’ll continue to have the computers do the grunt work of poring through data but will continue to need experts to look at the conclusions drawn from AI analysis and do reality and gut checks for where they may have gone astray.

Valerie Bock, principal at VCB Consulting

Valerie Bock, principal at VCB Consulting, observed, “What we find, time and time again, is that the most accurate models are the ones in which there are multiple places for humans to intervene with updated variables. A ‘turnkey’ system that uses a pre-programmed set of assumptions to crank out a single answer is much too rigid to be useful in the dynamic environments in which people operate. I do not believe we are going to find any important decisions that we can fully and reliably trust only to tech. Computers are wonderful at crunching through the implications of known relationships. That’s one thing, but where there is much that is uncertain, they are also used to test what-if scenarios. It is the human mind that is best attuned to ask these ‘what if’ questions. One of the most useful applications for models, computerized and not, is to calculate possible outcomes for humans to consider. Quite often, knowledgeable humans considering model predictions feel in their gut that a model’s answer is wrong. This sets off a very useful inquiry as to why. Are certain factors weighted too heavily? Are there other factors which have been omitted from the model? In this way, humans and computers can work effectively together to create more realistic models of reality, as tacit human knowledge is made explicit to the model. We’ve learned that AI that is programmed to learn from databases of human behavior learns human biases, so it’s not as easy as just letting it rip and seeing what it comes up with. I expect we’ll continue to have the computers do the grunt work of poring through data but will continue to need experts to look at the conclusions drawn from AI analysis and do reality and gut checks for where they may have gone astray. It has been possible for decades to construct spreadsheets that model complex decision-making.”
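
Bock’s intervention points and gut checks amount to a what-if loop: rerun a simple model under human-supplied changes to its assumptions and compare the outcomes. The spreadsheet-style sketch below makes that loop concrete; the margin model and every number in it are invented for illustration:

```python
def projected_margin(units: int, price: float, unit_cost: float, fixed_cost: float) -> float:
    """A deliberately simple model: contribution margin after fixed costs."""
    return units * (price - unit_cost) - fixed_cost

baseline = dict(units=10_000, price=12.0, unit_cost=7.5, fixed_cost=30_000)

# Human-supplied what-if scenarios: each overrides one baseline assumption.
scenarios = {
    "baseline":       {},
    "price pressure": {"price": 10.5},
    "supply shock":   {"unit_cost": 9.0},
}

for name, overrides in scenarios.items():
    outcome = projected_margin(**{**baseline, **overrides})
    print(f"{name:15s} -> {outcome:>10,.0f}")
```

When an output feels wrong to a knowledgeable human, the next step is exactly the inquiry Bock describes: inspect which variable or weighting drove the result and update the model accordingly.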

Dan McGarry, journalist, editor and investigative reporter, suggested, “Machine learning and especially training of ML services require a kind of input to which most people are unaccustomed. The closest people today come to interacting with learning algorithms are the ‘Like,’ ‘Block’ and ‘Report’ buttons they see online. That communication and information exchange will have to involve a great deal more informed consent from individuals. If that happens, then it may become possible to train so-called AIs for numerous tasks. This interaction will, of necessity, take the form of a conversation – in other words, a multi-step, iterative communication allowing a person to refine their request and the ‘AI’ to refine its suggestions. As with all relationships, these will, over time, become based on nonverbal cues as well as explicit instructions.

“Machine learning will, eventually, become affordable to all, and initiate fundamental changes in how people interact with ‘AIs.’ If and when that transpires, it may become possible to expand a person’s memory, their capacity for understanding, and their decision-making ability in a way that is largely positive and affirming, inclusive of other people’s perspectives and priorities. Such improvements could well transform all levels of human interaction, from international conflict to governance to day-to-day living. In short, it will not be the self-driving car that changes our lives so much as our ability to enhance our understanding and control over our minute-to-minute and day-to-day decisions.”

Christian Huitema, 40-year veteran of the software and internet industries and former director of the Internet Architecture Board, said humans should be involved in reviewing machine decisions, writing, “Past experience with technology deployment makes me dubious that all or even most developers will ‘do the right thing.’ We see these two effects colliding today, in domains as different as camera auto-focus, speed-enforcement cameras, and combat drones. To start with the most benign scenario, camera makers probably try to follow the operator’s desires when focusing on a part of an image, but a combination of time constraints and clumsy user-interaction design often proves frustrating. These same tensions will likely play out in future automated systems. Nobody believes that combat drones are benign, and most deployed systems keep a human in the loop before shooting missiles or exploding bombs. I hope that this will continue, but for less-critical systems I believe designers are likely to take shortcuts, as they do today with cameras. Let’s hope that humans can get involved after the fact and have a process to review the machines’ decisions. Autonomous driving systems are a great example of future impact on society. Human drivers often take rules with a grain of salt, doing rolling stops or driving a little bit above the speed limit. But authorities will very likely push back if a manufacturer rolls out a system that does not strictly conform with the law. Tesla already had to correct its ‘rolling stop’ feature after such pushback. Such mechanisms will drive society toward ‘full obedience to the laws,’ which could soon become scary.”
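Huitema’s hope that humans “get involved after the fact” suggests a simple audit pattern: log every automated decision and queue the uncertain ones for a person. The sketch below is our illustration under stated assumptions – the confidence field, the 0.9 threshold and the record layout are all hypothetical, not a description of any deployed system.

```python
# A minimal sketch of after-the-fact review of machine decisions. The
# threshold and record fields are assumptions, not any real system's design.

from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float  # the system's own confidence, 0.0 to 1.0

@dataclass
class ReviewLog:
    threshold: float = 0.9
    queue: list[Decision] = field(default_factory=list)

    def record(self, decision: Decision) -> None:
        # Every decision stays auditable; uncertain ones go to a human.
        if decision.confidence < self.threshold:
            self.queue.append(decision)

log = ReviewLog()
log.record(Decision("stop-sign approach", "rolling stop", confidence=0.55))
log.record(Decision("lane change", "completed", confidence=0.98))
print(f"{len(log.queue)} decision(s) queued for human review")
```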

Pat Kane, futurist and consultant at Pat Kane Global, predicted, “It’s obvious that the speed and possibility space of computation is bringing extraordinary new powers of material-shaping to humans’ hands. See AlphaFold’s 200 million protein-shape predictions. How can we bring a mind-modelling articulacy to the communication of these insights, and their workings, indeed putting their discoveries at the service of human agency? The recent lesson from the Blake Lemoine LaMDA incident, reinforced by the Google AI executive Blaise Agüera y Arcas, is that advanced machine learning has a ‘modelling-of-mind’ tendency, which makes it want to respond to its human interlocutors in a sympathetic and empathetic manner. This particular evolution of AI may well be peculiarly human-centered.”

Philip J. Salem, professor of communications studies and faculty emeritus at Texas State University, wrote, “Most AI designers and researchers are sensitive to many issues about human decision-making and the modeling of this in AI, and most of them will design AI with a sense of individual agency. One thing I am worried about is what they don’t know. What they don’t know well are the social constraints people routinely create for each other when they communicate. The people who design AI need training in human communication – in dialogue – and they need to be more mindful of how that works.

“Many people have experiences with social media that are more about presentations and performance than about sustaining dialogue. Many people use these platforms as platforms – opportunities to take the stage. People’s uses of these platforms, not the technologies, are the problem. Their use is nearly reflexive, moving fast, with little time for reflection or deliberation. What I am afraid of is the ways in which the use of future AI will simulate human communication and the development of human relationships. The communication will be contrived, and the relationships will be shallow. When the communication is contrived and the relationships are shallow, the society becomes brittle. When the communication is contrived and relationships are shallow, psychological well-being becomes brittle. Human communication provides the opportunities for cognitive and emotional depth. This means there are risks for incredible sadness and incredible bliss. This also means there are opportunities for resilience. Right now, many people are afraid of dialogue. Providing simulated dialogue will not help. Making it easier for people to actually connect will help.”

A share of these experts expressed concerns that technology design will not be improved between 2022 and 2035

Alan S. Inouye, senior director for public policy and government relations at the American Library Association, cited limitations to design advances in some spaces, also mentioning where they are most likely to occur. “Fundamentally, system designers do not currently have incentive to provide easy control to users. Designers can mandate the information and decision-making flow to maximize efficiency based on huge data sets of past transactions and cumulative knowledge. User intervention is seen as likely to decrease such efficiency and so it is discouraged. Designers also have motivations to steer users in particular directions. Often these motivations derive from marketing and sales considerations, but other motivations are applicable, too (e.g., professional norms, ideology or values). Thus, the ability for users to be in control will be discouraged by designers for motivational reasons.

“As the political context has become yet more polarized, the adoption of new laws and regulations becomes only more difficult. Thus, in the next decade or two, we can expect technology development and use to continue to outpace new or revised laws or regulations, quite possibly even more intensely than in the last two decades. So, there will be only modest pressure from the public policy context to mandate that design implements strong user control. (The exception to this may occur if something really bad becomes highly publicized.)

“I do believe that areas for which there are already stronger user rights in the analog world will likely see expansion to the digital context. This will happen because of the general expectations of users, or complaints or advocacy if such control is not initially forthcoming from designers. Some domains such as safety, as in vehicle safety, will accord considerable user control. People have particular expectations for control in their vehicles. Also, there is a well-developed regulatory regime that applies in that sector. Also, there are considerable financial and reputational costs if a design fails or is perceived to have failed to accommodate reasonable user controls.”

Rather than understanding this as a binary relationship between humans vs. machines, systems that allow for greater flexibility, modularity and interoperability will be key to supporting human agency.

Laura Forlano, director of the Critical Futures Lab, Illinois Institute of Technology

Laura Forlano, director of the Critical Futures Lab, Illinois Institute of Technology, an expert on the social consequences of technology design, said, “It is highly likely, due to current design and engineering practices, that decisions about what is too much or what is too little automation will never be clearly understood until the autonomous systems are already deployed in the world. In addition, due to current design and engineering practices, it is very likely that the people who must use these systems the most – as part of their jobs – especially if they are in customer-facing and/or support roles with less power – will never be consulted in advance about how best to design these systems to make them most useful in each setting. The ability for primary users to inform the design processes of these systems in order to make them effective and responsible to users is extremely limited.

“Rather than understanding this as a binary relationship between humans vs. machines, systems that allow for greater flexibility, modularity and interoperability will be key to supporting human agency. Furthermore, anticipating that these systems will fail as a default and not as an aberration will allow for human agency to play a greater role when things do go wrong.”
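Forlano’s point about anticipating failure “as a default and not as an aberration” maps onto a familiar design pattern: the automated path must positively prove it applies, and everything else – errors, timeouts, unrecognized input – falls through to a person. The sketch below is our illustration of that pattern; the function names and the “routine” request type are hypothetical.

```python
# A minimal sketch of "failure as the default": automation must opt in,
# and any surprise degrades to a human operator. All names are illustrative.

def automated_path_applies(request: dict) -> bool:
    return request.get("type") == "routine"  # automation must positively qualify

def automated_path(request: dict) -> str:
    return f"auto-resolved {request['id']}"

def escalate_to_human(request: dict) -> str:
    return f"queued {request['id']} for a human operator"

def handle_request(request: dict) -> str:
    try:
        if not automated_path_applies(request):
            return escalate_to_human(request)
        return automated_path(request)
    except Exception:
        # Failure is the expected case, not an aberration.
        return escalate_to_human(request)

print(handle_request({"id": "r1", "type": "routine"}))  # auto-resolved r1
print(handle_request({"id": "r2", "type": "novel"}))    # queued r2 for a human
```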

Deirdre Williams, an independent internet governance consultant based in the Caribbean, said, “I don’t believe that humans will not be in control of important decision-making, but I also don’t believe that the behaviour of designers is likely to change very much. In the technologically disadvantaged parts of the world, we are not very good at collecting data or handling it with accuracy. Decision-making software needs good data. It doesn’t work properly without it. When it doesn’t work properly it makes decisions that hurt people. There is a tendency to forget or not to acknowledge that data is history not prophecy; that it is necessary to monitor ALL of the variables, not just the ones humans are aware of – to note that patterns shift and things change, but not necessarily on a correctly perceived cycle.”

The future will feature both more and less human agency, and some advantages will be clear

A share of the experts who responded that there WILL be some ease of agency in the tech-enabled future said these individual freedoms will be unevenly distributed across humanity. While many of the experts who selected “yes” – predicting that human agency will gain ground by 2035 – made statements that fall into this category, quite a few in the “no” column also said they expect the digital future to continue to feature broad inequalities.

Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, responded, “The future will be unevenly distributed. Positive progress will be made only in those countries where a proper system of rules based on the respect of human rights is put in place. A large part of the world’s population living outside of democracies will be under the control of automated systems that serve only the priorities of the regional regime. Examples include the massive use of facial recognition in Turkey and the ‘stability maintenance’ mechanisms in China. Also, in the countries where profit-based priorities are allowed to overrule human rights such as privacy or respect for minorities, the automated systems will be under the control of corporations. I believe the U.S. will probably be among those in this second group.

In the countries with human rights-compliant regulation, greater agency over human-interest decision-making may come in the realms of life-and-death situations, health- and justice-related issues, some general-interest and policymaking situations, and in arbitration between different societal interests (e.g., individuals vs. platforms).

Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction

“In the countries with human rights-compliant regulation, greater agency over human-interest decision-making may come in the realms of life-and-death situations, health- and justice-related issues, some general-interest and policymaking situations, and in arbitration between different societal interests (e.g., individuals vs. platforms). In countries that respect human rights, automated decisions will generally be turned to in cases in which the speed, safety and/or complexity of the process requires them. Examples include the operation of unmanned vehicles, production and distribution of goods based on automatic data collection, and similar.

“Perhaps one of the most likely broad societal changes in a future with even more digitally enhanced decision-making is that – similarly to what happened with the introduction of pocket calculators, navigation systems and other innovations that in the past brought a loss of mental calculation capacity, of orientation in space, of the ability to repair simple objects – most of the humanity will find their skills to be significantly degraded.”

Cindy Cohn, executive director of the Electronic Frontier Foundation, wrote, “I expect that some humans will likely be in control of important decision-making for themselves. I do not think that this amount of control will be possessed by all humans. As with all technologies, the experience of them and the amount of control that is exercised will be different for different people and will likely track the power those people have in society. Marginalized people will likely be subjected to a range of decisions about their lives that are made by machines, bots and systems, with little control. I expect that this will be the case in situations involving public support; access to health care and necessities such as food, energy and water; law enforcement; and national security.”

Rasha Abdulla, professor of journalism and communication at The American University in Cairo, Egypt, commented, “An important aspect of now and the future is how such technology will be used across different regions of the world where people live under authoritarian rule. While I think consumer-oriented products will be better designed in the future to make life easier, mostly with the consumer in control, I worry about what products with broader use or influence by authoritarian governments or systems will be like. It’s one thing to talk about coffee makers and self-driving cars and another to talk about smart surveillance equipment.”

The director of an institute for bioethics, law and policy said, “I think most humans would be very troubled by the prospect of machines making decisions over vital human interests like how health care or other societal goods are allocated. There will undoubtedly be pressure to grant greater decision-making responsibility to machines under the theory that machines are more objective, accurate and efficient. I hope that humans can resist this pressure from commercial and other sources, so that privacy, autonomy, equity and other values are not eroded or supplanted.”

Mike Silber, South African attorney and head of regulatory policy at Liquid Intelligent Technologies, wrote, “A massive digital divide exists across the globe. Certainly, some people will have tech-abetted decision-making assist them, others will have it imposed on them by third-party decision-makers (governments, banks, network providers) and yet others will continue to remain outside of the technology-enabled space.”

Irina Raicu, director of the internet ethics program at the Markkula Center for Applied Ethics, Santa Clara University, commented, “The answer will vary in different countries with different types of governments. Some autocratic governments are deploying a lot of technologies without consulting their citizens, precisely in order to limit those citizens’ decision-making abilities. It’s hard to know whether activists in such countries will be able to keep up, to devise means by which to circumvent those technological tools by 2035 in order to maintain some freedom of thought and action. But rights will be stunted in such circumstances, in any case. In other countries, such as the U.S. and various countries in the EU, for example, we might see humans being more in control in 2035 than they are now – in part because by then some algorithmic decision-making and some other specific tech applications might be banned in some key contexts. As more people understand the limitations of AI and learn where it works well and where it doesn’t, we will be less likely to treat it as a solution to societal problems.

“Other forces will shape the tech ecosystem, too. For example, the Supreme Court decision in Dobbs is already prompting a reevaluation of some data-collection, use, and protection decisions that had been seen (or at least presented by some companies) as generally accepted business practices. Redesigning the online ecosystem in response to Dobbs might strengthen human agency in a variety of contexts that have nothing to do with abortion rights.”

The director of a U.S. military research group wrote, “2035 is likely to see a muddied (or muddled) relationship between technology and its human overlords. While humans will be mostly in control of decision-making using automated systems in 2035, the relationship between humans and automated systems will likely be mixed. While some humans will likely adopt these systems, others may not. There is currently distrust of automated systems in some segments of society as evidenced by distrust of ballot-counting machines (and the associated movement to only count them by hand), distrust of automated driving algorithms (despite them having a better track record per driven mile than their human counterparts), etc. There are enough modern-day Luddites that some technologies will have to be tailored to this segment of the population.”

Jeff Johnson, a professor of computer science at the University of San Francisco who previously worked at Xerox, HP Labs and Sun Microsystems, wrote, “Some AI systems will be designed as ‘smart tools,’ allowing human users to be the main controllers, while others will be designed to be almost or fully autonomous. I say this because some systems already use AI to provide more user-friendly control. For example, cameras in mobile phones use AI to recognize faces, determine focal distances and adjust exposure. Current-day consumer drones are extremely easy to fly because AI software built into them provides stability and semi-automatic flight sequences. Current-day washing machines use AI to measure loads, adjust water usage and determine when they are unbalanced. Current-day vehicles use AI to warn of possible obstacles or unintended lane-changes. Since this is already happening, the use of AI to enhance ease of use without removing control will no doubt continue and increase. On the other hand, some systems will be designed to be highly – perhaps fully – autonomous. Some autonomous systems will be beneficial in that they will perform tasks that are hazardous for people, e.g., find buried land mines, locate people in collapsed buildings, operate inside nuclear power plants, operate under water or in outer space. Other autonomous systems will be detrimental, created by bad actors for nefarious purposes, e.g., delivering explosives to targets or illegal drugs to dealers.”

J. Meryl Krieger, senior learning designer at the University of Pennsylvania, said, “It’s not the technology itself that’s at issue, but who has access to it. Who are we considering to be ‘people’? People of means will absolutely have control of decision-making relevant to their lives, but the disparities in technology access need to be addressed. This issue has been in front of us for most of the past two decades, but there’s still so much insistence on technology as a luxury – to be used by those with the economic means to do so – that the reality of it as a utility has still not been sorted out. Until internet access is regulated like telephone access, or power or water access, none of the bots and systems in development or in current use are relevant to ‘people.’ We’re still treating this like a niche market and assuming that this market is all ‘people.’”

Janet Salmons, consultant with Vision2Lead, said, “The accelerating rollout of tech-abetted, often autonomous decision-making will widen the divide between haves and have-nots, and further alienate people who are suspicious of technology. And those who are currently rejecting 21st-century culture will become more angry and push back – perhaps violently.”

A distinguished professor of information studies at a major California technological university said, “It will further divide affluent global north countries from disadvantaged nation-states. It will also take over many people’s driving, shopping, the ordering of consumer products. I see this to be most unfortunate all around.”

Jill Walker Rettberg, professor of digital culture at the University of Bergen, Norway, and principal investigator of the project Machine Vision in Everyday Life, replied, “In 2035 we will see even greater differences between different countries than we do today. How much agency humans will have is entirely dependent upon the contexts in which the technologies are used.

“In the U.S., technologies like smart surveillance and data-driven policing are being implemented extremely rapidly as a response to crime. Because machine learning and surveillance are less regulated than guns or even traffic-calming measures (like adding cul-de-sacs to slow traffic instead of installing automated license plate readers), they are an easy fix – or simply the only possible action that can be taken to try to reduce crime, given the stalled democratic system and deep inequality in the U.S. In Europe, these technologies are much more regulated, and people trust each other and government more, so using tech as a Band-Aid on the gaping wound of crime and inequality is less attractive.

“Another example is using surveillance cameras in grocery stores. In the U.S., Amazon Fresh has hundreds of cameras in stores that are fully staffed anyway and the only innovation for customers is that they don’t have to manually check out. In Norway and Sweden, small family-owned grocery stores in rural villages are using surveillance cameras so the store can expand its opening hours or stay in business at all by allowing customers to use the store when there are no staff members present. This is possible without AI through trust and a remote team responding to customer calls. The same is seen with libraries. In Scandinavia, extending libraries’ opening hours with unstaffed time is common. In the U.S., it’s extremely rare because libraries are one of the few public spaces homeless people can use, so they are a de facto homeless shelter and can’t function without staff.”

Mark Perkins, co-author of the International Federation of Library Associations’ “Manifesto on Transparency, Good Governance and Freedom from Corruption,” commented, “Those with tech knowledge/education will be able to mitigate the effects of the ‘surveillance economy,’ those with financial means will be able to avoid the effects of the ‘surveillance economy,’ while the rest will lose some agency and be surveilled. For most humans – ‘the masses’ – key decisions such as creditworthiness, suitability for a job opening, even certain decisions made by tribunals, will be automated by autonomous and artificial intelligence, while those with the financial means will be able to get around these constraints. Unlike in the case of privacy settings, however, I think technical workarounds for retaining control/agency will be much less available/effective.”

Steven Miller, former professor of information systems at Singapore Management University, responded, “There is no Yes vs. No dichotomy as to whether smart machines, bots and systems powered by AI will be designed (Yes) or will not be designed (No) to allow people to more easily be in control of most tech-aided decision-making that is relevant to their lives. Both approaches will happen, and they will happen at scale. In fact, both approaches are already happening. We are already observing dynamic tension between the Yes and No approaches, and we already see examples of the negative power of not designing AI-enabled systems to allow people to more easily be in control of their lives. As we proceed to the year 2035, there will be an increasingly strong dynamic tension between institutions, organizations and groups explicitly designing and deploying AI-enabled systems to take advantage of human ‘ways’ and limitations in order to indirectly influence or overtly control people, versus those that are earnestly trying to provide AI-enabled systems that allow people to more easily be in control of not only tech-aided decision-making, but nearly all aspects of decision-making that is relevant to their lives.

“No one knows how these simultaneous and opposing forces will interact and play out. It will be messy. The outcome is not pre-determined. There will be a lot of surprises and new scenarios beyond what we can easily imagine today. Actors with ill intent will always be on the scene and will have fewer constraints to worry about. We just need to do whatever we can to encourage and enable a broader range of people involved in creating and deploying AI-enabled systems – across all countries, across all political systems, across all industries – to appropriately work within their context and yet to also pursue their AI efforts in ways that move in the direction of being ‘human-centered.’ There is no one definition of this. AI creators in very different contexts will have to come to their own realizations of what it means to create increasingly capable machines to serve human and societal needs.”

An open-access advocate and researcher based in South America wrote, “For design reasons, much of today’s technology – and future technology – will come with default configurations that cannot be changed by users. I don’t doubt that there will be more machines, bots and AI-driven systems by 2035, but I don’t think they will be equally distributed around the world. Nor do I believe that people will have the same degree of decision-making power vis-à-vis the use of such technologies. Unfortunately, by 2035 the access gap will still be significant for at least 30% of the population, and the gap in the use and appropriation of digital technologies will be much wider. In this scenario, human decision-makers will be in the minority. The management and distribution of public and private goods and services will possibly be automated to optimize resources. Along these lines, direct human intervention is required to balance possible inequalities created by automation algorithms, to monitor possible biases present in the technology, to create new monitoring indicators so that these systems do not generate further exclusion, and to make decisions that mitigate possible negative impacts on the exercise of human rights. It is important that pilots are carried out and impacts are evaluated before massive implementations are proposed. All stakeholders should be consulted, and there should be periodic control and follow-up mechanisms.”
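The “monitoring indicators” this respondent calls for can start very simply, for instance by comparing an automated system’s outcomes across groups and flagging large gaps for human review. The sketch below is ours; the data is invented, and the four-fifths cutoff is a common rule of thumb used here as an assumption, not something the respondent specifies.

```python
# A minimal sketch of a monitoring indicator for automated decisions:
# compare approval rates across groups and flag large gaps for human
# review. Data is invented; the 0.8 cutoff is a rule-of-thumb assumption.

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

decisions = [  # (group, approved) pairs from a hypothetical automated system
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
]
rate_a = approval_rate(decisions, "A")  # 0.67
rate_b = approval_rate(decisions, "B")  # 0.33
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
if ratio < 0.8:  # "four-fifths" rule of thumb
    print(f"Flag for human review: approval ratio across groups is {ratio:.2f}")
```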

A UK-based expert in social psychology and human communication responded, “Terry Gilliam’s movie ‘Brazil’ was quite prescient. Tools that emerge with a sociotechnocratic line from the behavioural sciences will ensure that control is not evenly distributed across society, and the control in question will probably be quite clumsy. And why aren’t political analysts up on this? Where are the political scientists and philosophers who should be helping us with this? Probably still mithering around about what someone said and meant in the 19th century.”