The next two sections of this report include additional comments from experts, organized under the most common themes found in their responses. These remarks generally echo the sentiments expressed by the experts whose comments are included in earlier sections of this report.

This chapter includes a selection of responses to the question, “As you look ahead to the year 2035, what are the most harmful or menacing changes in digital life that are likely to occur in digital technology and humans’ use of digital systems?

Some 37% of the 305 experts who responded to this survey said they are more concerned than excited about what today’s trends say about where developments are headed over the next dozen years, and 42% said they are equally concerned and excited. Only 18% said they are more excited than concerned. The canvassing invited them to respond to five categories of impact. Here are the themes they struck:

  • Human-centered development of digital tools and systems: The experts who addressed this fear wrote about their concern that digital systems will continue to be driven by profit incentives in economics and power incentives in politics. They said this is likely to lead to advanced surveillance and data collection aimed at controlling people rather than empowering them to act freely, share ideas and protest injuries and injustices. These experts worry that ethical design will continue to be an afterthought and digital systems will continue to be released before being thoroughly tested. They believe the impact of all of this is likely to increase inequality and compromise democratic systems.
  • Human rights: These experts fear new threats to rights will arise as privacy becomes harder, if not impossible, to maintain. They cite surveillance advances, sophisticated bots embedded in civic spaces, the spread of deepfakes and disinformation, advanced facial recognition systems, and widening social and digital divides as looming threats. They foresee crimes and harassment spreading more widely, and the rise of new challenges to humans’ agency and security. A topmost concern is the expectation that increasingly sophisticated AI is likely to lead to jobs loss, resulting in a rise in poverty and the diminishment of human dignity.
  • Human knowledge: They fear that the best of knowledge will be lost or neglected in a sea of mis- and disinformation, that the institutions previously dedicated to informing the public will be further decimated, and that facts will be increasingly hard to find amidst a sea of entertaining distractions, bald-faced lies and targeted manipulation. They worry that people’s cognitive skills will continue to decline. In addition, they argued that “reality itself is under siege” as emerging digital tools convincingly create deceptive or alternate realities. They worry a class of “doubters” will hold back progress.
  • Human health and well-being: A share of these experts said humanity’s embrace of digital systems has already spurred high levels of anxiety and depression and predicted things could worsen as technology embeds itself further in people’s lives and social arrangements. Some of the mental and physical problems could stem from tech-abetted loneliness and social isolation; some could come from people substituting tech-based “experiences” for real-life encounters; some could come from job displacements and related social strife; and some could come directly from tech-based attacks.
  • Human connections, governance and institutions: The experts who addressed these issues fear that norms, standards and regulation around technology will not evolve quickly enough to improve the social and political interactions of individuals and organizations. Two overarching concerns: a trend toward autonomous weapons and cyberwarfare and the prospect of runaway digital systems. They also said things could worsen as the pace of tech change accelerates. They expect that people’s distrust in each other may grow and their faith in institutions may deteriorate. This, in turn, could deepen already undesirable levels of polarization, cognitive dissonance and public withdrawal from vital discourse. They fear, too, that digital systems will be too big and important to avoid, and all users will be captives.

Many of the comments cited in earlier parts of this report reflect the ideas shared in these themes. What follows are additional overall comments from experts on the harmful or menacing evolution of humans and digital tools and systems by 2035.

Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University, said, “Some of the more concerning impacts in digital life in the next decade could include techno-authoritarian abuses of human rights, continued social and political fracturing augmented by technology and mis-/disinformation, missteps in social AI and social robotics and calcification of subpar governance regimes that preclude greater paradigm shifts in human digital life. As often occurs with emerging technology, we may see innovations introduced without sufficient testing and consideration, leading to scandals and harms as well as more intentional abuses by hostile actors.

“Perhaps the most menacing manifestation of harmful technology would be the realization of hyper-effective surveillance regimes by state actors in authoritarian countries, with associated tools also shared to other countries by state actors and unscrupulous firms. It’s already clear that immense human data production coupled with biometrics and video surveillance can create environments that severely hobble basic human freedoms. Even more worrisome is that the sophistication of digital technologies could lead techno-authoritarian regimes to be so effective that they even cripple prospects for public feedback, resistance and protest, and change altogether.

“Pillars of societal change such as in-person and digital assembly, sharing of ideas inside and outside of borders and institutions of higher education serving as hubs of reform could disappear in the worst case. To the extent that nefarious regimes are able to track and predict dissident ideas and individuals, deeply manipulate information flow and even generate new forms of targeted persuasive disinformation and instill fear, some corners of the world could be locked into particularly horrific status quos. Even less successful efforts here are likely to harm basic human freedoms and rights, including of political, gender, religious and ethnic minorities.

“Another fear imagined throughout subsequent historical waves of technology is dehumanization and dissolution of social life through technology (e.g., radio, television, Internet). Yet these fears do not feel anti-scientific, as we have watched the collapsing trust in news media, proliferation of misinformation and disinformation via social media platforms, and fracturing of political groups leading to new levels of affective polarization and outgroup dehumanization in recent decades.

“Misinformation in text or audio-visual formats deserves a special call-out here. I might expect ongoing waves of scandal over the next years as various realistic generative capabilities become democratized, imagined harms become realized (in fraud, politics, violence), and news cycles try to make sense of these changes. The next scandal or disaster owing to misinformation seems just around the corner, and many such harms are likely happening that we are not aware of.

“There are other reasons to expect digital technology to become more individualized and vivid. Algorithmic recommendations are likely to become more accurate (however accuracy is defined), and increased data, including potentially biometric, physiological, synthetic and even genomic data may feature into these systems. Meanwhile, bigger screens, clever user experience design, and VR and AR technologies could make these informational inputs feel all the more real and pressing.

“Pessimistically speaking, this means that communities that amplify our worst impulses and prey upon our weaknesses, and individuals that preach misinformation and hate are likely to be more effective than ever in finding and persuading their audiences. Fortunately, there are efforts to dissipate in combat these trends in current and emerging areas of digital life, but several decades into the Internet age, we have not yet gotten ahead of bad actors and the sometimes surprising negative emergent and feedback effects. We might expect a continuation of some of the negative trends enabled by digital technology already in the 21st century, with new surprises to boot.

“The power of social technologies like virtual assistants and large language models has also started to become clear to the mass public.

“In the next decade, it seems likely to me that we will have reached a tipping point where social AI or embodied robots become widely used in settings like education, health care and elderly care. Benefits aside, these tools will still be new, and their ethical implications are only starting to be understood. Empirical research, best practices and regulation will need to play catch-up. If these tools are rolled out too quickly, the potential to harm vulnerable populations is greater. Our excitement here may be greater than our foresight.

“And unfortunately, more technology and innovation seem poised to exacerbate inequality (on some important measures) under our current economic system. Even as we progress, many will remain behind. This might be especially true if AI causes acceleration effects, granting additional power to big corporations or companies due to network/data effects, and if international actors do not work tirelessly to ensure that benefits are distributed rather than monopolized. One unfortunate tendency is for rights and other beneficial protections to lag in low-income countries; an unscrupulous corporation may be banned from selling an unsafe digital product or using misleading marketing in one country and decide that another unprotected market exists in a lower-income corner of the world.

“The same trends hold for misinformation and content moderation, for digital surveillance, and for unethical labor practices used to prop up digital innovation. What does the periphery look like in the AI era? To prevent some of the most malicious aspects of digital change, we must have a global lens.

“Finally, I fear that the optimists of the age may not find the most creative and beneficial reforms take hold. Regulatory efforts that aim to center human rights and well-being may fall somewhat to the banalities of trade negotiations and the power of big technology companies. Companies may become better at ethical design, but also better at marketing it, and it remains unclear how much the public knows whether a digital tool and its designer are ethical or trustworthy. It seems true that there is historically high attention to issues like privacy, cybersecurity, digital misinformation, deepfakes, algorithmic bias and so on.

“Yet even for areas where experts have identified best practices for years or decades, economic and political systems are slow to change, and incentives and timelines remain deeply unaligned to well-being. Elections continue to be run poorly, products continue to be dangerous and those involved continue to find workarounds to minimize the impact of governance reforms on their bottom line.

“In the next decade, I would hope to see several major international reforms take hold, such as privacy reforms like GDPR maturing in their implementation and enforcement, and perhaps laws like the EU AI Act start to have a similar impact. Overall, however, we do not seem poised for a revolution in digital life. We may have to content ourselves with the hard work required for slow iteration and evolution instead.”

Jim Spohrer, board member of the International Society of Service Innovation Professionals, previously a longtime IBM leader, wrote, “Many challenges are emerging due to the ongoing advances in humans’ uses of digital technology.

  1. “There is a lack of accountability for criminals involved in cybersecurity breaches/scams that may slow digital transformation of adoption of digital twins for all responsible actors. For example, Google and other providers are unable to eliminate all the Gmail spam and phishing emails – even though their AI does a good filtering job identifying spam and phishing. The lack of ‘human-like dynamic, episodic memory’ capabilities for AI systems slows the adoption of digital-twin ownership by individuals and the development of AI systems with commonsense reasoning capabilities.
  2. “The winner-take-all mindset in all competitive and developmental settings rather than the type of balanced collaboration that is necessary is dominant in business and geopolitics of the U.S., Russia, China, India and others.
  3. “A general resistance to welcoming immigrants by providing accelerated pathways to productive citizenship is causing increasing tensions between regions and wastes enormous amounts of human potential.
  4. “Models show that it is likely that publishers will be slow to adopt open-science disruptions.
  5. “It is expected that mental illness, anxiety, depression exacerbated by loneliness will become the number one health challenge in all societies with elderly-dominant populations.
  6. “A lack of focus on geothermal solutions due to oil company interest in a hydrogen economy is expected to slow local energy independence.”

Frank Bajak, cybersecurity investigations chief at The Associated Press, predicted, “The powerful technologies maturing over the next decade will be badly abused in much of the world unless the trend toward illiberal, autocratic rule is reversed. Surveillance technology has few guardrails now, though the Biden administration has shown some will for limiting it. Yet far too many governments have no qualms about violating their citizens’ rights with spyware and other intrusive technologies. Digital dossiers will be amassed widely by repressive regimes. Unless the United States suppresses the fascist tendencies of opportunist demagogues, the U.S. could become a major surveillance state. Much depends also on the European Union being able to maintain democracy and prosperity and contain xenophobia. We seem destined at present to see biometrics combined with databases – anchored in facial, iris and fingerprint collection – used to control human migration, prejudicing the Black and Brown of the Global South.

“I am also concerned about junk AI, bioweapons and killer robots. It will probably take at least a decade to sort out hurtful from helpful AI. Full autonomous offensive lethal weapons will be operative long before 2035, including drone swarms in the air and sea. It will be incumbent on us to forge treaties restricting the use of killer robots.

“Technology is not and never was the problem. Humans are. Technology will continue to imbue humans with god-like powers. I wish I had more faith in our better angels.

“AI will likely eventually make software, currently dismally flawed, much safer as security becomes central to ground-up design. This is apt to take more than a decade to shake out. I’d expect a few major computer outages in the meantime. We may also learn not to bake software into absolutely everything in our environment as we currently seem to be doing. Maybe we’ll mature out of our surveillance doorbell stage.”

Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, said, “My main worries are that acceptance of generative algorithms and autonomous systems will have severe consequences for human life and happiness. Autonomous weapon systems more openly used in today’s conflicts (such as Ukraine-Russia) will foster the acceptance of space-based and other more global weapon systems. Likewise, the current orgasmic fascination with generative AI will set us up for development of a much more impactful generation of experimental food, building materials, new organisms and modified humans.”

Kat Schrier, associate professor and founding director of the Games & Emerging Media program at Marist College, commented, “There are a number of large issues; these are just a few:

  1. “Systemic inequities are transmogrified by digital technologies (though these problems have always existed, we may be further harming others through the advent of these systems). For instance, problems might include biased representation of racial, gender, ethnic and sexual identities in games or other media. It also might include how a game or virtual community is designed and the cultural tone that is established. Who is included or excluded, by design?
  2. “Other ethical considerations, such as privacy of data or how interactions will be used, stored and sold.
  3. “Governance issues, such as how people report and gain justice for harms, how we prevent problems and encourage pro-social behavior, or how we moderate a virtual system ethically. The law has not evolved to fully adjudicate these types of interactions, which may also be happening across national boundaries.
  4. “Social and emotional issues, such as how people are allowed to connect or disconnect, how they are allowed to express emotions, or how they are able to express their identities through virtual/digital communities.”

Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California, formerly product manager at Meta and co-founder of Ranker.com, predicted, “A rogue state will build autonomous killing machines that will have disastrous unintended consequences. I also expect that the owners of capital will gain even more power and wealth due to advances in AI, such that the resulting inequality will further polarize and destabilize the world.”

Dean Willis, founder of Softarmor Systems, observed, “From a public policy and governance perspective, AI provides authoritarian governments with unprecedented power for detecting and suppressing non-conformant behavior. This is not limited to political and ideological behavior or position; it could quite possibly be used to enforce erroneous public health policies, environmental madness, or, quite literally, any aspect of human belief and behavior. AI could be the best ‘dictator kit’ ever imagined. Author George Orwell was an optimist, as he envisioned only spotty monitoring by human observers. Rather, we will face continuous, eternal vigilance with visibility into every aspect of our lives. This is beyond terrifying. Authoritarian AI coupled with gamification has the potential to produce the most inhumane human behavior ever imaged.”

Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, said, “Some of what follows on my list of worrisome areas may not seem digital at first blush, but everything is digital these days.

  • Armed conflict or the threat of conflict causes human and economic losses, and further impedes supply chains
  • Further decline in democratic institutions
  • Continued health crises (antibiotic resistant diseases, etc.)
  • Climate crisis leads to food crises/famine, migration challenges
  • Further growth of misinformation/disinformation
  • Massive breakdown of global supply chains for digital goods and (to a lesser degree?) services
  • The trade war U.S.-China increasingly drives a U.S.-EU trade war
  • Fragmentation of the internet due to geopolitical tensions
  • Further breakdown of global institutions, including the World Health Organization and World Trade Organization.”

Gary Grossman, senior vice president and global lead of the AI Center of Excellence at Edelman, said, “Perhaps because we can already feel tomorrow’s dangers in activities playing out today, the downside seems quite dramatic. Deepfakes and disinformation are getting a boost from generative AI technologies and could become pervasive, greatly undermining what little public trust of institutions remain. Digital addiction, already an issue for many who play video games, watch TikTok or YouTube videos, or who hang on every tweet, could become an even greater problem as these and other digital channels become even more personalized and appeal to base instincts for eyeballs.”

Pedro U. Lima, professor of computer science at the Institute for Systems and Robotics at the University of Lisbon, said, “I expect technology to develop in such a way that physical machines (aka, robots), not just virtual systems, will be developed to replace humans advantageously in dangerous, dull and dirty work. This will increase production, make work safer and create new challenges for humankind not thought of until then.”

A well-known professor of computational linguistics based at a major U.S. university commented, “My fears about digital technology all relate to how they are on a trajectory to overturn civil society and democracy. I am extremely concerned about the difficulties of verifying information from computer-generated content. (Imagine what havoc that can put onto legal proceedings, when doing data collection.) Although ML researchers will solve the problem of generative models producing incorrect information, that will not stop people with bad intentions from using these tools to generate endless incorrect content. Misinformation has become a weapon of destabilizing society, and that is likely to continue. I hope by 2035 we will have collectively come to a solution for how to handle this; both the distribution side (political and social forces needed here) and the detection side.

“Another major threat is future use of automated surveillance of individuals. This is already in place everywhere, even in the U.S., and will continue around the world. Since it can also be used to increase physical safety, automated monitoring will become ever more pervasive. The biggest threat of all is how easily a monitored society can be subdued into an autocracy, as well as how easily an individual can lose feelings of their own humanity by having no private space.

“Another threat is the increasing sophistication of automated weaponry. Of course, humans have always been engaged in an ‘arms race’; that is what that term means, and perhaps there will never be an end to that. But there are dangers of automation going unintentionally berserk with dire consequences. Related to this are the dangers surrounding the hacking of automated systems that control vehicles, water systems and other systems that can harm people if tampered with.

“I am not so concerned about the employment issues caused by automation, since modern history shows that society generally manages to adjust to changes in technology, with new opportunities arising. All of that said, if governments do not act to rein in the egregious inequalities of the modern economy, then this could lead to serious problems – not so much due to the automation as to the unequal distribution of the benefits of working.”

Following are separate themed sections delving further into many of the points mentioned above.

The experts whose comments are included in this category said they fear that the technology industry may fail to refocus its planning, design and overall business practices toward serving the common good ahead of profit and influence. Among their worries: human-centered, ethical design will remain an afterthought; tech companies will continue to release new technology before it has been thoroughly tested by the public; the design of AI tools and social platforms will continue to enable bad actors and authoritarian governments to endanger democratic institutions and human rights; and wealth disparity will grow as power and resources are further concentrated in the hands of Big Tech. Some still hold hope that governments, tech companies and other stakeholders might begin to take action sooner than later to better empower citizens and society overall.

Christopher Le Dantec, associate professor of digital media at Georgia Tech, predicted, “The next industrial revolution from AI and automation will further advance wealth disparity and undermine stable economic growth for all. The rich will continue to get vastly richer. No one will be safe; everyone will be watched by someone/thing. Every aspect of human interaction will be commodified and sold, with value extracted at each turn. The public interest will fall to private motivation for power, control, value extraction.

“Social media and the larger media landscape will continue to entrench and divide. This will continue to challenge political discourse, but science and medical advances will also suffer as a combination of outrage-driven revenue models and foreign actors advance mis- and disinformation to advance their interests.

“The tech sector will face a massive environmental/sustainability crisis as labor revolts spread through regions like China and India, as raw materials become more expensive, and as the mountain of e-waste becomes unmanageable.

“Ongoing experiments in digital currency will continue to boom and bust, concentrating wealth in venture and financial industries; further impoverishing late-come retail investors; and adding to a staggering energy and climate crisis.

“Activists, journalists and private citizens will come under increased scrutiny and threat through a combination of institutional actors working against them and other private individuals who will increasingly use social media to harass, expose and harm people with whom they don’t agree.”

A founder of a center for media and social impact wrote, “I worry about: Poorly or not-at-all managed open-source software at the core of key systems – including national defense and finance – creating cybersecurity risks and system failure. An ever more weakened journalistic ecology under increasingly authoritarian states (India takes the lead for ‘democracies,’ Russia and China show two ways to do it under authoritarian rule). The collapse of shared communication systems (including those in the international financial realm) due to digital insecurity. Companies racing each other to the bottom of corporate ethics with their AI out of control. And the absence of any checks to bad management creating toxic environments where healthy community is destroyed and pathological community flourishes.”

Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, wrote, “The most harmful or menacing changes in digital life that are likely to occur by 2035 are the overuse of immature digital technology. The excitement over the apparent ‘skill’ of chat bots based on large language models (e.g., ChatGPT) tends to overwhelm the reality of such experimental software. Those who create such software acknowledge its many limitations. But still, they release them into the wild. Individuals without appreciation for the limitations start incorporating the software into systems that people will use in real life – and sometimes quite important – settings. The combination will lead to the inevitable failures which the overeager will chalk up to the cost of innovation. Neither the software nor society are ready for this step. History has shown us that releasing technologies into the wild too soon leads to significant harm. History has not taught us to show restraint.”

A researcher based in Africa commented, “The most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems is if human-centered development were to fall short of advocates’ goals in a world in which digital technology concentrates power and resources in the hands of the elite and widens global inequality.”

Barry K. Chudakov, founder and principal at Sertain Research, said, “Human-centered development of digital tools and systems will continue to fall short of technology advocates’ goals until humans begin to formulate a thorough digital tool critique and analysis, leading to a full understanding of how we use and respond to digital tools and systems. We eat with them. We wear them. We take them into our bodies. We claim them as our own. We are all in Stockholm Syndrome with respect to digital tools; they enthrall us and we bend to their (designed) wishes, and then we champion their cause.

“We are not only adopters of various technologies; we are adapters. We adapt to – that is, we change our thinking and behaving with – each significant technology we adopt. Technology designers don’t need to create technologies which will live inside of us (many efforts toward this end are in the works); humans already ingest technology and tools as though we were cyborgs with an endless appetite. There are now more cellphones on the planet than humans. From health care to retail, from robots in manufacturing to NVIDIA’s Omniverse, humans are adopting new technologies wholesale. In many respects this is wonderful. But our use of these technologies will always fall short of advocates’ goals and the positive potential of our human destiny until we understand and teach – from kindergarten through university graduate school – how humans bend their perceptions to technology and what effects that bending has on us. This is an old story that goes back to the adoption of alphabets and the institutions the alphabet created. We need to see and understand that history before we can fully appreciate how we are responding to algorithms, AI, federated learning, quantum computing or the metaverse.

“Harmful or menacing changes in digital technology and humans’ use of digital systems happen because we have not sufficiently prepared ourselves for this new world and the new assumptions inherent in emerging technologies. We blindly adopt technologies and stumble through how our minds and bodies and society react to that adoption.

“Newer emerging technologies are much more powerful (think AI or quantum computing), and the mechanics of those technologies more esoteric and hidden. Our populace will be profoundly affected by these technologies. We need a broad re-education, so we fully understand how they work and how they work on us. …

“Just as cloud computing was once unthought-of, and there were no cloud computing technologists, and then the demand for such technologists became apparent and grew, so too technology developers will begin to create new industry roles, for example technology consequence trackers. Each new technology displaces a previous technology, and developers must include an understanding of that displacement in their pro forma. Remember: Data and technologies beget more data and technologies. There is a compounding effect at work in technology acceleration development; that is another factor to monitor, track and record.”

Deanna Zandt, writer, artist and award-winning technologist, wrote, “While we continue to work on gender, racial, disability and other inclusive lenses in tech development, the continued lack of equity and representation in the tech community and thus tech design (especially when empowered by lots of rich, able-bodied white men) will continue to create harm for people living on the margins.”

Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI,” warned, “Dangers from poorly designed social technologies increase confusion, which undermines the capacity of users to accomplish their goals, receive truthful information or enjoy entertainment and sports. More serious harms come from failures and bias in transactional services such as mortgage applications, hiring, parole requests or business operations. Unacceptable harms come from life-critical applications such as in medicine, transportation and military operations. Other threats come from malicious actors who use technology for destructive purposes, such as cybercriminals, terrorists, oppressive political leaders and hate speech bullies. They will never be eliminated, but they can be countered to lessen their impact. There are dangers of unequal access to technology and designs that limit use by minorities, low-literacy users and users with disabilities. These perils could undermine economic development, leading to strains within societies, with damage to democratic institutions, which threatens human rights and individual dignity.”

Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University, said, “Developments in AI will create effective natural language tools. These tools will make people feel they are getting accurate, individualized information but there will frequently be no way of checking. The actual information will be more homogeneous than it seems and will be stated with overconfidence. It will lead to large numbers of people obtaining biased information that will feed groundless ideology. Untruths about health, politics, history and more will pervade our culture even more than they already do.”

Jeffrey D. Ullman, professor emeritus of computer science at Stanford University, commented, “I do not believe the ‘Terminator’ scenario where AI develops free will and takes over the world is likely anytime soon. The stories about chatbots becoming sentient are nonsense – they are designed to talk like the humans who created the text on which the chatbot was trained, so it looks sentient but is not. The risk is not that, for example, a driverless car will suddenly become self-aware and decide it would be fun to drive up on the sidewalk and run people over. It is much more likely that some rogue software engineer will program the car to do that. Thus, the real risk is not from unexpected behavior of an AI system, but rather from the possible evil intent of one or more of their creators.”

Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, wrote, “Most of the major technology services will continue to be owned and operated by a small number of companies and individuals. The gap between open-source and commercial software will continue to grow, such that there will be an increasing number of things that the latter can do that the former cannot, and therefore almost no one will know how the software we all use every day actually works. These individuals and companies will also continue to make a tremendous amount of money on these products and services, without the users of these services having any way to make money from them.”

Bart Knijnenburg, assistant professor and researcher on privacy decision-making and recommender systems at Clemson University, said, “In terms of human-centered development, I am worried that the complexity of the AI systems that are being developed will harm the transparency of our interaction with these systems. We can already see this with current voice assistance: They are great when they work well, but when they don’t do what we want, it is extremely difficult to find out why.”

Edson Prestes, professor of informatics at Federal University of Rio Grande do Sul, Brazil, said, “Having a just and fair world is not an easy task. Digital technologies have the power to objectify human beings and human relationships with severe consequences for society as a whole. The lack of guardrails or the slow pace of implementation of these guardrails can lead to a dystopian society. In this sense, the metaverse and similar universes pose a serious threat with huge potential to amplify existing problems in the real world. We barely understand the impact of current digital technologies on our lives. The most prominent is the impact on privacy. When we shift the use of digital technology from tool to a universe we can live in, new threats will be unlocked. Although, digital universes are only in the digital domain, they have a direct effect in the real world. Maybe some people will prefer to live only in the digital universe and die in the real world.”

Soraya Chemaly, an author, activist and co-founder of the Women’s Media Center Speech Project, wrote, “I’d like to say I am feeling optimistic about value-sensitive design that would improve human connections, governance, institutions, well-being, but, in fact, I fear we are backsliding.”

Corinne Cath, an anthropologist of Internet infrastructure governance, politics and cultures, said, “Everything depends on the cloud computing industry, from critical infrastructure to health to electricity to government as well as education and even the business sector itself – this centralizes power in the centralized power structure even further.”

A share of these experts warned that by 2035, technology is likely to pose far more serious threats to human rights. These include the magnified use of upgraded AI bots, deepfakes and disinformation to manipulate, deceive and divide the public; widespread use of facial recognition technology; further diminishment of digital privacy and data rights and a loss of human agency, especially for those under the rule of the most authoritarian governments; the expansion of the economic and digital divides; and a major displacement of workers as a large surge in automation takes millions of jobs. They worry that government and corporate leadership are not up to the task of addressing all of these problems in time to make a difference.

Eileen Donahoe, executive director of the Stanford Global Digital Policy Incubator, commented, “Digital authoritarianism could become a dominant model of governance across the globe due to a combination of intentional use of technology for repression in places where human rights are not embraced, plus failure to adhere to a human rights-based approach to use and regulation of digital technology even in countries where human rights are embraced.”

Gus Hosein, executive director of Privacy International, said, “When and where human rights are disregarded matters will grow worse over the next decade-plus. A new fundamentalism will emerge from the over-indulgences of the tech/information/free market era, with at least some traditional values emerging, but also aspects of a cultural revolution, both requiring people to exhibit behaviours to satisfy the community. This will start to bleed into more free societies and will pose a challenge to the term and symbolism of human rights.”

Mojirayo Ogunlana, principal partner at M.O.N. Legal in Abuja, Nigeria, and founder of the Advocates for the Promotion of Digital Rights and Civic Interactions Initiative, predicted, “The internet space will become truly ungovernable. As governments continue to push using harmful technologies to invade people’s privacy, there will also be an increase in the development of technologies that will be able to evade governments’ intrusion, which will invariably leave power in the hands of people who may use this as a tool for committing crimes against citizens and their private lives. Then digital and human rights will continue to be endangered as governments continue to take decisions based on their own selfish interests rather than for the good of humanity. The Ukraine/Russia war offers some context.”

Llewellyn Kriel, retired CEO of a media services company based in Johannesburg, South Africa, wrote, “Human-centered issues will increasingly take a backseat to tyranny in Africa, parts of the Middle East and the Near East. This is due to the threat digital tech poses to inept, corrupt and self-serving governance. Digital will be exploited to keep populations under control. Already governments in countries in sub-Saharan Africa are exploiting tech to ensure populations in rural areas remain servile by either denying connectivity, ensuring entrenched poverty and making connectedness a privilege rather than a right. This control will grow.

“Through control and manipulation of education and curricula, governments ensure political policies are camouflaged as fact and truth. This makes real truth increasingly hard to identify. Digital growth and naiveté ensure popularity and easy-to-manipulate majoritarianism becomes ‘the truth.’ This too will escalate. Health is the only sector that holds some glimmer of hope, though access to resources will remain a control screw to entrench tyranny. Already the African digital divide is being exploited and communicated as an issue of narrow political privilege rather than one of basic human rights.

“The impotence of developers to ensure equity in digital tech extends to a new kind of apartheid of which Israeli futurist Yuval Noah Hariri warned. The ease with which governments can and do manipulate access and social media will escalate. For Africa the next decade is very bleak.

“The fact that organised crime remains ahead of the curve will not only seriously raise the existing barrage of threats to individuals, but exacerbate suspicion, fear and rejection of digital progress in a baby-with-the-bathwater reaction. The gravest threat remains government manipulation. This is already dominant in sub-Saharan Africa and will grow simply because governments can, do and will control access. These responses are being written and formulated under precisely the extensive control of the ruling African National Congress and its myriad alliance proxies.

“While the technology will grow worldwide, tyranny and control – especially in the geographically greater rural areas, as is currently the case on the South African Development Community region, which includes 16 countries in South Africa. Rulers ensure their security by denying access. This will grow because technology development’s focus on profit over rights equates to majority domination, populist control and trendy fashionable fads over equity, justice, fairness and balance.”

Janet Salmons, an online research methodologist, responded, “I have concerns about human rights and human health and well-being. Without regulations, the Internet becomes too dangerous to use, because privacy and safety are not protected. More walled gardens emerge as safe spaces. Digital tools and systems are based in greed, not the public good, with unrestricted collection, sale and use of data collected from Internet users.”

James SO’Rourke IV, professor of management at the University of Notre Dame and author of 23 books on communication, commented, “Let’s explore some of the threats that technology will have to offer in regard to human rights by 2035. First, I and others have genuine concern about social media platforms for several reasons. First, the minute-by-minute volume of newly added messaging and video content is so massive it is impossible to fully moderate in order to remove items of concern; 500 hours of video content are now posted to YouTube every minute, and Google and Alphabet cannot possibly monitor the content.

“Facebook owner Meta says that its AI now catches about 90% of terms-of-service violations, many of which are the worst humanity has to offer, simply horrific. The remaining 10% have been contracted out to firms such as Accenture. Two problems seem apparent here. First, Accenture cannot keep employees on the content monitoring teams longer than 45 to 90 days due to the heinous nature of the content itself. Turnover on those teams is 300% to 400% per annum. Second, the contract with Facebook is valued at $500 million per annum, and the Accenture board is unwilling to let go of it. Facebook says, ‘problem solved.’ Accenture says, ‘we’re working on it.’

“The social media platforms are owned and operated either by billionaire entrepreneurs who may pay taxes but do not disclose operating figures, or by trillion-dollar publicly held firms that appear increasingly impossible to regulate. Annual income levels make it impossible for any government to levy a fine for misbehavior that would be meaningful. Regulating such platforms as public utilities would raise howls of indignation regarding First Amendment free speech infringements. Other social media platforms, such as TikTok, are either owned or controlled by dictatorial governments that continue to gather data on literally everyone, regardless of residence, citizenship or occupation.

“Another large concern about digital technology revolves around artificial intelligence. Several programs have either passed or come very close to passing the Turing Test. ChatGPT is but one example. The day when such algorithms can think for themselves and evade the efforts of Homo sapiens to control them is honestly not far off. Neither legislators nor ethicists have given this subject the thought it deserves.

“Another concern has been fully realized. Facial recognition technology is now universally employed in the People’s Republic of China to track the moments, statements and behavior of virtually all Chinese citizens (and foreign visitors). Racial profiling to track, isolate and punish the Uyghur people has proven highly successful. In the United States, James Dolan, who owns the New York Knicks and Rangers as well as Radio City Music Hall, is using facial recognition to exclude all attorneys who work for law firms that have sued him and his corporate enterprises. They cannot be admitted to the entertainment venues, despite paying the price of admission, simply because of their affiliation. Many people fear central governments, but private enterprises operated by unaccountably rich individuals, have proven they can use FR and AI to control or punish those with whom they disagree.”

A director of media and content commented, “Human rights will be violated at unprecedented levels. Improvements to closed-circuit surveillance technology, facial recognition and digital geo-fencing will remove anonymity completely. Humans will be profiled from birth, as artificial intelligence builds psychological profiles based on use of technology, Internet browsing history, email communication, messaging, etc.”

A professor based in North America wrote, “Technological revolutions might be used counterproductively – as engines of greater inequality, humiliation, oppression and fearmongering. Rather than be a tool for every person to enjoy their best life, complex digital technology could be harnessed to deliberately inflict suffering and misery to satiate sadism. As George Orwell wrote, ‘If you want a picture of the future, imagine a boot stamping on a human face – forever.’”

Barry K. Chudakov, founder and principal at Sertain Research, predicted, “By the year 2035, the most harmful or menacing changes regarding human rights – i.e., harming the rights of citizens – that are likely to occur in digital technology and humans’ use of digital systems will entail an absenting of consciousness.

“Humans are not likely to notice the harmful or menacing changes brought about by digital technologies and systems because the effects are not only not obvious; they are invisible. Hidden within the machine are the assumptions of the machine. Developers don’t have time, nor do they have the inclination, to draw attention to the workings of the software and hardware they design and build; they don’t have the time, inclination or money to gameplay the unintended consequences to humans of using a given product or gadget or device. As a result, human rights may be abridged, not only without our consent but without our notice. …

“At so many different levels and layers of human experience, technology and digital solutions will emerge – buying insurance online, investing in crypto, reading an X-ray or assessing a skin lesion for possible cancer – wherein human rights will be a consideration only after the fact. The strange thing about inserting digital solutions into older system protocols is that the consequences of doing so must play out; they must remain to be seen; the damage, if it is to occur must actually occur for most people to notice.

“So human rights are effectively a football, kicked around by whatever technology happens to emerge as a useful upgrade. This will eventually be recognized as a typical outcome and watchdogs will be installed in processes, as we have HR offices in corporations. We need people to watch and look out for human rights violations and infringements that may not be immediately obvious when the new digital solutions or remedies are installed.”

S.B. Divya, an author, editor and electrical engineer and Hugo and Nebula Award-nominated author of “Machinehood,” commented, “By 2035, I expect that we will be struggling with the continued erosion of digital privacy and data rights as consumers trade ever-increasing information about their lives for social conveniences. We will find it more challenging to control the flow of facts, especially in terms of fabricated images, videos and text that are indistinguishable from reliable versions. This could lead to greater mistrust in government, journalists and other centralized sources of news. Trust in general is going to weaken across the social fabric. I also anticipate a wider digital divide – gaps in access to necessary technology, especially those that require a high amount of electricity and maintenance. This would show up more in commerce than in consumer usage. The hazards of climate change will exacerbate this burden, since countries with fewer resources will struggle to rebuild digital infrastructure after storm damage. Human labor will undergo a shift as AI systems get increasingly sophisticated. Countries that don’t have good adult education infrastructure will struggle with unemployment, especially for older citizens and those who do not have the skills to retool. We might see another major economic depression before society adjusts to the new types of employment that can effectively harness these technologies.”

A professor at a major U.S. university commented, “Data will increasingly be used to target and harm individuals and groups. From biased AI models to surveillance by authoritarian regimes to identity theft, failure to empower people to protect themselves and their data is a major risk to human rights.”

Bryan Alexander, futurist, speaker and consultant, responded, “I fear the most dangerous use of digital technologies will be various forms of antidemocratic restrictions on humanity. We have already seen this, from governments using digital surveillance to control and manipulate residents to groups using hacking to harm individuals and other groups. Looking ahead, we can easily imagine malign actors using AI to create profiles of targets, drones for terror and killing, 3D printing weapons and bioprinting diseases. The creation of augmented- and virtual-reality spaces means some will abuse other people therein, if history is of any guide (see ‘A Rape in Cyberspace’ or, more recently, ‘Gamergate’). All of these potentials for human harm can then feed into restrictions on human behavior, either as means of intimidation or as justifications for authoritarianism (e.g., we must impose controls in order to fend off bio-printed disease vectors). AI can supercharge governmental and private power.”

We must be prepared for the impact of automation replacing more human workers

Rosanna Guadagno, associate professor of persuasive information systems at the University of Oulu, Finland, wrote, “By 2035, I expect that artificial intelligence will have made a substantial impact on the way people live and work. AI robotics will replace factory workers on a large scale and AI digital assistants will also be used to perform many tasks currently performed by white-collar workers. I am less optimistic about AIs performing all of our driving tasks, but I do expect that driving will become easier and safer. These changes have the potential to increase people’s well-being as we spend less time on menial tasks. However, these changes will also displace many workers. It is my hope that governments will have the foresight to see this coming and will help the displaced workers find new occupations and/or purpose in life. If this does not occur, it will not be universally welcomed nor universally beneficial to human well-being.

“Emerging technologies taking people’s jobs could lead to civil unrest and wide-sweeping societal change. People may feel lost as they search for new meaning in their lives. People may have more leisure time which will initially be celebrated but will then become a source of boredom. AI technology may also serve to mediate our interpersonal interactions more so than it does now. This has the potential to cause misunderstandings as AI agents help people manage their lives and relationships. AIs that incorporate beliefs based on biases in algorithms may also stir up racial tensions as they display discriminatory behavior without an understanding of the impact these biases may have on humans. People’s greater reliance on AIs may also open up new opportunities for cybercrime.”

Sam Lehman-Wilzig, professor of communication at Bar-Ilan University, Israel, and author of “Virtuality and Humanity,” commented, “Digitally-based artificial intelligence will finally make significant inroads in the economy, i.e., causing increasing unemployment. How will society and governments deal with this? We don’t know. I see the need for huge changes in the tax structure (far greater corporate tax; elimination or significant reduction of individual taxation). This is something that will be very difficult to execute, given political realities, including intense corporate lobbying and ideological stasis.

“What will growing numbers of people do with their increasing free time in a future where most work is being handled autonomously? Can people survive (psychologically) being unemployed their entire lives? Our educational system should place far more emphasis already today on leisure education, and that which used to be called liberal arts? Like governments, educational systems tend to be highly conservative regarding serious change. Obviously, all this will not reach fruition by 2035, but much later), but the trend will become obvious – leading to greater political turmoil regarding future-oriented policymaking (taxes, Social Security, corporate regulation, education, etc.”

Carol Chetkovich, professor emeritus of public policy at Mills College, commented, “I am skeptical that technological development will be sufficiently human-centered, and therein lies the downside of tech change. In particular, we have vast inequalities in our society today, and it’s easy to see how existing gaps in access to technology and control over it could be aggravated as the tools become more sophisticated and more expensive to buy and use.

“The development of the robotic industry may be a boon to its owners, but not necessarily to those who lose their jobs as a result. The only way to ensure that technological advancement does not disadvantage some is by thinking through its implications and designing not just technologies but all social systems to be able to account for the changes. So if a large number of people will not be employed as a result of robotics, we need to be thinking of how to support and educate those who are displaced before it happens. Parallel arguments could be made about human rights, health and well-being, and so on.”

Mark Schaefer, a business professor at Rutgers University and author of “Marketing Rebellion,” wrote, “The rapid advances of artificial intelligence in our digital lives will mean massive worker displacement and set off a ripple of unintended consequences. Unlike previous industrial shifts, the AI-driven change will happen so suddenly – and create a skill gap so great – that re-training on a massive scale will be largely impossible.

“While this will have obvious economic consequences that will renew discussion about a minimum universal income, I am most concerned by the significant psychological impact of the sudden, and perhaps permanent, loss of a person’s purpose in life. What happens when this loss of meaning and purpose occurs on a massive, global scale? There is a large body of research showing that unemployment is linked to anxiety, depression and loss of life satisfaction, among other negative outcomes. Even underemployment and job instability create distress for those who aren’t counted in the unemployment numbers.”

Bart Knijnenburg, assistant professor and researcher on privacy decision-making and recommender systems at Clemson University, said, “In terms of human rights and happiness, I worry that a capitalist exploitation of AI technology will increase the expectations of human performance, thereby creating extra burden on human workers rather than reducing it. For example: While theoretically the support of an AI system can make the work of an administrative professional more meaningful, I worry that it will lead to a situation where one AI-assisted administrative worker will be asked to do the job of 10 traditional administrative workers.”

Josh Calder, partner and founder at The Foresight Alliance, said, “A scenario remains plausible in which growing swathes of human work are devalued, degraded or replaced by automation, AI and robotics, without countervailing social and economic structures to counteract the economic and social damage that result. The danger may be even more acute in the developing world than in richer countries.”

Alexander Halavais, associate professor of social data science at Arizona State University, responded, “The divide between those who can make use of new, smart technologies (including robotics and AI) and those who are replaced by them will grow rapidly. It seems unlikely that political and economic patches will be easy to implement, especially in countries like the United States that do not have a history of working with labor. In those countries, technological progress may be impeded, and it will be increasingly difficult to avoid this long-standing divide coming to a head.

“I suspect that both universities and K-12 schools in the United States will also see something of a bifurcation. Those who can afford to live in areas with strong public schools and universities, or who can afford private tuition, will keep a relatively small number of ‘winners’ active, while most will turn to open and commodity forms of education. Khan Academy, for example, has done a great deal to democratize math education, but it also displaces some kinds of existing schools. At the margin, there will be some interesting experimentation, but it will mean a difficult transition for much of the educational establishment. We will see a continued decline of small liberal arts colleges, followed by larger public and private universities and colleges. I suspect, in the end, it will follow a pattern much like that of newspapers in the U.S., with a few niche, high-reputation providers, several mega-universities and very few small, local/regional institutions surviving.”

Justin Reich, associate professor of digital media at MIT and director of the Teaching Systems Lab, commented, “The hard thing about predicting the future of tech is that so much of it is a reflection on our society. The more we embrace values of civility, democracy, equality and inclusion, the more likely it is that our technologies will reflect our social goals. If the advocates of fascism are successful in growing their political power, then the digital world will be full of menace – constant surveillance, targeted harassment of minorities and vulnerable people, widespread dissemination of crappy art and design, and so forth, all the way up to true tragedies like the genocide of the Uyghur people in China.”

Isaac Mao, Chinese technologist, data scientist and entrepreneur, observed, “It is important to recognize that digital tools, particularly those related to artificial intelligence, can be misused and abused in ways that harm individuals, even without traditional forms of punishment such as jailing or physical torture. These tools can be used to invade privacy, discriminate against certain groups and even cause loss of life. When used by centralized powers, such as a repressive government, the consequences can be devastating. For example, AI-powered surveillance programs could be used to unjustly monitor, restrict or even target individuals without the need for physical imprisonment or traditional forms of torture. To prevent such abuse, it is crucial to be aware of the potential dangers of technology and to work toward making them more transparent through democratic processes and political empowerment.”

Evan Selinger, professor of philosophy at Rochester Institute of Technology and author of “Re-Engineering Humanity,” predicted, “Surveillance technology will become increasingly invasive – not just in terms of its capacity to identify people based on a variety of biometric data, but also in its ability to infer what those in power deem to be fundamental aspects of our identities (including preferences and dispositions) as well as predict, in finer-grained detail, our future behavior and proclivities. Hypersurveillance will permeate public and private sectors – spanning policing, military operations, employment (full cycle, from hiring through day-to-day activities, promotion and firing), education, shopping and dating.”

Susan Aaronson, director of the Digital Trade and Data Governance Hub at George Washington University, commented, “Today’s trends indicate data governance is not likely to be improved without positive changes. Firms are not transparent about the data they hold (something that corporate-governance rules could address). They control the use/reuse of much of the world’s data, and they will not share it. This has huge implications for access to information. In addition, no government knows how to govern data, comprehensively understanding the relationships between algorithms protected by trade secrets and reuse of various types of data. The power relationship between governments and giant global firms could be reversed again with potential negative spillovers for access to information. In addition, new nations/states have rules allowing the capture of biometric data collected by sensors. If firms continue to rely on surveillance capitalism, they will collect ever more of the public’s personal data (including eye blinks, sweat, heart rates, etc.). They can’t protect that data effectively and they will be incentivized to sell it. This has serious negative implications for privacy and for human autonomy.”

Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, studying algorithms that select and rank content, warned, “In regard to human rights, some governments will use surveillance and content-moderation techniques for control, making it impossible to express dissenting opinions. This will mostly happen in authoritarian regimes, however certain liberal democracies will also use this technology for narrower purposes, and speech regulations will shift depending on who wins elections.”

Robert Y. Shapiro, professor and former chair of the political science department at Columbia University and faculty fellow at the Institute for Social and Economic Research and Policy, responded, “I have great concern for the protection of data and individuals’ privacy, and there have to be much more serious, concerted and thoughtful efforts to deal with issues of misinformation and disinformation.”

Rich Salz, principal engineer at Akamai Technologies, warned, “Mass facial recognition systems will be among the digital tools more widely implemented in the future. There will be increased centralization of internet systems leading to more extra-governmental data collection and further loss of privacy. In addition, we can expect that cellphone cracking will invade privacy and all of this, plus more government surveillance, will be taking place, particularly in regions with tyrannical regimes. Most people will believe that AI’s large language models are ‘intelligent,’ and they will, unfortunately, come to trust them. There will be a further fracturing of the global internet along national boundaries.”

Michael Muller, a researcher for a top global technology company focused on human aspects of data science and ethics and values in applications of artificial intelligence, commented, “Human activities will increasingly be displaced by AIs, and AIs will increasingly anticipate and interfere with human activities. Most humans will be surveilled and channeled by AI algorithms. Surveillance will serve both authoritarian government and increasingly dominant corporations.”

Marvin Borisch, chief technology officer at Red Eagle Digital based in Berlin, commented, “The rise of surveillance technology is dangerously alarming. European and U.S. surveillance technology is hitting a never-before-seen level which gets adapted and optimized by more autocratic nations all around the globe. The biggest problem is that such technology has always been around and will always be around. It penetrates people’s privacy more and more, step by step. Karl-Hermann Flach, journalist and politician once said, ‘Freedom always dies centimeter by centimeter,’ and that goes for privacy, one of the biggest guarantees of freedom.

“The rise of DLT (distributed ledger technology) in forms of blockchains can be used for great purposes, but with over-regulation through technological incompetence and fear it will create a big step toward the transparent citizen and therefore the transparent human. Such deep transparency will enhance the already existing chilling effect and might cause a decline of individuality. Such surveillance will come in forms of transparent ‘Central Bank Digital Currencies’ which are a cornerstone of social credit systems. It will come with the weakening of encryption through governmental mandatory backdoors but also with the rise of quantum computing. Later could, and probably will, be dangerous because of the costs of such technology.

“Quantum resistance might already be a thing, but the spread of it will be limited to those that have access to quantum computing. New technological gatekeepers will rise, deciding who has access to such technology in a broader span.”

A longtime contributor to the work of the Internet Engineering Task Force warned, “Pervasive surveillance that is enabled by ubiquitous communications fabrics is a huge threat to privacy and human rights. Even the best of governments will find it difficult to avoid the temptation to be omniscient (or more accurately, to have the delusion of omniscience) in order to rid society of its evils. But this cannot be done without creating a surveillance state and governments can be expected to encroach on citizens’ privacy more and more. There is a huge potential for artificial intelligence to become a tyrannical ruler of us all, not because it is actually malicious, but because it will become increasingly easier for humans to trust AIs to make decisions and increasingly more difficult to detect and remove biases from AIs. That, and the combination of AI and pervasive surveillance will greatly increase the power of a few humans, most of whom will exploit that power in ways that are harmful to the general citizenry.”

Juan Carlos Mora Montero, coordinator of postgraduate studies in planning at the Universidad Nacional de Costa Rica, wrote, “The biggest damaging change that can occur between now and 2035 is a deepening of inequities when it comes to communications tools and the further polarization of humanity between people who have access to the infinite opportunities that technology offers and the people who do not. This situation would increase the social inequality in the economic sphere that exists today and would force it to spill over into other areas of life.”

Sam Lehman-Wilzig, professor of communication at Bar-Ilan University, Israel, and author of “Virtuality and Humanity,” observed, “As the possibility of mass human destruction and close-to-complete extinction becomes more of a reality, greater thought and perhaps the start of planning will be given how to archive all of human knowledge in ways that will enable it to survive all sorts of potential mass disasters.

“This involves software and hardware. The hardware is the type(s) of media in which such knowledge will be stored to last eons (titanium? DNA? etc.); the software involves the type of digital code – and lexical language – to be used so that future generations can comprehend what is embedded (whether textual, oral or visual).

“Another critical question: What sort of knowledge to save? Only information that would be found in an expanded wiki-type of encyclopedia? Or perhaps everything contained in today’s digital clouds run by Google, Amazon, Microsoft, etc.? A final element: Who pays for this massive undertaking? Governments? Public corporations? Private philanthropists?”

Many of these experts are anxious about the way digital technology change poses challenges to knowledge creation, sharing and acquisition. They fear that some of the most prized human knowledge will be lost or neglected in a sea of mis- and disinformation; that the institutions previously dedicated to informing the public about the most accurate and well-considered expert findings will be further decimated; and that facts will be increasingly hard to find amidst a sea of entertaining distractions, bald-faced lies and targeted manipulation. They worry that people’s cognitive skills are in decline.

A large proportion of those who wrote on this topic focused on what they consider to be the rising threat of newly emerging digital tools that can create deceptive or alternate realities, leading to battles for “truths.” These experts say the social and political unrest, confusion and cognitive dissonance created by human-led, AI-bot-generated content, deepfakes and fake personas are a threat to human wellness and to participatory democracy. A number of these experts worry human analytical and cognitive skills could wither. One worried about the “decline in and discouragement of individual human thought.”

Stephan Adelson, president of Adelson Consulting Services and an expert on the internet and public health, said, “Reality itself is under siege. The greatest threats to our future are AI, CGI [computer generated imagery], developmental augmented reality and other tools that have the ability to create misleading, alternate or deceptive reality. These are especially dangerous when used politically. Manipulation of the masses through media has always been a foundation of political and personal gain. As digital tools that can create more convincing alternatives to what humankind sees, hears, comprehends and perceives become mainstream daily tools, as I believe they will by 2035, the temptation to use them for personal and political gain will be ever-present.

“There will be battles for ‘truths’ that may cause a future where paranoia, conspiracy theories and a continual fight over what is real and what is not are commonplace. I fear for the mental health of those who are unable to comprehend the tools and who do not have the capacity to discern truth from deception. Continued political and social unrest, increases of mental illnesses and a further widening of the economic gap are almost guaranteed if actions are not taken to create restrictions on their use and/or if reliable and capable tools able to accomplish the separation of ‘truth’ from ‘fiction.’”

Barry K. Chudakov, founder and principal at Sertain Research, said, “By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and human knowledge – compromising or hindering progress – will come from the doubters of factfulness. New technologies are effectively measuring tools at many different levels. We all live with quantified selves now. We count our calories, our steps, we monitor our blood pressure and the air quality in our cities and buildings. We are inundated by facts, and our newest technologies will serve to sort and prioritize facts for us.

“This is a remarkable achievement in human history, tantamount to – but far greater than – the enlightenment of 1685-1815. We have never had so many tools to tell us so much about so many different aspects of human existence. (‘Dare to understand,’ as Steven Pinker has said.) The pace of technology development is not slowing, nor is the discovery of new facts about almost anything you can name. In short, human knowledge is exploding.

“The threat to that knowledge comes not from the knowing but from those, like the Unabomber Ted Kaczynski, who are uncomfortable with the dislocations, disintermediation and displacements of knowledge and facts. The history of the world is not fact-based, evidence-based; it is based on assertion and institutionalizing explanations of the world. Our new technologies upset many of those explanations and that is upsetting to many who have clung to those explanations in such diverse areas as religion, diet, health, racial characteristics and dating and mating. So, the threat to knowledge by 2035 will not come from the engines of knowing but from the forces of ignorance that are threatened by the knowledge explosion.

“This is not a new story. Copernicus couldn’t publish his findings in his lifetime; Galileo was ordered to turn himself in to the Holy Office to begin trial for holding the belief that the Earth revolves around the sun, which was deemed heretical by the Catholic Church. (Standard practice demanded that the accused be imprisoned and secluded during the trial.) Picasso’s faces were thought weird and distorted until modern technologies began to alter faces or invent face amalgams, i.e. ‘This person does not exist.’

“By 2035 human knowledge will be shared with artificial intelligence. The logic of AI is the logic of mimesis, copying, mirroring. AI mirrors human activities to enhance work by mirroring what humans would do in that role – filling out a form, looking up a legal statute, reading an X-ray. AI trains on human behavior to enhance task performance and thereby enhance human performance – which ultimately represents a new kind of knowledge. Do we fully understand what it means to partner with our technologies to accomplish this goal?

“It is not enough to use AI and then rely on journalists and user reviews to critique it. Instead, we need to monitor it as it monitors us; we must train it, as it trains on us. Once again, we need an information balcony that sits above the functioning AI to report on it, to give us a complete transparent picture of how it is working, what assumptions it is working from – and especially what we think, how we act and change in response to using AI. This is the new human knowledge. How we respond to that knowledge will determine compromising or hindering progress.”

Jens Ambsdorf, director of the Lighthouse Foundation in Germany, wrote, “The same technologies that could be drivers for a more coherent and knowledge-based world can be the source of further fragmentation and the building up of parallel societies. The creation of self-referenced echo chambers and alternative narratives is a threat to the very existence of humans on this planet as the self-inflicted challenges like biodiversity loss, climate change, pollution and destructive economies can only be successfully faced together on this planet. Currently I hold this danger to be far bigger than the chance for a positive development, as the tools for change are not based in the hands of society but more and more in the hands of private competing interests.”

An executive at one of the world’s largest telecommunications companies said, “Most harmful are likely to be tools that obscure identity or reality. Everything from AI-generated deepfake videos or photos, conspiracy theories going viral online, bot accounts, echo chambers, to all manner of uses of technology for fraud may harm rights of citizens, hinder progress, reduce knowledge and threaten individual health and well-being, physical and emotional.”

Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, studying algorithms that select and rank content, warned, “In regard to human knowledge, generative models for text, images and video will make it difficult to know what is true without specialist help. Essentially, we’ll need an AI layer on top of the Internet that does a new kind of ‘spam’ filtering in order to stand any chance of receiving reliable information.”

Kevin TLeicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, wrote, “On the human knowledge front we, as yet, have no solution to the simple fact that digital technology gives any random village idiot a national or international forum. We knew that science and best practices didn’t sell themselves, but we were completely unready for the alternative worlds that people would create that seem to systematically negate any of the advancements the Enlightenment produced. People who want modern technology to back anti-science and pre-Enlightenment values are justifiably referred to as fascists. The digital world has produced a global movement of such people and we will have to spend the next 20 years clawing and fighting back against it.”

Josh Calder, partner and founder at The Foresight Alliance, predicted, “Access to quality, truthful information will be undermined by information centralization, AI-produced fakes and propaganda of all types and the efforts of illiberal governments. Getting to high-quality information may take more effort and expense than most people are willing or able to expend. Centralized, cloud-based knowledge systems may enable distortion or rewriting of reality – at least as most people see it – in a matter of moments.”

Bart Knijnenburg, assistant professor and researcher on privacy decision-making and recommender systems at Clemson University, said, “I worry that the products of generative AI will become completely indistinguishable from actual human-produced knowledge. This has severe consequences for data integrity and authenticity. There have already been several example situations where GPT4 generates answers that look smart but are actually very wrong. Will a human evaluator of AI answers be able to detect such errors? How do we know for sure that this survey is being answered by real humans, rather than bots?”

Michael Kleeman, a senior fellow at the University of California, San Diego, who previously worked for Boston Consulting and Sprint, responded, “AI-enabled fakes of all kinds are a danger. We will face the risk of these undermining the basic trust we have in remote communications if not causing real harm in the short run. The flip side is they will create a better-informed and more-nuanced approach to interpreting digital media and communications, perhaps driving us more to in-person interactions.”

Jason Hong, professor of computer science at Carnegie Mellon’s Human-Computer Interaction Institute, said, “There will be more and better deepfakes, adaptive attacks on software and online services, fake personas online, fake discussion from chatbots online meant to ‘flood the zone’ with propaganda or disinformation, and more. It’s much faster and easier for attackers to disrupt online activities than for defenders to defend it.”

Isabel Pedersen, director of the Digital Life Institute at Ontario Tech University, predicted, “Digital life technologies are on course to further endanger social life and extend socioeconomic divides on a global scale by 2035. One cause will be the further displacement of legitimate news sources in the information economy. People will have even more trouble trusting what they read. The deprofessionalization of journalism is well under way and techno-cultural trends are only making this worse. Along these lines, one technology that will harm people in 2035 is AI-based content-generation technology used through a range of deployments. The appropriate use of automated writing technologies seems unlikely and will further impoverish digital life by unhinging legitimate sources of information from the public sphere. Text-generation technologies, large language models and more advanced Natural Language Processing (NLP) innovations are undergoing extensive hype now; they will progress to further disrupt information industries. In the worst instances, they will help leverage disinformation campaigns by actors motivated by self-serving or malicious reasons.”

Kyle Rose, principal architect at Akamai Technologies, observed, “AI is a value-neutral tool; while it can be used to improve lives and human productivity, it can also be used to mislead people. The biggest tech-enabled risk I see in the next decade (actually, just in the next year, and only getting worse beyond that point) is that AI will be leveraged by bad actors to create very convincing fictions that are used to create popular support for actions premised on a lie. That is likely to take the form of deepfake audio-visual content that fools large numbers of people into believing in events that didn’t actually happen. In an era of highly partisan journalism, without a trusted apolitical media willing to value truth over ideology, this will result in further bifurcation of perceived reality between left and right.”

Gina Neff, professor and director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, said, “AI technologies will appear to be accurate but have hidden flaws and biases, making it difficult to challenge predictions or results. Guilty until proven otherwise – and it will take a lot to prove otherwise – will be the modus operandi of digital systems in 2035.”

A professor based in the U.S. Midwest warned, “AI harms will increase. Recommendation systems are reproducing historical harms in systematic ways through unexamined reproduction of past harms. … Systems that prioritize engagement provide increasingly extreme content and encourage other harm.”

Jim Kennedy, senior vice president for strategy at The Associated Press, responded, “Misinformation and disinformation are by far the biggest threats to digital life and to the peace and security of the world in the future. We have already seen the effects of this, but we probably haven’t seen the worst of it yet. The technological advances that promise to vastly improve our lives are the same ones giving bad actors the power to wage war against the truth and tear at the fabric of societies around the world. At the root of this problem is the lack of regulation and restraint of the major tech platforms that enable so much of our individual and collective digital experience. Governments exist to hold societies together. When will they catch up with the digital giants and hold them to account?”

Sam S. Adams, artificial general intelligence researcher at Metacognitive Technology, previously a distinguished engineer with IBM, commented, “In regard to human-to-human connections, the trend of increasing fragmentation of society will continue, aided and abetted by commercial and governmental systems specifically designed to ‘divide and conquer’ large populations around the world. There will continue to be problems with available knowledge. Propaganda and other disinformation will continue to grow, creating a balkanized global society organized around what channels, platforms or echo chambers they subscribe to. Postmodernism ends but leaves in its wake generations of adults with no common moral rudder to guide them through the rocks of future challenges.”

Fernando Barrio, lecturer in business and law at Queen Mary University of London, commented, “Uses of digital technology have led to an outbreak of political polarization and the constant creating of unbridgeable ideological divides, leading to more highly damaging social self-harming situations like Brexit in the UK, and the shocking Jan. 6, 2021, invasion of the U.S. Capitol. Technology does not create these situations, but its use is providing fertile ground for mischief, creating isolated people and affording them the tools to replicate and spread polarized and polarizing messages. The trivialization of almost everything via social media along with this polarization and the spread of misinformation is leading to an unfortunate decay in human rights.”

Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the Institute of Electrical and Electronics Engineers (IEEE), warned, “The concentration of ad revenue and the lack of a viable alternative source of income will further diminish the reach and capabilities of local news media in many countries, degrading the information ecosystem. This will further increase polarization, facilitate government corruption and reduce citizen engagement.”

Deanna Zandt, writer, artist and award-winning technologist, wrote, “Deepfakes and misinformation will continue to undermine our faith in public knowledge and our ability to make individual and collective sound decisions about how we live our lives.”

Marcus Foth, professor of informatics at Queensland University of Technology, said false corporate virtue signaling is an example of digital manipulation that also causes damage, writing, “The most harmful or menacing changes are those portrayed as sustainable but are nothing more than greenwashing. Digital technology and humans’ use of digital systems are at the core of the greenwashing problem. We are told by corporations that in order to be green and environmentally friendly, we need to opt for the paper-based straw, the array of PV [photovoltaic] solar panels on our roofs, and the electric vehicle in our garage. Yet, the planetary ecocide is not based on an energy or resources crisis but on a consumption crisis. Late capitalism has the perverted quality of profiteering from the planetary ecocide by telling greenwashing lies – this extends to digital technology and humans’ use of digital systems from individual consumption choices such as solar and EVs to large-scale investments such as smart cities. The reason these types of technology are harmful is because they just shift the problem elsewhere – out of sight. The mining of rare earth metals continues to affect the poorest of the poor across the Global South. The ever-increasing pile of e-waste continues to grow due to planned obsolescence and people being denied a right to repair. The idea of a circular economy is being dummified by large corporations in an attempt to continue BAU – business as usual. The Weltschmerz caused by humans’ use of digital systems is what’s most menacing without that we know it.”

David Bray, distinguished fellow with the nonpartisan Stimson Center and the Atlantic Council, wrote, “Challenges of misinformation and disinformation are polarizing societies, sowing distrust and outpacing any truthful beliefs or facts. Dis- and misinformation will be on the rise by 2035, but they have been around ever since humans first emerged on Earth. One of the biggest challenges now is that people do not follow complicated narratives – they don’t go viral, and science is often complicated. We will need to find ways to win people over, despite the preference of algorithms and people for simple, one-sided narratives. We need more people-centered approaches to remedy the challenges of our day. Across communities and nations, we need to internally acknowledge the troubling events of history and of human nature, and then strive externally to be benevolent, bold and brave in finding ways wherever we can at the local level across organizations or sectors or communities to build bridges. The reason why is simple: We and future generations deserve such a world.”

Charlie Kaufman, a system security architect with Dell Technologies, said, “I hope for the best and fear the worst. Technology of late has been used to spread misinformation. I would hope that we will figure out a way to minimize that while making all public knowledge available to anyone who wants to ask.”

To make things even worse, at a time when they are needed most to tell fact from fiction, humans’ cognitive skills could be in decline

Peter Levine, professor of citizenship and public affairs at Tufts University, said, “I am worried about the substantial deterioration in our ability to concentrate, and especially to focus intently on lengthy and difficult texts. Deep reading allows us to escape our narrow experiences and biases and absorb alternative views of the world. Digital media are clearly undermining that capacity.”

Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “One harm that will have significant impact is the continuing decline in people’s willingness to read and their ability to understand long-form documents, articles and books.”

Gus Hosein, executive director of Privacy International, said, “Human knowledge development will slow. As we learn more about what it is to be human and how we interact with one another, the fundamentalism and quest for simplicity will mean that we care less and less about discovery and will seek solace in natural solutions. This has benefits for sure, but just as the link between new age and well-being has some links to right wing and anti-science ideologies, this will grow as we stop obsessing about technology as a driver of human progress and just see a huge replacement of pre-2023 infrastructure with electrification.”

Rosalie Day, a policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, said, “Misinformation will continue to grow, with accelerated amplification. Now, not only by the algorithms that play toward our own worst instincts, but also generative AI will further embed biases and make us more skeptical of what can be ‘seen.’ The latter will make the importance of digital literacy an even greater divide. The digitally challenged will increasingly rely on the credibility of the source of the information, which we know is detrimentally subjective. Generative AI will hurt the education of our workforce. It is difficult enough to teach and evaluate critical thinking now. I expect knowledge silos to increase as the use of generative AI concentrates subjects and the training data becomes the spawned data. Critical thought asks the thinker to incorporate knowledge, adapt ideas and modify accordingly. The ‘never seen before’ becomes the constraint and group think becomes the enforcer. Generative AI will also displace many educated and uneducated workers. Quality of life will go down because of the satisficing nature of human systems: Is it sufficient? Does the technology get it right within the normal distribution? Systems will exclude hiring people with passion or those particularly good at innovating because they are statistical outliers.”

Jeffrey D. Ullman, professor emeritus of computer science at Stanford University, commented, “What is the future of education? It has recently been noticed that ChatGPT is capable of writing that is of a quality to easily pass as a high school or college student’s essay. For now, the panic is unwarranted; there are short-term solutions being developed that will allow a reader to detect ChatGPT output and distinguish it from the work of high school students pretty well. But what happens when students can build their own trillion-parameter models (without much thought – just using publicly available online software tools and data) and use it to do their homework? Worse, the increasing prevalence of online education has made it possible for students to use all sorts of scams to avoid actually learning anything (e.g., hiring someone on the other side of the world to do their work for them). Are we going to raise a generation of students who get good grades but don’t actually learn anything?”

Fernando Barrio, lecturer in business and law at Queen Mary University of London, said, “There is a move in intellectual and academic circles to justify the dehumanization of social interactions and brand as technophobes anyone seeing it to be a negative that people spend most of their time today looking at digital devices. The claim is that those who spend hours physically isolated are actually more connected than others, and that spending hours watching trivial media is a new form of literacy. The advocates of technologically-driven social isolation and trivialization will have to explain why – in the age of greatest access to information in history – we see a constant decline in knowledge and in the capacity to analyze information, not to mention the current pandemic of mental health issues within the younger generations. By 2035, unless there is a radical change in the way people, especially the young, interact with technology, the current situation will worsen substantially.”

Naveen Rao, a health care entrepreneur and founder and managing partner at Patchwise Labs, responded, “Everything that’s bad today is going to get worse as a direct of result of the U.S. government’s failure to regulate social media platforms. Cyberbullying, corporate-fueled and funded misinformation campaigns, gun violence and political extremism will all become more pronounced and engrained, deeply shaping the minds of the next generation of adults (today’s grade schoolers).

“Adults’ ability to engage in critical thinking – their ability to discern facts and data from propaganda – will be undermined by the exponential proliferation of echo chambers, calcified identity politics and erosion of trust in the government and social institutions. These will all become even more shrouded by the wool of digital life’s ubiquity.

“The corporate takeover of the country’s soul – profit over people – will continue to shape product design, regulatory loopholes and the systemic extraction of time, attention and money from the population. I do think there will be a cultural counterbalance that emerges, at what point I can’t guess, toward less digital reliance overall, but this will be left to the individual or family unit to foment, rather than policymakers, educators, civic leaders or other institutions.”

Carolyn Heinrich, professor of public policy and education at Vanderbilt University, commented, “The most harmful aspects of digital tools and systems are those that are used to spread misinformation and to manipulate people in ways that are harmful to society. Digital tools are used to scam people of money, steal identities and to bully, blackmail and defame people, and so the expansion of digital tools and systems to areas where they are currently less present will also put more people at risk of these negative aspects of their use. The spread of misinformation promotes distrust in all sources of knowledge, to the detriment of the progress of human knowledge, including reputable research.”

Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, wrote, “There will continue to be a major dissonance between the way people act in person and the way they act on social media, and there will be no clear way to encourage or foster constructive, healthy conversations online when the participants have nothing concrete to gain from it.”

Most of the experts who are concerned about the future of health and well-being focused on the possibility that the current challenges posed by AI-facilitated social media feeds to mental and physical health will worsen. They also highlighted fears that these trends would take their toll: more surveillance capitalism; group conflict and political polarization; information overload; social isolation and diminishment of communication competence; social pressure; screen-time addictions; rising economic inequality; and the likelihood of mass unemployment due to automation.

Barry K. Chudakov, founder and principal at Sertain Research, wrote, “We have left the development of digital tools and systems to commercial interests. This has given rise to surveillance capitalism, thinking and acting as a gadget, being ‘alone together’ as we focus more on our phones than each other, sadness among young girls as they look into the distorting mirror of social media – among other unintended consequences. Humans entrain with digital technologies and digital systems; we adjust and conform to their logic. …

“As we develop more sophisticated, pervasive, human-mimicking digital tools such as robots or AI human voice assists, we need to develop a concomitant understanding of how we respond to these tools, how we change, adjust, alter our thinking and behavior as we engage with these tools.

“We need to start training ourselves – from an early age, through kindergarten well into graduate school – to understand how to use these tools in a healthy way. It is not useful or good for us to be alone together (Sherry Turkle), to think of ourselves as a gadget and to think as a gadget (Jaron Lanier), or to live always in the shallows (Nicholas Carr). Currently there is little or no systematic effort to educate technology users about the logic of digital tools and how we change as we use them. Some of these changes are for the good such as hurricane tracking to ensure community preparedness. … By 2035 digital realities will be destinations where we will live some (much?) of our lives in enhanced digital environments; we will have an array of digital assistants, prompts, (whether called Alexa or Siri) who interact with us. We need to develop moral and spiritual guidelines to help us and succeeding generations navigate these choppy waters.”

Philip J. Salem, a communications consultant and professor emeritus at Texas State University, wrote, “In regard to human wellness, I see three worrying factors. First, people will continue to prefer digital engagement to actual communication with others. They will use the technology to ‘amuse themselves to death’ (see Neil Postman) or perform for others, rather than engage in dialogue. Performances seek validation, and for these isolated people validation for their public performances will act as a substitute for the confirmation they should be getting from close relationships. Second, people will increase their predisposition to communicate with others who are similar to themselves. This will bring even more homogenous social networks and political bubbles. Self-concepts will lose more depth and governance will be more difficult. Third, communication competence will diminish. That is, people will continue to lose their abilities to sustain conversation.”

Jeffrey D. Ullman, professor emeritus of computer science, Stanford University, commented, “I remember from the 1960s the Mad Magazine satire of ‘The IBM Fight Song’: ‘…what if automation, idles half the nation…’ Well 60 years later, automation has steadily replaced human workers, and more recently, AI has started to replace brain work as well as physical labor. Yet unemployment has remained about the same. That doesn’t mean there won’t be a scarcity of work in the future, with all the social unrest it would entail. Especially a consequence of the rapid obsolescence of jobs means the rate at which people must be retrained will only increase, and at some point I think we reach a limit, where people just give up trying to learn new skills.”

Adam Nagy, a senior research coordinator at The Berkman Klein Center for Internet & Society at Harvard University, said, “People are increasingly alienated from their peers, struggling to form friendships and romantic relationships, removed from civic life and polarized across ideological lines. These trends impact our experiences online in negative ways, but they are also, to some extent, an outcome of the way digital life affects our moods, viewpoints and behaviors. The continuation of this vicious cycle spells disaster for the well-being of younger generations and the overall health of society.”

Philippa Smith, communications and digital media expert, research consultant and commentator, wrote, “It is unlikely that by 2035 existing harmful and menacing online behaviours, particularly in terms of human health and well-being – such as cyberbullying, abuse and harassment, scamming, identity theft, online hate, sexting, deepfakes, misinformation, dark web, fake news, online radicalisation or algorithmic manipulation – will have faded from view. In spite of legislation, regulation or counter measures, they will have morphed in more sinister ways as our lives become more digitally immersive, bringing new challenges to confront. Much will depend on the management of technology development. Attempts to predict new and creative ways in which negative outcomes can be circumnavigated will be required. My main concern for the future, however, is on the bigger-picture level and the effects that harmful and menacing changes in digital life will have on the human psyche and our sense of reality. Future generations may not necessarily be better off living a deeply immersive digital life, falling prey to algorithmic manipulation or conspiracy theories, or forgetting about the real physical world and all it has to offer. We will need to be careful in what we wish for.”

Sam S. Adams, artificial general intelligence researcher at Metacognitive Technology, previously a distinguished engineer with IBM, commented, “In regard to human well-being, I expect that digital globalization becomes a double-edged sword. There will be borderless communities with shared values around beauty and creativity on one side and echo chambers that justify and cheer genocide and imperial aggression on the other, especially in the face of the breakdown of economic globalization.”

Lambert Schomaker, a professor at the Institute of Artificial Intelligence and Cognitive Engineering at the University of Groningen, Netherlands, commented, “Current developments around ChatGPT and DALL-E 2, although in their early stages now, will have had a deep impact on the way humans look at themselves. This can also be seen from the reactions from artists, writers and researchers in the humanities. Many capabilities considered purely human now appear to be statistical of nature. Writing smooth, conflict-avoiding pieces of text is, apparently, fairly mechanical. This is very threatening. The psychological effect of these developments may be dramatic. Why go to school, the machine can do it all! As a consequence, motivation to work at all may drop. The only green lining here may be that physical activity will gain in importance. Given the current shortage of skilled workers in building, electrical engineering and agriculture, this may even be beneficial in some areas. However, the upheaval caused by the AI revolution may have an irreparable effect on the tissue of societies in all world cultures.”

Gus Hosein, executive director of Privacy International, said, “Loneliness will continue to rise, starting from early ages as some do not make it out of the end of online versus offline. Alongside the struggle around human rights versus traditional values, more loneliness will result in people who are different, being outcast from their physical communities and not finding ways to compensate.”

Aaron ChiaYuan Hung, associate professor of educational technology at Adelphi University, said, “I am concerned about the unhealthy fragmentation of society. More people than ever before are being exposed to confirmation bias because algorithms feed us what we like to see, not what we should see. Because so much of media (including news, popular culture, social media, etc.) is funded by getting our attention and because we are drawn to things that fit our worldview, we are constantly fed things programmed to drive us to think in particular ways. Because the economy is based so much on attention, it is hard to get tech companies to design products that nudge us out of our worldview, let alone encourage us to have civil discourse based on factual evidence about complex issues. Humans are more isolated today and often too insulated. They don’t learn how to have proper conversations with people they disagree with. They are often not open to new ideas. This isolation, coupled with confirmation bias, is fragmenting society. It could possibly be reduced or alleviated by the correct redesign and updating of digital technology.”

David Bray, distinguished fellow with the nonpartisan Stimson Center and the Atlantic Council, wrote, “In an era in which precision medicine is possible, so too will be precision bio-attacks, tailored and at a distance. This will become a national security issue if we don’t figure out how to better use technology to do the work of deliberative governance at the speed necessary to keep up with threats associated with pandemics. Exponentially reducing the time it takes to mitigate a biothreat agent will save lives, property and national economies. To do this, we need to:

  • “Automate detection by embedding electronic sensors and developing algorithms that take humans out of loop with characterizing a biothreat agent
  • “Universalize treatment methods by employing automated methods to massively select bacteriophages versus bacteria or antibody-producing E. coli versus viruses
  • “Accelerate mass remediation either via rain or the drinking water supply with chemicals to time-limit the therapy.”

These experts expressed frustration with the lack of effective government and corporate efforts to help solve, or at least mitigate, a number of wicked problems for humanity that are arising out of digital life. In their minds, those include challenges to democracy and the destruction of the human knowledge environment; the loss of individuals’ rights to privacy, data protection, security and human agency; and mass suffering due to online crime, mischief and harassment, to name a few. Most worry or expect that the leaders who are best positioned to effectively tackle these issues are motivated not to do so because they benefit too greatly from the status quo. Among the worst-case scenarios these experts imagined if there is no improvement are a worse-than-Orwellian cyber-dystopia in which governments or oligopolies control the internet and shape people’s preferences and decisions. They are concerned that the world will be divided into warring cyber-blocks.

Satish Babu, a pioneering internet activist based in India and longtime participant in ICANN and IEEE activities, said, “There will be many major concerns in the years ahead due to lack of effective attention to big issues. Social media and fake news will become more of a problem, enabling the hijacking democratic institutions and processes. There will continue to be insufficient regulatory control over Big Tech, especially for emerging technologies. There will be more governmental surveillance in the name of ‘national security.’ There will be an expansion of data theft and unauthorized monetization by tech companies. More people will become attracted by and addicted to gaming, and this will lead to self-harm. Cyber-harassment, bullying, stalking and the abetment of suicide will expand.”

Barry K. Chudakov, founder and principal at Sertain Research, wrote, “Digital technologies and digital systems change the OS (operating system) of human existence. We are moving from alphanumeric organization to algorithms and artificial intelligence; ones and zeroes and the ubiquity of miscellany will change how we organize the world. Considering human connections, governance and institutions, in each of those areas, digitization is a bigger change than going from horse and buggy to the automobile, a more pervasive change than land travel to air and space travel. This is a change that changes everything because soon there will hardly be any interaction, whether at your pharmacy or petitioning your congresswoman, that does not rely on digital technology to accomplish its ends. With that in mind, we might ask ourselves: Do we have useful insight into the grammar and operations of digital technologies and digital systems – how they work, and how they work on us? At the moment, the answer is no. By 2035 we will be more used to the prevalence of digital technologies, and we have a chance to gain more wisdom about them. Today the very thing we are starting to use most, the AI and the algorithms, the federated learning and quantum computing, is the thing we often know least about, and have almost no useful transparency protocols to help us monitor and understand it.

“Verifying digital information (all information is now digital) will continue to be a sine qua non for democracies. Lies, distortions of perceptions, insistence on self-serving assessments and pronouncements, fake rationales to cover treacheries – these threaten human connections, governance and institutions as few other things do. They not only endanger social and political interactions; they fray and ultimately destroy the fabric of civilized society. For this reason, by 2035 all information will come with verification protocols that render facts trustworthy or suspect; either true or false. The current ESG (Environmental, Social and Governance) initiative is a step in this direction.

“By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will be focused directly on human connections, governance and institutions. … We should work to put in place governance, yes; but first, we need a basic pedagogy, a comprehensive understanding of how humans use digital technology and digital systems. We teach English, history, trigonometry, physics and chemistry. All of these disciplines and more are profoundly affected by digital technology and humans’ use of digital systems. Yet, generally speaking we have less understanding about how humans use and respond to digital technology than we have about the surface of Mars. (We know more about the surface of Mars than the bottom of the ocean, as the Mars Reconnaissance Orbiter Mars is fully mapped but the ocean is not.) As a result, our social and political interactions are often undermined by digital realities (deepfakes, flaming, Instagram face, teen girl sadness and suicide rates rising), and many are left dazed and confused by the speed with which so many anchors of prior human existence are being uprooted or simply discarded. …

“We need radical transparency so these protocols and behavioral responses do not become invisible – handed over to tech developers to determine our freedoms, privacy and destiny. That would be dangerous for all of our social and political interactions. For the sake of optimizing human connections, governance and institutions, we need education 2.0: a broad, comprehensive understanding of the history of technology adoption and the myths that adoption fostered, and then an ongoing, regularly updated, observation deck/report that looks broadly across humans’ use of technologies to see how we are adapting to each technology, the implications of that adoption, and recommendations for optimizing human health and well-being.”

The co-founder of an online nonprofit news organization said, “Face-to-face interactions will become almost sacred. There will be an increasing number of physical spaces where screens are not allowed. In fact, this will often be a selling point of a destination. These spaces will be both private and public, and especially in spaces where attention and intention are held as very valuable if not sacred – churches, civic spaces, all kinds of retreats, resorts, bars, weddings and wedding venues, and restaurants. This will be a kind of norm. People will know more and more where and where not to use phones. They will be put in their places.”

Robert Bell, co-founder of the Intelligent Community Forum, commented, “The potential for AI to be used for evil is almost unlimited, and it is certain to be used that way to some extent. A relatively minor – if still frightening example – are the bots that pollute social media to carry out the agenda of angry minorities and autocratic regimes. Powerful AI will also give malign actors new ways to create a ‘post-truth’ society using such tools such as deepfake images and videos. On the more frightening side will be weapons of unprecedented agility and destructive power, able to adapt to a battlespace at inhuman speed and, if permitted, make decisions to kill. Our challenge is that technology moves fast, and governments move slowly. A company founder recently told me that we live in a 21st century of big, knotty problems but we operate in an economy formed in the 20th century after the Second World War, managed by 19th-century government institutions. Keeping AI from delivering on its frightening potential will take an immense amount of work in policy and technology and must succeed in a world where a powerful minority of nations will refuse to go along.”

Kelly Bates, president of the Interaction Institute for Social Change, said, “We will harm citizens if there are no or limited controls over hate speech, political bullying, body shaming, personal attacks and the planning of insurrections on social media/online.”

R Ray Wang, founder and principal at Constellation Research, said, “The biggest challenge will be the control that organizations such as the World Economic Forum and other ‘powers that be’ have over our ability to have independent thinkers and thinking challenge the power of private-public partnerships with a globalist agenda. Policies are being created around the world to take away freedoms humanity has enjoyed and move us more toward the police state of China. Existing lawmakers have not created the tech policies to provide us with freedoms in a digital era.”

Fernando Barrio, lecturer in business and law at Queen Mary University of London, commented, “To this point in their development, people’s uses of the new digital technologies are primarily responsible for today’s extreme concentration of wealth, the overt glorification of the trivial and superficial, an exacerbation of extremes and political polarization and a relativization of human rights violations that may surpass most such influences of the past. Blind techno solutionism and a concerted push for keeping technology unregulated under the false pretense that regulation would hinder its development (and its growth being paramount to human development and happiness) led us to the present. Anyone who believes the fallacy that unbridled technological development was the only thing that kept the planet functioning during the global pandemic fails to realize that those technologies could well have evolved even better in a different, more human-centered regulatory and ethical environment, very likely with more stability. There needs to be a substantial change in the way that society regulates technology, or the overall result will not be positive.”

Stephen Abram, principal at Lighthouse Consulting, Inc., wrote, “ChatGPT has only been released for six weeks as I write this, and is already changing strategic thinking. Our political and governance structures are not competent to comprehend the international, transformative and open challenge this technology offers, and regulation, if attempted, will fail. If we can invest in the conversational and agreements to manage the outcomes of generative AI – good, neutral and bad – and avoid the near-term potential consequences of offloading human endeavor, creativity, intelligence, decisions, nuance and more, we might survive the first wave of generative AI.

“As copycat generative AIs proliferate, this is a Gold Rush that will change the world. Misinformation, disinformation, political influence through social media. As the tools, including ChatGPT, allow for the creation of fake videos, voices, text and more, the problem is going to get far worse and democracies are in peril. We have not made a dent in the role of bad actors and disinformation and the part they play in democracies. This is a big hairy problem that is decades away from a framework, let alone a solution.

“TikTok has become somewhat transformational. Ownership of this platform aside, the role of fake videos and its strong presence in post-millennial demographics are of concern. Are any of the alternatives in place that are better? (Probably not). …

“ChatGPT will start with a ‘let-a-thousand-flowers-bloom’ strategy for a few years. As always, human adoption of the tools will go through a curve that takes years and results in adoption that can be narrow or broad and sometimes with different shares of usage in different segments. It is likely that programming and coding will adopt more quickly. Narrow tools such as those for conversational customer service, art (sadly including publishing, video, visual art), writing (including all forms of writing – presentations, scripts, speeches, white papers), and more will emerge gradually but quickly.”

Gina Neff, professor and director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, predicted, “By 2035 we will see large-scale systems with little room for opting out that lack the ability for people to rectify mistakes or hold systems and power accountable. The digital systems we now enjoy have been based up to now on an assumption of democratic control and governance. Challenges to democracy in democratic countries – and increasing use of AI systems for control by authoritarian governments in other countries – will mean that our digital systems will come with a high cost to freedom, privacy and rights.”

Jonathan Kolber, author of “A Celebration Society,” commented, “Without the emergence of a ‘third way,’ such as the restored and enhanced Venetian Republic-based model, the world will continue to crystallize into democracies and Orwellian states. Democracies will continue to be at risk of becoming fascist, regardless of the names it claims. As predicted as far back as the ancient Greeks, strongmen will emerge in times of crisis and instability, and accelerating climate change and accelerating automation with the attendant wholesale loss and disruption of jobs will provide these in abundance.

“Digital tools will enable a level of surveillance and control in all types of systems far beyond Orwell’s nightmares. Flying surveillance drones the size of insects, slaved to AI systems via satellite connections, will be mass-produced. These will be deployed individually or in groups according to shifting needs and conditions, according to the policy goals set by those with wealth and influence, those whom Adam Smith called ‘The Masters.’ In most cases, however, the drones will not be required for total surveillance and control of a populace. The ubiquitous phones and VR devices will suffice, with AIs discreetly monitoring all communication for signals deemed subversive or suspicious. Revolt will become increasingly difficult in such circumstances.

“We take universal surveillance as a given circa 2035. The only question becomes: Surveillance by whom, and to what effect? Our celebration society proposal turns this on its head.”

Micah Altman, social and information scientist at the Center for Research in Equitable and Open Scholarship at MIT, wrote, “There is more reason to be concerned than excited – not because digital life offers more peril than promise, but because the results of progress are incremental, while the results of failure could be catastrophic. Thus it is essential to govern digital platforms, to integrate social values into their design, and to establish mechanisms for transparency and accountability.

“The most menacing potential changes to life over the next couple of decades are the increasing concentration in the distribution of wealth, a related concentration of effective political power, and the ecological and societal disruptions likely to result from our collective failure to diligently mitigate climate change (further, the latter is related to the former).

“As a consequence, the most menacing potential changes to digital life are those that facilitate this concentration of power: The susceptibility of digital information and social platforms to be used for disinformation, for monopolization (often through the monetization and appropriation of information generated by individuals and their activities), and for surveillance. Unfortunately, the incentives for the creation of digital platforms, such as the monetization of individual attention, has created platforms on which it is easy to spread disinformation to 10 million people and monitor how they react, but hard to promote a meaningful discussion among even a hundred people.”

Alexander Klimburg, senior fellow at the Institute of Advanced Studies, Austria, predicted, “In the worst cases by 2035, two nightmare scenarios can develop – firstly, an age of warring cyber-blocks, where different internets are the battlefield for a ferocious battle between ideological intractable foes – democracies against the authoritarian regimes. In this scenario, a new forever war, not unlike the Global War on Terror but instead state-focused, keeps us mired in tit-for-tat attacks on critical infrastructure, undermines governments and destroys economies. A second nightmare is similar, but in some ways worse: The authoritarian voices who want a state-controlled Internet win the global policy fight, leading to a world where either governments or very few duopolies control the Internet, and therefore our entire news consumption, and censor our output, automatically shaping our preferences and beliefs along the way. Either the lights go out in cyberwar, or they never go out in a type of Orwellian cyber-dystopia that even democracies will not be fully safe from.”

Alexander Halavais, associate professor of social data science at Arizona State University, responded, “Cyberwar is already here and will increase in the coming decades. The hopeful edge of this may appear to be a reduction in traditional warfighters, but in practice this means that the front is everywhere. Along with the proliferation of strong encryption and new forms of small-scale autonomous robotics, the security realm will become increasingly unpredictable and fraught. I suspect there will be a combination of populist leaders seeking to capitalize on uses of disinformation, and others retreating from democratic structures in order to preserve technocratic and knowledge-based government. These paired tendencies are already visible, but if they become entrenched in some of the largest countries (and particularly in the United States), they will contribute to growing political and economic instability. There will be new, stronger national borders that will make international trade, as well as global cosmopolitanism, recede.”

David Clark, Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, commented, “The use of the Internet as a tool for interstate conflict (and conflict between state and nonstate actors) may have increasing real-world consequences. We may see increasing restriction of cross-border interaction at the application layer. The current abuse of social media for manipulative purposes is going to bring greater government attention to the experience, which may lead to a period of turbulent regulation with inconsistent character across the globe. The abuse of social media may lead to continued polarization of societies, which will have an uncertain, but potentially dramatic, effect on the nature of the Internet and its apps. Attacks and manipulation of online content may overwhelm the ability of defenders to maintain what they consider a factually grounded basis, and sites like Wikipedia may become less trustworthy.

“Those who view the Internet as a powerful tool for social action may come to realize that social movements have no special claim to the Internet as a tool – governments may have been slow to understand the power of the Internet but are learning how to shape the Internet experience of their citizens in powerful ways. The Internet can either become a tool for freedom or a tool for repression and manipulation, and we must not underestimate the motivation and capabilities of powerful organized actors to impose their desired character on the Internet and its users.”

Jeffrey D. Ullman, professor emeritus of computer science, Stanford University, commented, “While I am fairly confident that the major risks from the new technologies have technological solutions, there are a number of serious risks. Social media is responsible for the polarization of politics. It is no longer necessary to get your news from reasonable, responsible sources and many people have been given blinders that let them see only what they already believe. If this trend persists, we will see more events like Jan. 6, 2021, or the recent events in Brazil, possibly leading to social breakdown. I recall that with the advent of online gaming, it was claimed that ‘100,000 people live their lives primarily in cyberspace.’ I believe it was referring to things like playing World of Warcraft all day; 100,000 isn’t a real problem, but what if virtual reality (the metaverse) becomes a reality by 2035, as it probably will, and 100 million people are spending their lives there?”

Dan Lynch, internet pioneer and inventor of CyberCash, wrote, “I’m concerned about the huge reliance on digital systems while the amount of illegal activity is growing daily. One really can’t trust everything. Sure, buying stuff from Amazon is easy and it really doesn’t matter if a few things are dropped or missing. I suggest you stay away from the money apps! Their underlying math is shaky. I know. I invented CyberCash in the mid-1990s.”

Richard F. Forno, principal lecturer and director of the graduate cybersecurity program at the University of Maryland-Baltimore County, wrote, “As a cybersecurity professor rooted in the humanities, I worry that, as with most new technologies, individuals and society will be more interested in the likely potential benefits, conveniences, cost savings and the ‘cool factor’ and fail – or be unwilling – to recognize or even consider, the potential risks or ramifications. Over time, that can lead to info-social environments in which corruption, abuse and criminality thrive at the hands of a select few political or business entities, which in turn presents larger social problems requiring remediation.”

Robert M. Mason, a University of Washington professor emeritus expert in the impact of social media on knowledge work, said, “The erosion of trust and faith in human institutions is of concern. Expanded accessibility to a wider range of technologies and applications for storing and promoting falsehoods under the pretense of sharing information and knowledge is detrimental. Then there is also the growth in the number of ‘influencers’ who spread rumors based on false and incomplete information. In addition, the increased expectation of having rapid access to information and people’s accompanying impatience with delays or uncertainties associated with issues that require deeper research or analysis is extremely troublesome. There continues to be an erosion of trust in the institutions that value and support critical thinking and social equity.”

Raquel Gatto, general consul and head of legal for the network information center of Brazil, NIC.br, said, “The most harmful and menacing change by 2035 would be the overregulation that breaks the Internet. The risk of fragmentation that entails a misleading conceit of digital sovereignty is rising and needs to be addressed in order to avoid the loss of the open and global Internet that we know and value today.”

Tim Bray, a technology leader who has worked for Amazon, Google and Sun Microsystems, predicted, “The final collapse of the cryptocurrency/Web3 sector will be painful, and quite a few people will lose a lot of money – for some of them it’s money they can’t afford to lose. But I don’t think the danger will be systemic to any mainstream sector of the economy. Autocrats will remain firmly in control of China and Russia, and fascist-adjacent politicians will hold power in Israel and various places around Eastern Europe. In Africa and Southeast Asia, autocratic governments will be more the rule rather than the exception. A substantial proportion of the U.S. electorate will be friendly to antidemocratic forces. Large-scale war is perfectly possible at any moment should [Chinese President] Xi Jinping think his interests are served by an invasion of Taiwan. These maleficent players are increasingly digitally sophisticated. So my concern is not the arrival of malignant new digital technologies, but the lethal application of existing technologies to attack the civic fabric and defense capabilities of the world’s developed, democratic nations.”

Akah Harvey, director of engineering at Seven GPS, Cameroon, wrote, “We have to think long and hard about in just which industry domains we let artificial intelligence provide work product without some sort of rules in regard to it. We are soon going to have AI lawyers in our courts. What should we allow as acceptable from that AI in that setting? The danger in using these tools is the bias they may bring which we may not have yet conceived ever in the industry. This has the potential to sway judgment in a way that doesn’t render justice.

“Artificial intelligence that passes the Turing Test must be explainable. When people give up the security of their digital identity for a little more convenience, the risk could be far too great for the damage potential it represents. When we are interacting with agents, there’s need for proper identification as to whether that agent is an AI (acting autonomously) or a human. These tools are beating the test more and more these days; they can impersonate humans to carry out acts that could jeopardize the stability of any given institution and even global peace at large. In addition, we are most likely to be seeing more and more jobs taken over entirely by artificial entities. The dangers are existential and public policy needs to keep up as fast as it can as these new tools continue to evolve.”

Greg Sherwin, a leader in digital experimentation with Singularity University, wrote, “Humans on the wrong side of the digital divide will find themselves with all of the harms of digital technologies and little or no agency to control them or push back. This includes everything from insidious, pervasive dark patterns to hijack attention and motivation to finding themselves on the wrong end of algorithmic decision-making with no sense of agency nor recourse. This will result in mental health crises, loneliness and potential acts of resistance, rebellion and violence that further condemn and stigmatize marginalized communities.”

Alan Inouye, director of the office for information technology policy at the American Library Association, commented, “Perhaps ironically, the most harmful aspects by 2035 will arise from our very ubiquitous access to advanced technology. As the technology access playing field will become somewhat more level, the distinguishing difference or competitive advantage will be knowledge and social capital. Thus, the edge with ubiquitous access to advanced technology goes to knowledge workers and those highly proficient with the online world, and those who are well connected in that world. A divide between these people and others will become more visible, and resentment will build among those who do not understand that their profound challenge is in the realm of lacking adequate knowledge and social capital.

“It will take considerable education of and advocacy with policymakers to address this divide. The lack of a device or internet access is an obvious deficiency and plain to see, and policy solutions are relatively clear. Inadequate digital literacy and ability to engage in economic opportunity online is a much more profound challenge, much beyond one-time policy prescriptions as training classes or online modules. This is the latest stage of our society’s education and workforce challenge generally, as we see an increasing bifurcation of high achievers and low achievers in the U.S. education and workforce systems.”