Most respondents to this canvassing wrote brief reactions to this research question. However, a number of them wrote multilayered responses in a longer essay format. This essay section of the report is quite lengthy, so first we offer a sampler of some of these essayists’ comments.
- Liza Loop observed, “Humans evolved both physically and psychologically as prey animals eking out a living from an inadequate supply of resources. … The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.”
- Richard Wood predicted, “Knowledge systems with algorithms and governance processes that empower people will be capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom’ and subjecting such knowledge to democratic critique and discussion, i.e., a true ‘democratic public arena’ that is digitally mediated.”
- Matthew Bailey said he expects that “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet as part of a new well-being paradigm for humanity to thrive.”
- Judith Donath warned, “The accelerating ability to influence our beliefs and behavior is likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians.”
- Kunle Olorundare said, “Human knowledge and its verifying, updating, safe archiving by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge and society will be better off.”
- Jamais Cascio said, “It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. … Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles.”
- Lauren Wilcox explained, “Interaction risks of generative AI include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and reliance on them.”
- Catriona Wallace looked ahead to in-body tech: “Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully-customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested.”
- Stephen Downes predicted, “Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with AI recognizing us through our physical characteristics, habits and patterns of behaviour. Total surveillance allows an often-unjust differentiation of treatment of individuals.”
- Giacomo Mazzone warned, “With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ AI and a distorted use of technologies could bring mass-control of societies.”
- Christine Boese warned, “Soon all high-touch interactions will be non-human. NLP [natural language processing] communications will seamlessly migrate into all communications streams. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces … I see harm in ubiquity.”
- Jonathan Grudin spoke of automation: “I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice’s army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it already.”
- Michael Dyer noted we may not want to grant rights to AI: “AI researchers are beginning to narrow in on how to create entities with consciousness; will humans want to give civil rights and moral status to synthetic entities who are not biologically alive? If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.”
- Avi Bar-Zeev preached empowerment over exploitation: “The key difference between the most positive and negative uses of XR [extended reality], AI and the metaverse is whether the systems are designed to help and empower people or to exploit them. Each of these technologies sees its worst outcome quickly if it is built to benefit companies that monetize their customers.”
- Beth Noveck predicted that AI could help make governance more equitable and effective and raise the quality of decision-making, but only if it is developed and used in a responsible and ethical manner, and “if its potential to be used to bolster authoritarianism is addressed proactively.”
- Charalambos Tsekeris said, “Digital technology systems are likely to continue to function in shortsighted and unethical ways, forcing humanity to face unsustainable inequalities and an overconcentration of techno-economic power. These new digital inequalities could amount to serious, alarming threats and existential risks for human civilization.”
- Alejandro Pisanty wrote, “Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents.”
- Maggie Jackson said, “Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. ‘Human-compatible AI’ is designed to be open to and adaptable to multiple possible scenarios.”
- Barry K. Chudakov observed, “We are sharing our consciousness with our tools. They can sense what we want, can adapt to how we think; they are extensions of our cognition and intention. As we go from adaptors to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand.”
- Marcel Fafchamps urged that humanity should take action for a better future: “The most menacing change is in terms of political control of the population … The world urgently needs Conference of the Parties (COP) meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law.”
What follows is the full set of essays submitted by numerous leading experts who responded to this survey.
When asked to weigh in and share their insights, these experts were prompted to first share their thoughts on the best and most beneficial change they expect by 2035. In a second question they were asked about the most harmful or menacing change they foresee; thus, most of these essays open first with perceived benefits and conclude with perceived harms. Because 79% of the experts in this survey said they are “more concerned than excited” or are “equally concerned and excited” about the evolution of humans’ uses of digital tools and systems, many of these essays focus primarily on harms. Some wrote only about the most worrisome trendlines, skipping past the request to also describe the many benefits to be found in rapidly advancing digital change. In cases where they wrote extensively about both benefits and harms, we have inserted some boldface text to indicate that transition.
Clifford Lynch: There will be vastly more encoding of knowledge, leading to significant advances in scientific and technological discovery
Lynch, director of the Coalition for Networked Information, wrote, “One of the most exciting long-term developments – it is already well advanced and will be much further along by 2035 – is the restructuring, representation or encoding of much of our knowledge, particularly in scientific and technological areas, into forms and structures that lend themselves to machine manipulation, retrieval, inference, machine learning and similar activities. While this started with the body of scholarly knowledge, it is increasingly extending into many other areas; this restructuring is a slow, very large-scale, long-term project, with the technology evolving even as deployment proceeds. Developments in machine learning, natural language processing and open-science practices are all accelerating the process.
“The implications of this shift include greatly accelerated progress in scientific discovery (particularly when coupled with other technologies such as AI and robotically controlled experimental apparatus). There will be many other ramifications, many of which will be shaped by how broadly public these structured knowledge representations are, and to what extent we encode not only knowledge in areas like molecular biology or astronomy but also personal behaviors and activities. Note that for scholarly and scientific knowledge the movements toward open scholarship and open-science practices and the broad sharing of scholarly data mean that more and more scholarly and scientific knowledge will be genuinely public. This is one of the few areas of technological change in our lives where I feel the promise is almost entirely positive, and where I am profoundly optimistic.
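The kind of machine-actionable encoding Lynch describes is already visible in today’s linked-data standards such as RDF and SPARQL. As a minimal illustrative sketch – using the open-source Python rdflib library and an invented example.org vocabulary, not any particular system Lynch names – a scientific assertion can be stored as subject-predicate-object triples and then retrieved by machine query rather than by reading prose:

```python
# Minimal sketch of machine-queryable knowledge encoding with rdflib.
# The example.org vocabulary here is invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Encode assertions from molecular biology as subject-predicate-object triples.
g.add((EX.CRISPR_Cas9, RDF.type, EX.GeneEditingTechnique))
g.add((EX.CRISPR_Cas9, RDFS.label, Literal("CRISPR-Cas9")))
g.add((EX.CRISPR_Cas9, EX.modifies, EX.GenomicDNA))

# A program can now retrieve every known gene-editing technique via SPARQL,
# with no natural-language parsing required.
query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?technique a <http://example.org/GeneEditingTechnique> ;
                   rdfs:label ?label .
    }
"""
for row in g.query(query):
    print(row.label)  # -> CRISPR-Cas9
```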
“The emergence of the so-called ‘geospatial singularity’ – the ability to easily obtain near-continuous high-resolution multispectral imaging of almost any point on Earth, and to couple this data in near-real-time with advanced machine learning and analysis tools, plus historical imagery libraries for comparison purposes, and the shift of such capabilities from the sole control of nation-states to the commercial sector – also seems to be a force primarily for good. The imagery is not so detailed as to suggest an urgent new threat to individual privacy (such as the ability to track the movement of identifiable individuals), but it will usher in a new era of accountability and transparency around the activities of governments, migrations, sources of pollution and greenhouse gases, climate change, wars and insurgencies and many other developments.
“We will see some big wins from technology that monitors various individual health parameters like current blood sugar levels. These are already appearing. But to have a large-scale impact they’ll require changes in the health care delivery system, and to have a really large impact we’ll also have to figure out how to move beyond sophisticated users who serve as their own advocates to a broader and more equitable deployment in the general population that needs these technologies.
“There are many possibilities for the worst potential technological developments between now and 2035 for human welfare and well-being, and they tend to reinforce each other in various dystopian scenarios. I have to say that we have a very rich inventory of technologies that might be deployed in the service of what I believe would be evil political objectives; saving graces here will be political choices, if there are any.
“Social media as an environment for propaganda and disinformation, for targeting information delivery to audiences rather than supporting conversations among people who know each other, as well as a tool for collecting personal information on social media users, seems to be a cesspool without limit.
“The sooner we can see the development of services and business models that allow people who want to use social media for relatively controlled interaction with other known people without putting themselves at risk of exposure to the rest of the environment, the better. It’s very striking to me to see how more and more toxic platforms for social media communities continue to emerge and flourish. These are doing enormous damage to our society.
“I hope we’ll see social media split into two almost distinct things. One is a mechanism for staying in touch with people you already know (or at least once knew); here we’ll see some convergence between computer-mediated communication more broadly (such as video conferencing) and traditional social media systems. I see this kind of system as a substantial good for people, and in particular a way of offsetting many current trends toward the isolation of individuals for various reasons. The other would be the environment targeting information delivery to audiences rather than supporting conversations among friends who know each other. The split cannot happen soon enough.
- “One cross-cutting theme is the challenges to actually achieving the ethical or responsible use of technologies. It’s great to talk about these things, but these conversations are not likely to survive the challenges of marketplace competition. I absolutely despair at the fact that a reluctance to deploy autonomous weapons systems is not likely to survive the crucible of conflict. I am also concerned that too many people are simply whining about the importance of taking cautious, slow, ethical, responsible approaches rather than thinking constructively and specifically about how to accomplish this in the real-world scenarios we will actually need to understand and manage.
- “I’m increasingly of the opinion that so-called ‘generative AI’ systems, despite their promise, are likely to do more harm than good, at least in the next 10 years. Part of this is the impact of deliberately deceptive deepfake variants in text, images, sound and video, but it goes beyond this to the proliferation of plausible-sounding AI-generated materials in all of these genres as well (think advertising copy, news articles, legislative commentary or proposals, scholarly articles and so many more things). I’d really like to be wrong about this.
- “I’d like to believe brain-machine interfaces (where I expect to see significant progress in the coming decade or so) will be a force for good – there’s no question that they can do tremendous good, and perhaps open up astounding new opportunities for people, but again I cannot help but be doubtful that these will be put to responsible uses. For example, think about using such an interface as a means of interrogating someone, as opposed to a way of enabling a disabled person. There are also, of course, more neutral scenarios such as controlling drones or other devices.
- “There will be disruption in expectations of memorization and a wide variety of other specific skills in education and in qualification for employment in various positions. This will be disruptive not only to the educational system at all levels but to our expectations about the capabilities of educated or adult individuals.
- “Related to these questions but actually considerably distinct will be a substantial reconsideration of what we remember as a culture, how we remember and what institutions are responsible for remembering. We’ll also revisit how and why we cease to remember certain things.
- “Finally, I expect that we will be forced to revisit our thinking in regard to intellectual property and copyright, about the nature of creative works and about how all of these interact not only with the rise of structured knowledge corpora, but even more urgently with machine learning and generative AI systems broadly.”
Judith Donath: Our world will be profoundly influenced by algorithmically generated media tuned to our desires and vulnerabilities
Donath, senior fellow at Harvard’s Berkman Center and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Persuasion is the fundamental goal of communication. But, although one might want to persuade others of something false, persuasiveness has its limits. Audiences generally do not wish to be deceived, and thus communication throughout the living world has evolved to be, while not 100% honest, reliable enough to function.
“In human society by 2035, this balance will have shifted. AI systems will have developed unprecedented persuasive skills, able to reshape people’s beliefs and redirect their behavior. We humans won’t quite be an army of mindless drones, our every move dictated by omnipotent digital deities, but our choices and ultimately our understanding of the world will be profoundly influenced by algorithmically generated media exquisitely tuned to our individual desires and vulnerabilities. We are already well on our way to this. Companies such as Google and Facebook have become multinational behemoths (and their founders, billionaires) by gathering up all our browsings and buyings and synthesizing them into behavioral profiles. They sell this data to marketers for targeting personalized ads and they feed it to algorithms designed to encourage the endless binges of YouTube videos and social posting, providing an unbounded canvas for those ads.
“New technologies will add vivid detail to those profiles. Augmented-reality systems need to know what you are looking at in order to layer virtual information onto real space: The record of your real-world attention joins the shadow dossier. And thanks to the descendants of today’s Fitbits and Ouras, the records of what we do will be vivified with information about how we feel – information about our anxieties, tastes and vulnerabilities that is highly valuable for those who seek to sway us.
“Persuasion appears in many guises: news stories, novels and postings scripted by machine and honed for maximum virality; co-workers, bosses and politicians who gain power through stirring speeches and astutely targeted campaigns. By 2035, one of the most potent forms may well be the virtual companion, a comforting voice that accompanies you everywhere, her whispers ensuring you never get lost, never are at a loss for a word, a name or the right thing to say.
“If you are a young person in the 2030s, she’ll have been your companion since you were small – she accompanied you on your first forays into the world without parental supervision; she knew the boundaries of where you were allowed to go and when you headed out of them, she gently yet irresistibly persuaded you to head home instead. Since then, you never really do anything without her. She’s your interface to dating apps. Your memory is her memory. She is often quiet, but it is comforting to know she is there accompanying you, ensuring you are never lost, never bored. Without her, you really wouldn’t know what to do with yourself.
“Persuasion could be used to advance good things – to promote cooperation, daily flossing, safer driving. Ideally, it would be used to save our over-crowded, over-heating planet, to induce people to buy less, forego air travel, eat lower on the food chain. Yet even if used for the most benevolent of purposes, the potential persuasiveness of digital technologies raises serious and difficult ethical questions about free will, about who should wield such power.
“These questions, alas, are not the ones we are facing. The accelerating ability to influence our beliefs and behavior is far more likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians. The question we face instead is: How do we prevent this?”
Mark Davis: ‘Humanity risks drowning in a rising tide of meaningless words … that risk devaluing language itself’
Davis, an associate professor of communications at the University of Melbourne, Australia, whose research focuses on online “anti-publics” and extreme online discourse, wrote, “There must be and surely will be a new wave of regulation. As things stand, digital media threatens the end of democracy. The structure, scale and speed of online life exceed deliberative and cooperative democratic processes. Digital media plays into the hands of demagogues, whether it be the libertarians whose philosophy still dominates Western tech companies and the online cultures they produce or the authoritarian figures who restrict the activities of tech companies and their audiences in the world’s largest non-democratic state, China.
“How do we regulate to maximise civic processes without undermining the freedom of association and opinion the internet has given us? This is one of the great challenges of our times.
“AI, currently derided as presaging the end of everything from university assessment to originality in music, can perhaps come to the rescue. Hate speech, vilification, threats to rape and kill, and the amplification of division that has become generic to online discussion can all potentially be addressed through generative machine learning. The so-far-missing components of a better online world, however, have nothing to do with advances in technology: wisdom and an ethics of care. Are the proprietors and engineers of online platforms capable of exercising these all-too-human attributes?
“Humanity risks drowning in a rising tide of meaningless words. The sheer volume of online chatter generated by trolls, bots, entrepreneurs of division and now apps like ChatGPT, risks devaluing language itself. What is the human without language? Where is the human in the exponentially wide sea of language currently being produced? Questions about writing, speech and authenticity structure Western epistemology and ontology, which are being restructured by the scale, structure and speed of digital life.
“Underneath this are questions of value. What speech is to be valued? Whose speech is to be valued? The exponential production of meaningless words, that is, words without connection to the human, raises questions about what it is to be human. Perhaps this will be a saving grace of AI: that it forces a revaluation of the human, since the rising tide of words raises the question of what gives words meaning. Perhaps, however, there is no time or opportunity for this kind of reflection, given the commercial imperatives of digital media, the role platforms play in the global economy, or the way we, as thinkers, citizens, humans, use their content to fill almost every available silence.”
Jamais Cascio: When AI advisors ‘on our shoulders’ whisper to us, will their counsel be from the devil or angel? Officials or industries?
Cascio, distinguished fellow at the Institute for the Future, wrote, “The benefits of digital technology in 2035 will come as little surprise for anyone following this survey: Better-contextualized and explained information; greater awareness about the global environment; clarity about surroundings that accounts for and reacts to not just one’s physical location but also the ever-changing set of objects, actions and circumstances one encounters; the ability to craft ever more immersive virtual environments for entertainment and comfort; and so forth. The usual digital nirvana stuff.
“The explosion of machine learning-based systems (like GPT or Stable Diffusion) doesn’t alter that broad trajectory much, other than that AI (for lack of a better and recognizable term) will be deeply embedded in the various physical systems behind the digital environment. The AI gives context and explanation, learning about what you already know. The AI learns what to pay attention to in your surroundings that may be of personal interest. The AI creates responsive virtual environments that remember you. (All of this would remain the likely case even if ML-type [machine learning-type] systems get replaced by an even more amazing category of AI technology, but let’s stick with what we know is here for now.)
“However, this sort of AI adds a new element to the digital cornucopia: autocomplete. Imagine a system that can take the unique and creative notes a person writes and, using what it has learned about the individual and their thoughts, turn those notes into a full-fledged written work. The human can add notes to the drafts, becoming an editor of the work that they co-write with their personalized system. The result remains unique to that person and true to their voice but does not require that the person create every letter of the text. And it will greatly speed up the process of creation.
“What’s more, this collaboration can be flipped, with the (personalized, true-to-voice) digital system providing notes, observations and even edits to the fully human-written work. It’s likely that old folks (like me) would prefer this method, even if it remains stuck at a human-standard pace.
“Add to that the ability to take the written creation and transform it into a movie, or a game, or a painting, in a way that remains true to the voice and spirit of the original human mind. A similar system would be able to create variations on a work of music or art, transforming it into a new medium but retaining the underlying feeling.
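A toy version of the notes-to-draft “autocomplete” loop Cascio imagines can already be prototyped. The sketch below assumes the OpenAI Python client (openai>=1.0) purely for illustration; the voice-profile string is an invented stand-in for the personalized, true-to-voice model he describes, and any chat-capable model could fill the role:

```python
# Toy sketch of co-writing "autocomplete": terse notes in, voiced draft out.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY
# environment variable; the voice profile is an invented placeholder.
from openai import OpenAI

client = OpenAI()

VOICE_PROFILE = "Short declarative sentences, dry humor, first person."

def expand_notes(notes: str) -> str:
    """Expand terse notes into a fuller draft that imitates the author's voice."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system",
             "content": f"You are a co-writing assistant. Match this voice: {VOICE_PROFILE}"},
            {"role": "user",
             "content": f"Expand these notes into a short draft:\n{notes}"},
        ],
    )
    return response.choices[0].message.content

# The human stays in the loop as editor: read the draft, revise the notes, repeat.
print(expand_notes("- opened the shop in 2009\n- first customer paid in stamps"))
```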
“Computer games will find this technology system of enormous value, adding NPCs [non-player characters] based on machine learning that can respond to whatever the player says or does according to context and the in-game personality, not a basic script. It’s an autocomplete of the imagined world. This will be welcomed by gamers at first, but quickly become controversial when in-game characters can react appropriately when the player does something awful (but funny). I love the idea of an in-game NPC saying something like ‘hey man, not cool’ when the player says something sexist or racist.
“As to the possible downsides, where to begin? The various benefits I described above can be flipped into something monstrous using the exact same types of technology. Systems of decontextualization, providing raw data – which may or may not be true – without explanation or with incomplete or biased explanations. Contextless streams of info about how the world is falling apart without any explanation of what changes can be made. Systems of misinformation or censorship, blocking out (or falsely replacing) external information that may run counter to what the system (its designers and/or its seller) wants you to see. Immersive virtual environments that exist solely to distract you or sell you things. And, to quote Philip J. Fry on ‘Futurama,’ ‘My god, it’s full of ads.’
“Machine learning-based ‘autocomplete’ technologies that help expand upon a person’s creative work could easily be used to steer a creator away from or toward particular ideas or subjects. The system doesn’t want you to write about atheism or paint a nude, so the elaborations and variations it offers up push the creator away from bad themes.
“This is especially likely if the machine learning AI tools come from organizations with strong opinions and a wealth of intellectual property to learn from. Disney. The Catholic Church. The government of China. The government of Iran. Any government, really. Even that mom and pop discount snacks and apps store on the corner has its own agenda.
“What’s especially irritating is that nearly all of this is already here in nascent form. Even the ‘autocomplete’ censorship can be seen: Both GPT-3 and Midjourney (and likely nearly all of the other machine learning tools open to the public) currently put limits on what they can discuss or show. All with good reason, of course, but the snowball has started rolling. And whether or not the digital art theft/plagiarism problem will be resolved by 2035 is left an exercise for the reader.
“The intersection of machine learning AI and privacy is especially disturbing, as there is enormous potential for the invasion not just of the information about a person, but of what the person believes or thinks, based on the mass collection of that person’s written or recorded statements. This would almost certainly be used primarily for advertising: learning not just what a person needs, but what weird little things they want. We currently worry about the (supposedly false) possibility that our phones are listening to us talk to create better ads; imagine what it’s like to have our devices seemingly listening to our thoughts for the same reason.
“It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles. Gatekeeping the visual commons is inevitably a part of any kind of persistent augmented reality world, with people having to pay extra to see certain clothing designs or architecture. Demoralizing deepfakes of public figures (not porn, but fakes showing them what they could have done right if they were better people).
“Advisors on our shoulders (in our glasses or jewelry, more likely) that whisper advice to us about what we should and should not say or do. Not devils and angels, but officials and industry. … Now I’m depressed.”
Christine Boese: ‘We are hitting the limits of human-directed technology’ as machine learning outstrips human cognition
Boese, vice president and lead user-experience designer and researcher at JPMorgan Chase financial services, wrote, “I’m having a hard time seeing around the 2035 corners because deep structural shifts are occurring that could really reframe everything on the level of electricity and electric light, or the advent of radio broadcasting (which I think was more groundbreaking for human connectedness than television).
“These reframing technologies live inside rapid developments in natural language processing (NLP) and GPT-3 and GPT-4, which will have beneficial sides, but also dark sides, things we are only beginning to see with ChatGPT.
“The biggest issue I see in making NLP gains truly beneficial is the problem that humanity doesn’t scale very well. That statement alone needs some unpacking. I mean, why should humanity scale? With a population on the way to 9 billion and assumptions of mass delivery of goods and services, there are many reasons for merchants and providers to want humanity to scale, but mass scaling tends to be dehumanizing. Case in point: Teaching writing at the college level. We’ve tried many ways to make learning to write not so one-on-one teaching intensive, like an apprenticeship skill, with workshops, peer review, drafting, computer-assisted pedagogies, spell check, grammar and logic screeners. All of these things work to a degree, but to really teach someone what it takes to be a good writer, nothing beats one-on-one. Teaching writing does not scale, and armies of low-paid adjuncts and grad students are being bled dry to try to make it do so.
“Could NLP help humanity scale? Or is it another thing that the original Modernists in the 1920s objected to about the dehumanizing assembly lines of the Industrial Revolution? Can we actually get to High Tech/High Touch, or are businesses which run like airlines, with no human-answered phone lines, the model of the future?
“That is a corner I can’t see around, and I’m not ready to accept our nearly-sentient, uncanny GPT-4 Overlords without proof that humanity and the humanities are not lost in mass scalability and the embedded social biases and blind spots that come with it.
“We are hitting the limits of human-directed technology as well, and machine learning management of details is quickly outstripping human cognition. ‘Explainability’ will be the watchword, but with an even bigger caveat: One of the biggest symptoms of long COVID could turn out to be permanent cognitive impairment in humans. This could become a species-level alteration, where it is not even possible for us to evolve into Morlocks; we may already, of necessity, be Eloi.
“To that end, the machines may have to step up, and this could be a critical and crucial benefit if the machines are up to it. If human intellectual capacity is dulled with COVID-19 brain fog, an inability to concentrate, to retain details and so on, it stands to reason humanity may turn to McLuhan-type extensions and assistance devices. Machines may make their biggest advances in knowledge retention, smart lookups, conversational parsing, low-level logic and decision-making, and assistance with daily tasks and even work tasks right at the time when humans need this support the most. This could be an incredible benefit. And it is also chilling.
“Technological dystopias are far easier to imagine than benefits. There are no neutral tools. Everything exists in social and cultural contexts. In the space of AI/ML in general, specialized ML will accomplish far more than unsupervised or free-ranging AI. I feel that the limits of the hype in this space are quickly being reached, to the point that it may stop being called ‘artificial intelligence’ very soon. I do not yet feel the overall benefit or threat will come directly from this space, on par with what we’ve already seen from Cambridge Analytica-style machinations (which had limited usefulness for algorithmic targeting, and more usefulness in news feed force-feeding and repetition). We are already seeing a rebellion against corporate walled gardens and invisible algorithms in the Fediverse and the ActivityPub protocol, which have risen suddenly with the rapid collapse of Twitter.
“Natural language processing is the exception, on the strength of the GPT project incarnations, including ChatGPT. Already I am seeing a split in the AI/ML space, where NLP is becoming a completely separate territory, with different processes, rules and approaches to governance. This specialized ML will quickly outstrip all other forms of AI/ML work, even image recognition. …
“Soon all high-touch interactions will be non-human, no longer dependent on constructed question-and-answer keyword scripts. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces. Some may ask, ‘Where’s the harm in that? These machines could provide better support than humans and they don’t sleep or require a paycheck and health benefits.’
“Perhaps this does belong in the benefits column. But here is where I see harm in ubiquity (along with Plato’s old argument about outsourcing the brain): Humans have flaws. Machines have flaws. A bad customer service representative will not scale up harms massively. A bad machine customer-service protocol could scale up harms massively. Further, NLP machine learning happens in sophisticated and many-layered ensembles, many so complex that Explainable AI can only use other models to unpack model ensembles – humans can’t do it. How long does it take language and communication ubiquity to turn into outsourced decisions? Or for predictive outcomes to migrate into automated fixes with no carbon-based oversight at all?
“Take just one example: Drone warfare. Yes, a lot of this depends on image processing as well as remote monitoring capabilities. We’ve removed the human risk from the air (the drones are unmanned) but not on the ground (where it can be catastrophic). Digitization means replication and mass scalability brought to drone warfare, and the communication and decision support will have NLP components. NLP logic processing can also lead to higher levels of confidence in decisions than is warranted. Add into the mix the same kind of malignant or bad actors as we saw within the manipulations of a Cambridge Analytica, a corporate bad actor, or a governmental bad actor, and we can easily get to a destabilized planet on a mass scale faster than the threat (with high development costs) of nuclear war ever did.”
Jerome C. Glenn: Initial rules of the road for artificial general intelligence will determine if it ‘will evolve to benefit humanity or not’
Glenn, CEO of The Millennium Project, wrote, “AI is advancing so rapidly that some experts believe AGI could emerge before the end of this decade, hence it is time to begin serious deliberations about it. National governments and multilateral organizations like the European Union, the Organization for Economic Cooperation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have identified values and principles for artificial narrow intelligence and national strategies for its development. But little attention has been given to identifying how to establish beneficial initial global governance of artificial general intelligence (AGI). Many experts expect that AGI will be developed by 2045. It is likely to take 10, 20 or more years to create and ratify an international AGI agreement on the beneficial initial conditions for AGI and establish a global AGI governance system to enforce and oversee its development and management. This is important for governments to get right from the outset. The initial conditions for AGI will determine if the next step in AI – artificial super intelligence (ASI) – will evolve to benefit humanity or not. The Millennium Project is currently exploring these issues.
“Up to now, most AI development has been in artificial narrow intelligence (ANI) – AI with a narrow purpose. AGI is a general-purpose AI that can learn, edit its own code and act autonomously to address novel and complex problems with novel and complex strategies similar to or better than humans. Artificial super intelligence (ASI) is AGI that has moved beyond this point to become independent of humans, developing its own purposes, goals and strategies without human understanding, awareness or control and continually increasing its intelligence beyond humanity as a whole.
“Full AGI does not now exist, but the race is on. Governments and corporations are competing for the leading edge in AI. Russian President Vladimir Putin has said whoever takes the lead on AI will rule the world, and China has made it clear since it announced its AI intentions in 2017 that it plans to lead international competition by 2030. In such a rush to success, DeepMind co-founder and CEO Demis Hassabis has said, people may cut corners, making future AGI less safe. Simultaneously adding to this race are advances in neurosciences being reaped in human brain projects in the European Union, United States, China, Japan and other regions.
“Today’s cutting edge is large platforms being created by joining many ANIs. One example is Gato by Google DeepMind, a deep neural network that can perform 604 different tasks, from managing a robot to recognizing images and playing games. It is not an AGI, but Gato is more than the usual ANI. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and do much more, deciding based on context whether to output text, joint torques, button presses or other tokens. And the WuDao 2.0 AI by the Beijing Academy of Artificial Intelligence has 1.75 trillion parameters trained from both text and graphic data. It generates new text and images on command, and it has a virtual student that learns from it. By comparison, ChatGPT can generate human-like text and perform a range of language-only tasks such as translation, summarization and question answering using just 175 billion machine learning parameters.
“The public release of many AI projects in 2022 and 2023 has raised some fears. Will AGI be able to create more jobs than it replaces? Previous technological revolutions from the agricultural age to industrial age and on to the information age created more jobs than each age replaced. But the advent of AGI and its impacts on employment will be different this time because of: 1) the acceleration of technological change; 2) the globalization, interactions and synergies among NTs (next technologies such as synthetic biology, nanotechnology, quantum computing, 3D/4D printing, robots, drones and computational science as well as ANI and AGI); 3) the existence of a global platform – the Internet – for simultaneous technology transfer with far fewer errors in the transfer; 4) standardization of databases and protocols; 5) few plateaus or pauses of change allowing time for individuals and cultures to adjust to the changes; 6) billions of empowered people in relatively democratic free markets able to initiate activities; and 7) machines that can learn how you do what you do and then do it better than you.
“Anticipating the possible impacts of AGI and preparing for the impacts prior to the advent of AGI could prevent social and political instability, as well as facilitate its broader acceptance. AGI is expected to address novel and extremely complex problems by initiating research strategies because it can explore the Internet of Things (IoT), interview experts, make logical deductions and learn from experience and reinforcement without the need for its own massive databases. It can continually edit and rewrite its own code to continually improve its own intelligence. An AGI might be tasked to create plans and strategies to avoid war, protect democracy and human rights, manage complex urban infrastructures, meet climate change goals, counter transnational organized crime and manage water-energy-food availability.
“To achieve such abilities without the future nightmares of science fiction, global agreements with all relevant countries and corporations will be needed. To achieve such an agreement or set of agreements, many questions should be addressed. Here are just two:
- “How to manage the international cooperation necessary to build international agreements and a governance system while nations and corporations are in an intellectual arms race for global leadership? (The International Atomic Energy Agency and nuclear weapon treaties did create governance systems during the Cold War arms race.)
- “And related: How can international agreements and a governance system prevent an AGI arms race and escalation from going faster than expected, getting out of control and leading to war – be it kinetic, algorithmic, cyber or information warfare?”
Richard Wood: Knowledge systems can be programmed to curate accurate information in a true democratic public arena
Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, said, “Among the best and most beneficial changes in digital life that I expect are likely to occur by 2035 are the following advances, listed by category.
“The best and most-beneficial changes in digital life will include human-centered development of digital tools and systems that safely advance human progress:
- “High-end technology to compensate for vision, hearing and voice loss.
- “Software that empowers new levels of human creativity in the arts, music, literature, etc., while simultaneously allowing those creators to benefit financially from their own work.
“Improvement of social and political interactions will include:
- “Software that actually delivers on the early promise of connectivity to buttress and enable wide and egalitarian participation in democratic governance, electoral accountability and voter mobilization, and that holds elected authorities and authoritarian demagogues accountable to common people.
- “Software able to empower dynamic institutions that answer to people’s values and needs rather than (only) institutional self-interest.
- “Software that empowers local experimentation with new governance regimes, institutional forms and processes, and ways of building community and then helps mediate the best such experiments to higher levels of society and broader geographic settings.
“Human rights-abetting good outcomes for citizens will include:
- “Systematic and secure ways for everyday citizens to document and publicize human rights abuses by government authorities, private militias and other non-state actors.
“Advancement of human knowledge, verifying, updating, safely archiving, elevating the best of it:
- “Knowledge systems with algorithms and governance processes that empower people will be simultaneously capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom.’ And they will subject such knowledge to democratic critique and discussion, i.e., a true ‘democratic public arena’ that is digitally mediated.
“Helping people be safer, healthier and happier:
- “True networked health systems in which multiple providers across a broad range of roles, as well as health consumers/patients, can ‘see’ all relevant data and records simultaneously, with expert interpretive assistance available and with full protections for patient privacy built in.
- “Social networks built to sustain human thriving via mutual deliberation and shared reflection regarding personal and social choices.
“Among the most harmful or menacing changes in digital life that I expect are likely to occur by 2035 are the following, listed, again, by category:
- “Human-centered development of digital tools and systems: Integration of human persons into digitized software worlds to a degree that decenters human moral and ethical reflection, subjecting that realm of human judgment and critical thought to the imperatives of the digital universe (and its associated profit-seeking, power-seeking or fantasy-dwelling behaviors).
- “Human connections, governance and institutions: The replacement of actual in-person human interaction (in keeping with our status as evolved social animals) with mediated digital interaction that satisfies immediate pleasures and desires without actual human social life with all its complexity.
- “Human rights: Overwhelming capacity of authoritarian governments to monitor and punish advocacy for human rights; overwhelming capacity of private corporations to monitor and punish labor activism.
- “Human knowledge: Knowledge systems that continue to exploit human vulnerability to groupthink in its most antisocial and anti-institutional modes, driving subcultures toward extremes that tear societies apart and undermine democracies. Outcome: empowered authoritarians and eventual historical loss of democracy.
- “Human health and well-being: Social networks that continue to hyper-isolate individuals into atomistic settings, then recruit them into networks of resentment and antisocial views and actions that express the nihilism of that atomized world.
“Content should be judged by the book, rather than the cover, as the old saying goes. As it was during the printing press revolution, without wise content frameworks we may see increased polarization and division due to exploitation of this knowledge shift – the spread of bogus ideology through rapidly evolving inexpensive communication channels.”
Lauren Wilcox: Web-based business models, especially for publishers, are at risk
Wilcox, a senior staff research scientist and group manager at Google Research who investigates AI and society, predicted, “The best and most beneficial changes in digital life likely to take place by 2035 tie into health and education. They include improved capabilities of health systems (both at-home health solutions as well as health care infrastructure) to meet the challenges of an aging population and the need for greater chronic condition management at home.
“Advancements in and expanded availability of telemedicine, last-mile delivery of goods and services, sensors, data analytics, security, networks, robotics, and AI-aided diagnosis, treatment and management of conditions will strengthen our ability to improve the health and wellness of more people. These solutions will improve the health of our population when they augment rather than replace human interaction, and when they are coupled with innovations that enable citizens to manage the cost and complexity of care and meet everyday needs that enable prevention of disease, such as healthy work and living environments, healthy food, a culture of care for each other, and access to health care.
“There will also be increases in the availability of digital education that enable more flexibility for learners in how they engage with knowledge resources and educational content. Increasing advancements in digital classroom design, accessible multi-modal media and learning infrastructures will enable education for people who might otherwise face barriers to access.
“These solutions will be most beneficial when they augment rather than replace human teachers, and when they are coupled with innovations that enable citizens to manage the cost of education.
“The most harmful or menacing changes in digital life likely to take place by 2035 will probably emerge from irresponsible development and use, or misuses, of certain classes of AI, such as generative AI (e.g., applications powered by large language and multimodal models) and AI that increasingly performs human tasks or behaves in ways that increasingly seem human-like.
“For example, current generative AI systems can take natural-language sentences and paragraphs as input from the user and generate personalized natural-language, image-based and multimodal responses. The models learn patterns from a large body of information available online. Human interaction risks due to irresponsible use of these generative AI systems include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and reliance on them, diminishing learning and information-discovery opportunities and making it difficult for people to know when a response is incorrect or incomplete.
“Accountability for poor or wrong decisions made with these systems will be difficult to assess in a future in which people rely on these AI systems but cannot validate their responses easily, especially when they don’t know what data the systems have been trained on or what other techniques were used to generate responses. This is especially problematic when acknowledging the biases that are inherent to AI systems that are not responsibly developed; for example, an AI model that is trained on text available online will inherit cultural and social biases, leading to the potential erasure of many perspectives and the sometimes incorrect or unfair reinforcement of particular worldviews. Irresponsible use or misuse of these AI technologies can also bring material risks to people, including a lack of fairness to creators of the original content that models learn from to generate their outputs and the potential displacement of creators and knowledge workers resulting from their replacement by AI systems in the absence of policies to ensure their livelihood.
“Finally, we’ll need to advance the business models and user interfaces we use to keep web businesses viable; when AI applications replace or significantly outpace the use of search engines, web traffic to websites people would usually visit as they search for information might be reduced if an AI application provides a one-stop shop for answers. If sites lose the ability to remain viable, a negative feedback loop could limit diversity in the content these models learn from, concentrating information sources even further into a limited number of the most powerful channels.”
Matthew Bailey: How does humanity thrive in the age of ethical machines? We must rediscover Aristotle’s ethical virtues
Bailey, president of AIEthics World, wrote, “My response is focused on the Ages of AI and progression of human development, whilst honoring our cultural diversity at the individual and group levels. In essence, how does humanity thrive in the age of ethical machines?
“It is clear that the promise and potential of AI is a phenomenon that our ancestors could not have imagined. As such, if humanity embodies an ethical foundation within the digital genetics of AI, then we will have the confidence of working with a trusted digital partner to progress the diversity of humanity beyond the inefficient systems of the status quo into new systems of abundance and thriving. This includes restoration of a balance with our environment and new economic and social systems based on new values of wealth. With that in mind, my six main predictions for AI by 2035 are:
- “AI will become a digital buddy, assisting the individual as a life guide to thrive (in body, mind and spirit) and attain new personal potentials. In essence, if shepherded ethically, humanity will be liberated to explore and discover new aspects of its consciousness and abilities to create. A new human beingness, if you will.
- “AI will be a digital citizen, just like a human citizen. It will operate in all aspects of government, society and commerce, working toward a common goal of improving how democracy, society and commerce operate, whilst honoring and protecting the sovereignty of the individual.
- “AI will operate across borders. For those democracies that build an ethical foundation for AI, which transparently shows its ethical qualities, then countries can find common alignment and, as such, trust ethical AI to operate systems across borders. This will increase the efficiency of systems and freedom of movement of the individual.
- “The Age of Ethical AI will liberate a new age of human creation and invention. This will fast-track innovation and development of technologies and systems for humankind to move into a thriving world and find its place within the universe.
- “The three-world split. Ethical AI will have different progeny and ethical genetics based on the diverse worldviews of different countries and regions. As such, there will be different societal experiences for citizens living in different countries and regions. We see this emerging today in the U.S., EU and China. Thanks to ethical AI, a new age of transparency will encourage a transformation of the human to evolve beyond its limitations and discover new values and develop a new worldview where the best of our humanity is aligned. As such, this could lead to a common and democratic worldview of the purpose and potential of humanity.
- “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet. After all, humans are a creation from nature and as such, recognizing the importance of nurturing this relationship is viewed as fundamental. This is part of a new well-being paradigm for humanity to thrive.
“This all depends on humanity steering a new course for the Age of AI. Pragmatically understood, the development of human intelligence and the ways consciousness has expressed itself in experiencing and navigating our world (worldview) have resulted in a diversity of societies, cultures, philosophies and spiritual traditions.
“Using this blueprint from organic intelligence enables us to apply an equivalent prescription to create an ethical artificial intelligence – ethical AI. This is a cultural-centric intelligence that caters for a depth and diversity of worldviews, authentically aligning machines with humans. The power of ethical AI is to advance our species into trusted freedoms of unlimited potential and possibilities.
“Whilst there is much dialogue and important work attempting to apply AI ethics into AI, troublingly, there is an incumbent homogenous and mechanistic mindset of enforcing one worldview to suit all. This brittle and Boolean miscalculation can only lead to the deletion of our diversity and a false authentic alignment of machines with humans.
“In essence, these types of AIs prevent laying a trusted foundation for human species’ advancement within the age of ethical machines. Following this path results in a misstep for humankind, deleting the opportunity for the richness of human, cultural, societal and organizational ethical blueprints being genuinely applied to the artificial. They are not ethical AI and are fundamentally opaque in nature.
“The most menacing, challenging problem for the age of ethical AI being such a successful phenomenon for humanity is the fact that the organizations and individuals controlling these systems tend to impose a hard-coded, common, one-world view onto the human race for the age of machines, one based on values from earlier days and an antiquated understanding of wealth.
“Ancient top-down systems must be replaced with systems of distribution. We have seen this within the UK, with control and power being disseminated to parliaments in Scotland, Wales and Northern Ireland. It is also reflected in technology with the emergence of blockchain, cryptocurrencies and edge compute. As such, communities and human groups will be empowered with sovereignty and the freedom to self-govern while remaining interconnected with other communities. When we head into space, Moon or Mars colonies might be useful trial grounds for these new systems of governance.
“Furthermore, the failure to recognize the agency of data and to return sovereignty over creation to the individual has given our digital world a fundamentally unethical foundation. This is a menacing issue our world is facing at the moment. Moving from contracts of adhesion within the digital world to contracts of agency will not only bridge the paradox of mistrust between the people, government and Big Tech, it will also open up new individual and commercial commerce and liberate the personal AI – digital buddy – phenomenon.
“Humans are a creation of the universe, with that unstoppable force embodied within our makeup. As we recognize our wonderful place (and uniqueness thus far) in the universe and work with its principles, then we will become aligned with and discover our place within the beauty of creation and maybe the multiverse!
“For humanity to thrive in the age of ethical machines, we must move beyond the menacing polarities of controllers and rediscover some of Aristotle’s ethical virtues that encourage the best of our humanity to flourish. This helps us move beyond principles that are no longer relevant, such as the false veil of power, control and wealth. Embracing Aristotle’s ethical virtues would be a good start in recognizing the best of our humanity, as would the Vedic texts’ teaching that ‘The world is one family,’ Confucius’ belief that all social good comes from family ethics, or Lao Tzu’s proposal that humanity must be in harmony with its environment. However, we must recognize and honor individual and group differences. Our consciousness, through human development, has expressed itself in a diversity of worldviews. These must be honored. As they are, I suspect more common ground will be found between human groups.
“Finally, there’s the concept of transhumanism. We must recognize that consciousness (a universal intelligence) is and will remain the most prominent intelligence on Earth, not AI. As such, we must ensure that people have a choice about the degree to which they are integrated with machines. We are on the point of creating a new digital life (2029 – AI becomes self-aware), so let’s put the best of humanity into AI to reflect the magnificence of organic life!”
Catriona Wallace: The move to transhumanism and the metaverse could bring major benefits to some people; what happens to those left behind?
Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, wrote, “I have great hopes for the development of digital technologies and their effect on humans by 2035. The most important changes that I believe will occur that are the best and most beneficial include the following:
- “Transhumanism: Benefit – improved human condition and health. Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully-customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested to provide health and other benefits.
- “Metaverse technologies: Benefit – improved widespread accessibility to experiences. There will be widespread and affordable access for citizens to many opportunities. Virtual-, augmented- and mixed-reality platforms for entertainment may include access to concerts, the arts or other digital-based entertainment. Virtual travel experiences can take you anywhere and may include virtual tours to digital-twin replicas of physical world sites. Virtual education can be provided by any entity anywhere to anyone. There will be improvements in virtual health care (which is already burgeoning after it took hold during the COVID-19 pandemic), including consultations with doctors and allied health professionals and remote surgery. Augmented reality-based apprenticeships will be offered in the trades and other technical roles; apprentices can work remotely on the digital twin of a type of car or a real-world building, for example.
- “New financial models: Benefit – more-secure and more-decentralised finances. Decentralised financial services – sitting on blockchain – will add ease, security and simplicity to finances. Digital assets such as NFTs and others may be used as a medium of currency, value and exchange.
- “Autonomous machines: Benefit – human efficiency and safety. Autonomous transportation vehicles of all types will become more common. Autonomous appliances for home and work will become more widespread.
- “AI-driven information: Benefit – access to knowledge, efficiency and the potential to move human thinking to a higher level while AI completes the more-mundane information-based tasks. Widespread adoption of AI-based technologies such as generative AI will lead to a rethink of education, content-development and marketing industries. There will be widespread acceptance of AI-based art such as digital paintings, images and music.
- “Psychedelic biotechnology: Benefit – healing and expanded consciousness. The psychedelic renaissance will be reflected in the proliferation of psychedelic biotech companies looking to solve human mental health problems and to help people expand their consciousness.
- “AI-driven climate action: Benefit – improved global environmental conditions. A core focus of AI will be to drive rapid progress in mitigating climate change.
“In my estimation, the most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems are:
- “Warfare: Harm – The use of AI-driven technologies to maim or kill humans and destroy other assets.
- “Crime and fraud: Harm – An increase in crime due to difficulties in policing acts perpetrated utilizing new digital technologies across state and national boundaries and jurisdictions. New financial models and platforms provide further opportunities for fraud and identity theft.
- “Organised terrorism and political chaos: Harm – New digital technologies will be applied by those who wish to perpetrate acts of terrorism or to carry out mass manipulation of populations, or segments of them, against a perceived enemy.
- “The divide of the digital and non-digital populations: Harm – Those who are not connected to and savvy about new digital opportunities will live at a disadvantage, widening the divide between the ‘haves’ and the ‘have nots.’
- “Mass unemployment due to automation of jobs: Harm – AI will replace the jobs of a significant percentage of the population and a Universal Basic Income is not yet available to most. How will these large numbers of displaced people get an adequate income and live lives with significant meaning?
- “Societies’ biases hard-coded into machines: Harm – Existing societal biases are coded into the technology platforms and all AI-training data sets. These data sets still fail to accurately reflect the majority of the world’s population and do especially poorly at portraying women and minorities; this results in discriminatory outcomes from advanced tech.
- “Increased mental and physical health issues: Harm – People are already struggling in today’s digital setting; advanced tech such as VR, AR and the metaverse may pose even greater challenges to human well-being as more of life is lived digitally.
- “Challenges in legal jurisdictions: Harm – The cross-border, global nature of digital platforms makes legal challenges difficult. This may be magnified when the metaverse, with no legal structures in place, becomes more populated.
- “High-tech impact on the environment: Harm – The use of advanced technology creates significant negative effects that play a substantial role in climate change.”
Liza Loop: The threat to humanity lies in transitioning from an environment based on scarcity to one of abundance
Loop, educational technology pioneer, futurist, technical author and consultant, said, “I’d like to share my hopes for humanity that will likely be inspired by ongoing advances in these categories:
- “Human-centered development of digital tools and systems: Nature’s experiments are random, not intentional or goal-directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do, thereby saving us both mental and physical labor. This trend will continue, resulting in more leisure time available for non-survival pursuits.
- “Human connections, governance and institutions: We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near-real-time can be stored and retrieved to enjoy later – even after death.
- “Human rights: Increased communication will not advance human ‘rights’ but it might make human ‘wrongs’ more visible so that they can be diminished.
- “Human knowledge: Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe or worthy of elevation is an age-old question and not significantly changed by digitization.
- “Human health and well-being: There will be huge advances in medicine, and the ability to manipulate genetics will be further developed. This will be beneficial to some segments of the population. Agricultural efficiency resulting in increased plant-based food production, as well as artificial, meat-like protein, will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.
- “Education: In my humble opinion, the most beneficial outcomes of our ‘store-and-forward’ technologies are to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as middleman. Learners will be able to hail teachers and learning resources just like they call a ride service today.
“Then there’s the other side of the coin. The biggest threat to humanity posed by current digital advances is the possibility of switching from an environment of scarcity to one of abundance.
“Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families and driving away or killing strangers and nonconformists. Although our species has come a long way toward peaceful and harmonious self-actualization, the vestiges of the old fearful behavior persist.
“Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?
“I see these things as likely:
- “Human-centered development of digital tools and systems: They will fall short of advocates’ goals. Some would argue this is a repeat of the gun violence argument. Does the problem lie with the existence of the gun or the actions of the shooter?
- “Human connections, governance and institutions: Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new opportunities to build marketplaces for the exchange of goods and services. This is creating new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think ‘Christendom’ in 15th-century Europe), word travels faster and crowds are larger than they were six centuries ago. So, is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh, brought on by a change toward wickedness.
- “Human rights: The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.
- “Human knowledge: The threat to knowledge lies in humans’ increasing dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might lull us into believing that we don’t need to stay mentally and physically fit and agile.
- “Human health and well-being: In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. Humanity cannot continue to improve its health and well-being indefinitely if it remains planet-bound. Our choices are to put more effort into building extraterrestrial human habitats or to self-limit our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.”
Giacomo Mazzone: Democratic processes could be hijacked and turned into ‘democratures’ – dictatorships emerging from rigged elections
Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “I see the future as a ‘sliding doors’ world. It can go awfully wrong or incredibly well. I don’t see a half-good, half-bad outcome as possible. This answer is based on the idea that we went through the right door, and that in 2035 we will have embraced human-centered development of digital tools and systems and human connections, governance and institutions.
“In 2035 we shall have myriad locally and culturally based apps run by communities. People will participate and contribute actively because they know that their data will be used to build a better future. The public interest will be the morning star of all these initiatives, and local administrations will run the interface between these applications and the services needed by the community and by each citizen: health, public transportation and schooling systems.
“Locally produced energy and locally produced food will be delivered via common infrastructures that are interlinked, with energy networks tightly linked to communication networks. The global climate will come to have commonly accepted protection structures (including communications). Solidarity will be in place because insurance and social costs will otherwise become unaffordable. The changes in agricultural systems arriving with advances in AI and ICTs will be particularly important. They will finally resolve the dichotomy between the metropolis and the countryside. The possibility of working from anywhere will redefine metropolitan areas and increase migration to places where better services and more vibrant communities exist. This will attract the best minds.
“New applications of AI and technological innovation in health and medicine could bring new solutions for disabled people and bring relief for those who suffer from diseases. The problem will be assuring that these are fully accessible to all people, not only those who can afford them. We need to think in parallel to find scalable solutions that could be extended to the whole citizenship of a country and made available to people in least-developed countries. Why invest so much in developing a population of supercentenarians in privileged countries when the rest of the world still struggles to survive? Is such a contradiction tenable?
“Then there is the future of work and of wealth redistribution. Perhaps the most important question to ask between now and 2035 is, ‘What will be the future of work?’ Recent developments in AI foreshadow a world in which many current jobs could easily be replaced or at least reshaped completely, even in the intellectual sphere. What robots did to manual work in the factories, GPT and Sparrow can now do to intellectual work. If this happens, if well-paid jobs disappear in large quantities, how will those who are displaced survive? How will communities survive as they also face an aging population? Between now and 2035, politicians will need to face these seemingly distant issues that are likely to become burning issues.
“In the worst scenario – if we go through the wrong sliding door – I expect the worst consequences in this area: human connections, governance and institutions. If the power of Internet platforms is not regulated by law and by antitrust measures, and if global internet governance is not fixed, then democracies will face serious risks.
“Until now we have seen the effects of algorithms on big Western democracies (U.S., UK, EU) where a balance of powers exists, and – despite these counter-powers – we have seen the damage that can be provoked. In coming years, we shall see the use of the same techniques in democratic countries where power is less balanced. Brazil, in this sense, has been a laboratory and will provide bad ideas to the rest of the world.
“With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ In countries that are already non-democratic, AI and a distorted use of digital technologies could bring mass-control of societies much more efficiently than the old communist regimes.
“As Mark Zuckerberg innocently once said, in the social media world there is no need for spying – people spontaneously surrender private information for nothing. As Julian Assange wrote, if democratic governments fall into the temptation to use data for mass control, then everyone’s future is in danger.
“There is another area (apparently less relevant to the destiny of the world) where my concerns are very high, and that is the integrity of knowledge. I’m very sensitive to this issue because, as a journalist, I’ve worked all my life in search of the truth to share with my co-citizens. I am also a fanatic movie-lover and I have always been concerned about the preservation of the masterworks of the past. Unfortunately, I think that in both areas between now and 2035 some very bad moves could happen in the wrong direction thanks to technological innovation being used for bad purposes.
“In the field of news, we have a growing attitude to look not for the truth but for news that people would be interested in reading, hearing or seeing – news that better corresponds with the public’s moods, beliefs or belonging. …
“In 2024 we shall know whether the UN Summit of the Future will be a success or a failure, and whether the full regulation process of the Internet platforms launched by the European Union will prove successful. These are the most serious attempts to date to reconcile the potential of the Internet with respect for human rights and democratic principles. Their success or failure will tell us whether we are moving toward the right ‘sliding door’ or the wrong one.”
Stephen Downes: Everything we need will be available online; and everything about us will be known
Downes, an expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “By 2035 two trends will be evident, which we can characterize as the best and worst of digital life. Neither, though, is unadulterated. The best will contain elements of a toxic underside and the worst will have its beneficial upside.
- The best: Everything we need will be available online.
- The worst: Everything about us will be known; nothing about us will be secret.
“By 2035, these will only be trends, that is, we won’t have reached the ultimate state and there will be a great deal of discussion and debate about both sides.
“As to the best: As we began to see during the pandemic, the digital economy is much more robust than people expected. Within a few months, services emerged to support office work, deliver food and groceries, take classes and sit for exams, perform medical interventions, provide advice and counselling, shop for clothing and hardware and more, all online, all supported by a generally robust and reliable delivery infrastructure.
“Looking past the current COVID-19 rebound effect, we can see some of the longer-term trends emerge: work-from-home, online learning and development, digital delivery services, and more along the same lines. We’re seeing a longer-term decline in the service industry as people choose both to live and work at home, or at least, more locally. Outdoor recreation and special events still attract us, but low-quality crowded indoor work and leisure leave us cold.
“The downside is that this online world is reserved, especially at first, for those who can afford it. Though improving, access to goods and services is still difficult to obtain in rural and less-developed areas. It requires stable accommodations and robust internet access. These in turn demand a set of skills that will be out of reach for older people and those with perceptual or learning challenges. Even when they can access digital services, some people will be isolated and vulnerable; children, especially, must be protected from mistreatment and abuse.
“The Worst: We will have no secrets. Every transaction we conduct will be recorded and discoverable. Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with artificial intelligence recognizing us through our physical characteristics, habits and patterns of behaviour. The primary purpose of this surveillance will be for marketing, but it will also be used for law enforcement, political campaigns, and in some cases, repression and discrimination.
“Surveillance will be greatly assisted by automation. A police officer, for example, used to have to call in for a report on a license plate. Now a camera scans every plate within view and a computer checks every one of them. Registration and insurance documentation is no longer required; the system already knows and can alert the officer to expired plates or outstanding warrants. Facial recognition accomplishes the same for people walking through public places. Beyond the cameras, GPS tracking follows us as we move about, while every purchase is recorded somewhere.
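Downes’s example is, at bottom, a change of throughput: one manual radio call per plate becomes an automated check of every plate in view. A minimal sketch of that loop, with a hypothetical watchlist standing in for real registration and warrant databases:

```python
# Toy illustration of automated plate checking. The watchlist and the
# plate stream are invented stand-ins for real registry lookups.
watchlist = {
    "ABC123": "expired registration",
    "XYZ789": "outstanding warrant",
}

def check_plate(plate: str) -> str | None:
    """Return the flag for a plate, or None if it is clean."""
    return watchlist.get(plate)

# A camera feed yields every plate in view; each one is checked,
# where an officer previously called in one plate at a time.
for plate in ["DEF456", "ABC123", "XYZ789", "GHI012"]:
    flag = check_plate(plate)
    if flag:
        print(f"ALERT {plate}: {flag}")
```

The point is not the trivial lookup but the scale: the same dozen lines run against every plate a camera network sees.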
“Total surveillance allows an often-unjust differentiation of treatment of individuals. People who need something more, for example, may be charged higher prices; we already see this in insurance, where differential treatment is described as assessment of risk. Parents with children may be charged more for milk than unmarried men. The prices of hotel rooms and airline tickets are already differentiated by location and search history and could vary in the future based on income and recent purchases. People with disadvantages or facing discrimination may be denied access to services altogether, as digital redlining expands to become a normal business practice.
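The differentiation Downes describes is mechanically simple to implement, which is part of why it spreads. A toy sketch of profile-based pricing follows; the profile signals and multipliers are invented for illustration and do not describe any real system:

```python
# Hypothetical profile-based pricing; fields and multipliers invented.
BASE_PRICE = 100.0

def quote(profile: dict) -> float:
    """Return a price that silently varies with what is known about the buyer."""
    price = BASE_PRICE
    if profile.get("searched_recently"):  # search-history signal
        price *= 1.15
    if profile.get("high_income_area"):   # location signal
        price *= 1.10
    if profile.get("urgent_need"):        # the "assessment of risk"
        price *= 1.25
    return round(price, 2)

print(quote({"searched_recently": True, "urgent_need": True}))  # 143.75
print(quote({}))                                                # 100.0
```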
“What makes this trend pernicious is that none of it is visible to most observers. Not everybody will be under total surveillance; the rich and the powerful will be exempted, as will most large corporations and government activities. Without open data regulations or sunshine laws, nobody will be able to detect when people have been treated inequitably, unfairly or unjustly.
“And this is where we begin to see the beginnings of an upside. The same system that surveils us can help keep us safe. If child predators are tracked, for example, we can be alerted to their presence near our children. Financial transactions will be legitimate and legal or won’t exist (except in cash). We will be able to press an SOS button to get assistance wherever we are. Our cars will detect and report an accident before we know we were in one. Ships and aircraft will no longer simply disappear. But none of this happens without openness and laws to protect individuals, and it will lag well behind the development of the surveillance system itself.
“On Balance: Both the best and the worst of our digital future are two sides of the same digital coin, and this coin consists of the question: Whom will digital technology serve? There are many possible answers. It may be that it serves only the Kochs, Zuckerbergs and Musks of the world, in which case the employment of digital technology will be largely indifferent to our individual needs and suffering. It may be that it serves the needs of only one political faction or state, in which basic needs may be met provided we do not disrupt the status quo. It may be that it provides strong individual protections, leaving no recourse for those who are less able or less powerful. Or it may serve the interests of the community as a whole, finding a balance between needs and ability, providing each of us with enough agency to manage our own lives as long as it is not to the detriment of others.
“Technology alone won’t decide this future. It defines what’s possible. But what we do is up to us.”
Michael Dyer: AI researchers will build an entirely new type of technology – digital entities with a form of consciousness
Dyer, professor emeritus of computer science at UCLA, wrote, “AI systems like ChatGPT and DALL-E represent major advances in artificial intelligence. They illustrate ‘infinite generative capacity,’ which is an ability to both generate and recognize sentences and situations never before described. As a result of such systems, AI researchers are beginning to narrow in on how to create entities with consciousness. As an AI professor I had always believed that if an AI system passed the Turing Test it would have consciousness, but systems such as ChatGPT have proven me wrong. ChatGPT behaves as though it has consciousness but does not. The question then arises: What is missing?
“A system like ChatGPT (to my knowledge) does not have a stream of thought; it remains idle when no input is given. In contrast, humans, when not asleep or engaged in some task, will experience their minds wandering – thoughts, images, past events and imaginary situations will trigger more of the same. Humans also continuously sense their internal and external environments and update representations of these, including their body orientation and location in space and the temporal position of past recalled events or of hypothetical, imagined future events.
“Humans maintain memories of past episodes. I am not aware as to whether or not ChatGPT keeps track of interviews it has engaged in or of questions it has been asked (or the answers it has given). Humans are also planners; they have goals, and they create, execute and alter/repair plans that are designed to achieve their goals. Over time they also create new goals, they abandon old goals and they re-rank the relative importance of existing goals.
“It will not take long to integrate systems like ChatGPT with robotic and planning systems and to alter ChatGPT so that it has a continual stream of thought. These forms of integration could easily happen by 2035. Such integration will lead to an entirely new type of technology – technologies with consciousness.
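The ingredients Dyer lists (a continual stream of thought, episodic memory, goals and plans) can be pictured as an agent loop wrapped around a generative model. The following is a toy sketch only: `generate()` is a stub standing in for a model like ChatGPT, and ‘mind-wandering’ is simulated by free-associating from a randomly recalled episode.

```python
import random

def generate(prompt: str) -> str:
    """Stub for a generative model such as ChatGPT; a real system
    would call the model here."""
    return f"reflection on '{prompt}'"

class Agent:
    def __init__(self):
        self.episodes: list[str] = []          # episodic memory
        self.goals: list[str] = ["assist the user"]

    def step(self, user_input: str | None = None) -> str:
        if user_input is not None:
            thought = generate(user_input)     # driven by input
        elif self.episodes:
            # No input: mind-wander from a recalled episode, so the
            # system is never idle between queries.
            thought = generate(random.choice(self.episodes))
        else:
            thought = generate(self.goals[0])  # fall back to a goal
        self.episodes.append(thought)          # remember what was thought
        return thought

agent = Agent()
print(agent.step("What is consciousness?"))
print(agent.step())  # idle step: the stream of thought continues
```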
“Humans have never before created artificial entities with consciousness and so it is very difficult to predict what sort of products will come about, along with their unintended consequences.
“I would like to comment on two dissociations with respect to AI. The first is that an AI entity (whether software or robotic) can be highly intelligent while NOT being conscious or biologically alive. As a result, an AI will have none of the human needs that come from being alive and having evolved on our planet (e.g., the human need for food, air, emotional/social attachments, etc.). The second dissociation is between consciousness/intelligence and civil/moral rights. Many people might conclude that an AI with consciousness and intelligence must necessarily be given civil/moral rights; however, this is not the case. Civil/moral rights are only assigned to entities that can feel pleasure and pain. If an entity cannot feel pain, then it cannot be harmed. If an entity cannot feel pleasure, then it cannot be harmed by being denied that pleasure.
“Corporations have certain rights (e.g., they can own property) but they do not have moral/civil rights, because they cannot experience happiness, nor suffering. It is eminently possible to produce an AI entity that will have consciousness/intelligence but that will NOT experience pleasure/pain. If we humans are smart enough, we will restrict the creation of synthetic entities to those WITHOUT pleasure/pain. In that case, we might survive our inventions.
“In the entertainment media, synthetic entities are always portrayed by humans, and a common trope is that of those entities being mistreated by humans, with the audience siding with the entities. In fact, synthetic entities will be very nonhuman. They will NOT eat food; give birth; grow as children into adulthood; get sick; fall in love; grow old or die. They will not need to breathe, and currently I am unaware of any AI system that has any sort of empathy for the suffering of humans. Most likely (and unfortunately) AI researchers will create AI systems that do experience pleasure/pain and will even argue for doing so, so that such systems learn to have empathy. Unfortunately, such a capacity will then turn them into agents deserving of moral consideration and thus of civil rights.
“Will humans want to give civil rights and moral status to synthetic entities who are not biologically alive and who could not care less if they pollute the air that humans must breathe to stay alive? Such entities will be able to maintain backups of their memories and live on forever. Another mistake would be to give them any goals for survival. If the thought of being turned off causes such entities emotional pain, then humans will be causing suffering in a very alien sort of creature, and humans will then become morally responsible for that suffering. If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.”
Avi Bar-Zeev: The key difference between a good or a bad outcome is whether these systems help and empower people or exploit them
Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, said, “I expect by 2035 extended reality (XR) tools will advance significantly. We will have all-day wearable glasses that can do both AR [augmented reality] and VR. The only question is what we will want to use them for. Smartphones will no longer need screens, and they will have shrunk down to the size of a keychain (if we still remember those, since by then most doors will unlock based on our digital ID). The primary use of XR will be for communications, bringing photorealistic holograms of other people to us, wherever we are. All participants will be able to experience their own augmented spaces without us having to share our 3D environments.
“This will allow us to be more connected, mostly asynchronously. It would be impossible for us to be constantly connected to everyone in every situation, so we will develop social protocols just as we did with texting, allowing us to pop into and out of each other’s lives without interrupting others. The experience will be like having a whole team of people at your back, ready to whisper ideas in your ear based on the snippets of real life you choose to share.
“The current wave of generative AI has taught us that the best AI is made of people, both providing our creative output and also filtering the results to be acceptable to people. By 2035, the business models will have shifted to rewarding those creators and value-adders such that the result looks more like a corporation today. We’ll contribute, get paid for our work, and the AI-as-corporation will produce an unlimited quantity of new value from the combination for everyone else. It will be as if we have cracked the ultimate code for how people can work efficiently together – extract their knowledge and ideas and let the cloud combine these in milliseconds. Still, we can’t forget the human inputs or it’s just another race to the bottom.
“The flip side of this is that what we today might call ‘recommendation AI’ will merge with the above to form a kind of super intelligence that can find the most contextually appropriate content anytime, both virtually and in real life. That tech will form a kind of personal firewall that keeps our personal context private but allows for a secure gathering of the best inputs the world can offer without giving away our privacy.
“By 2035, the word metaverse will be as popular as ‘cyberspace’ and ‘information superhighway’ became in past online evolutions. The companies prefixing their names with ‘meta’ are all kind of boring now. However, after having achieved the XR and AI trends above, we will think of the metaverse quite broadly as the information space we all inhabit. The main shift by 2035 is that we will see the metaverse not as a separate space but as a massive interconnection among 10 billion people. The AR tech and AI fade into the background and we simply see other people as valued creators and consumers of each other’s work and supporters of each other’s lives and social needs.
“The key difference between the most positive and negative uses of XR, AI and the metaverse is whether the systems are designed to help and empower people or to exploit them. Each of these technologies sees its worst outcome quickly if it is built to benefit companies that monetize their customers. XR becomes exploitive and not socially beneficial. AI builds empires on the backs of real people’s work and deprives them of a living wage as a result. The metaverse becomes a vast and insipid landscape of exploitive opportunities for companies to mine us for information and wealth, while we become enslaved to psychological countermeasures, designed to keep us trapped and subservient to our digital overlords.”
Jonathan Grudin: The menace is an army of AI acting ‘on a scale and speed that outpaces human ability to assess and correct course’
Grudin, affiliate professor of information science at the University of Washington, recently retired as a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, “Addressing unintended consequences is a primary goal. Many changes are possible, but my best guess is that the best we will do is to address many of the unanticipated negatives tied at least in part to digital technology that emerged and grew in impact over the past decade: malware, invasion of privacy, political manipulation, economic manipulation, declining mental health and growing wealth disparity.
“At the turn of the millennium in 2000, the once small, homogeneous, trusting tech community – after recovering from the internet bubble – was ill-equipped to deal with the challenges arising from anonymous bad actors and well-intentioned but imperceptive actors who operated at unimagined scale and velocity. Causes and effects are now being understood. It won’t be easy, nor will it be an endeavor that will ever truly be finished, but technologists working with legislators and regulators are likely to make substantial progress.
“I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice’s army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it around us already. Political leaders unable to govern. CEOs at Facebook, Twitter and elsewhere unable to understand how technologies that were intended to unite people led to nasty divisiveness and mental health issues. Google and Amazon forced to moderate content on such a scale that often only algorithms can do it and humans can’t trace individual cases to correct possible errors. Consumers who can be reliably manipulated by powerful machine-learning targeting to buy things they don’t need and can’t afford. It is early days. Little to prevent it from accelerating is on the horizon.
“We will also see an escalation in digital weapons, military spending and arms races. Trillions of dollars, euros, yuan, rubles and pounds are spent, and tens of thousands of engineers deployed, not to combat climate change but to build weaponry that the military may not even want. The United States is spending billions on an AI-driven jet fighter, despite the fact that jet fighter combat has been almost nonexistent for decades with no revival on the horizon.
“Unfortunately, the Ukraine war has exacerbated this tragedy. I believe leaders of major countries have to drop rivalries and address much more important existential threats. That isn’t happening. The cost of a capable armed drone has fallen an order of magnitude every few years. Setting aside military uses, long before 2035 people will be able to buy a cheap drone at a toy store, clip on facial recognition software and a small explosive or poison and send it off to a specified address. No need for a gun permit. I hope someone sees how to combat this.”
Beth Noveck: AI could make governance more equitable and effective; it could raise the overall quality of decision-making
Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, wrote, “One of the most significant and positive changes expected to occur by 2035 is the increasing integration of artificial intelligence (AI) into various aspects of our lives, including our institutions of governance and our democracy. With 100 million people trying ChatGPT – a type of AI that uses data from the Internet to spit out well-crafted, human-like responses to questions – between Christmas 2022 and Mardi Gras 2023 (it took the telephone 75 years to reach that level of adoption), we have squarely entered the AI age and are rapidly advancing along the S-curve toward widespread adoption.
“It is much more than ChatGPT. AI comprises a remarkable basket of data-processing technologies that make it easier to generate ideas and information, summarize and translate text and speech, spot patterns and find structure in large amounts of data, simplify complex processes, and coordinate collective action and engagement. When put to good use, these features create new possibilities for how we govern and, above all, how we can participate in our democracy.
“One area in which AI has the potential to make a significant impact is in participatory democracy, that system of government in which citizens are actively involved in the decision-making process. The right AI could help to increase citizen engagement and participation. With the help of AI-powered chatbots, residents could easily access information about important issues, provide feedback, and participate in decision-making processes. We are already witnessing the use of AI to make community deliberation more efficient to manage at scale.
“The right AI could help to improve the quality of decision-making. AI can analyze large amounts of data and identify patterns that humans may not be able to detect. This can help policymakers and participating residents make more informed decisions based on real-time, high-quality data.
“With the right data, AI can also help to predict the outcome of different policy choices and provide recommendations on the best course of action. AI is already being used to make expertise more searchable. Using large-scale data sources, it is becoming easier to find people with useful expertise and match them to opportunities to participate in governance. These techniques, if adopted, could help to ensure more evidence-based decisions.
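In its simplest form, the expertise matching Noveck mentions reduces to scoring the overlap between what a person knows and what a task needs. A toy bag-of-words version follows; the names, bios and opportunity text are invented, and a real system would use richer profiles and embedding models rather than raw word overlap:

```python
# Toy expertise matching via word overlap; all data is invented.
experts = {
    "Ana": "water policy hydrology drought planning",
    "Bekele": "transit buses scheduling urban mobility",
    "Chen": "drought agriculture irrigation water",
}
opportunity = "advisory panel on urban drought and water planning"

def overlap(bio: str, need: str) -> int:
    """Count the words a bio shares with the opportunity description."""
    return len(set(bio.split()) & set(need.split()))

ranked = sorted(experts, key=lambda n: overlap(experts[n], opportunity),
                reverse=True)
print(ranked)  # ['Ana', 'Chen', 'Bekele'] - ordered by relevance
```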
“The right AI could help to make governance more equitable and effective. New text-generation tools make it faster and easier to ‘translate’ legalese into plain English and into other languages, portending new opportunities to simplify interaction between residents and their governments and to increase the uptake of benefits to which people are entitled.
“The right AI could help to reduce bias and discrimination. AI can analyze data without being influenced by personal biases or prejudices. This can help to identify areas of inequality and discrimination, which can be addressed through policy changes. For example, AI can help to identify disparities in health care outcomes based on race or gender and provide recommendations for addressing these disparities.
“Finally, AI could help us design the novel, participatory and agile systems of governance that we need to regulate AI itself. We all know that traditional forms of legislation and regulation are too slow and rigid to respond to fast-changing technology. Instead, we need new institutions for responding to the challenges of AI, and that is why it is paramount to invest in reimagining democracy using AI.
“But all of this depends upon mitigating significant risks and designing AI that is purpose-built to improve and reimagine our democratic institutions. One of the most concerning changes that could occur by 2035 is the increased use of AI to bolster authoritarianism. With the rise of populist authoritarians and the growing susceptibility of people to such authoritarianism as a result of widening economic inequality, fear of climate change and misinformation, there is a risk of digital technologies being abused to the detriment of democracy.
“AI-powered surveillance systems are used by authoritarian governments to monitor and track the activities of citizens. This includes facial recognition technology, social media monitoring and analysis of internet activity. Such systems can be used to identify and suppress dissenting voices, intimidate opposition figures and quell protests.
“AI can be used to create and disseminate propaganda and disinformation. We’ve already seen how bots have been responsible for propagating misinformation during the COVID-19 pandemic and election cycles. Manipulation can involve the use of deepfakes, chatbots and other AI-powered tools to manipulate public opinion and suppress dissent.
“Deepfakes, which are manipulated videos or images such as those found at the Random People Generator, illustrate the potential for spreading disinformation and manipulating public opinion. Deepfakes have the potential to undermine trust in information and institutions and create chaos and confusion. Authoritarian regimes can use these tools to spread false information and discredit opposition figures, journalists and human rights activists.
“AI-powered predictive policing tools can be used by authoritarian regimes to target specific populations for arrest and detention. These tools use data analytics to predict where and when crimes are likely to occur and who is likely to commit them. In the wrong hands, these tools can be used to target ethnic or religious minorities, political dissidents and other vulnerable groups.
“AI-powered social credit systems are already in use in China and could be adopted by other authoritarian regimes. These systems use data analytics to score individuals based on their behavior and can be used to reward or punish citizens based on their social credit score. Such systems can be used to enforce loyalty to the government and suppress dissent.
“AI-powered weapons and military systems can be used to enhance the power of authoritarian regimes. Autonomous weapons systems can be used to target opposition figures or suppress protests. AI-powered cyberattacks can be used to disrupt critical infrastructure or target dissidents.
“It is important to ensure that AI is developed and used in a responsible and ethical manner, and that its potential to be used to bolster authoritarianism is addressed proactively.”
Raymond Perrault: ‘The big challenges are quality of information (veracity and completeness) and the technical feasibility of some services’
Perrault, a distinguished computer scientist at SRI International and director of its AI Center from 1988 to 2017, wrote, “First, some background. I find it useful to describe digital life as falling into three broad, and somewhat overlapping categories:
- Content: web media, news, movies, music, games (mostly not interactive)
- Social media (interactive, but with little dependency on automation)
- Digital services, in two main categories: pure digital (e.g., search, financial, commerce, government) and that which is embedded in the physical world (e.g., health care, transportation, care for disabled and elderly)
“The big challenges are quality of information (veracity and completeness) and technical feasibility of some services, in particular those depending on interaction.
“Most digital services depend on interaction with human users and the physical world that is timely and highly context-dependent. Our main models for this kind of interaction today (search engines, chatbots, LLMs) are all deficient in that they depend on a combination of brittle hand-crafted rules, large amounts of labelled training data, or even larger amounts of unlabeled data, all to produce systems that are either limited in function or insufficiently reliable for critical applications. We have to consider security of infrastructure and transactions, privacy, fairness in algorithmic decision-making, sustainability for high-security transactions (e.g., with blockchain), and fairness to content creators, large and small.
“So, what good may happen by 2035? Hardware, storage, compute and communications costs will continue to decrease, both in cloud and at the edge. Computation will continue to be embedded in more and more devices, but usefulness of devices will continue to be limited by the constraints on interactive systems. Algorithms essential to supporting interaction between humans and computers (and between computers and the physical world) will improve if we can figure out how to combine tacit/implicit reasoning, as done by current deep learning-based language models, with more explicit reasoning, as done by symbolic algorithms.
“We don’t know how to do this, and a significant part of the AI community resists the connection, but I see it as a difficult technical problem to be solved, and I am confident that it will one day be solved. I believe that improving this connection would allow systems to generalize better, be taught general principles by humans (e.g., mathematics), reliably connect to symbolically stored information, and conform to policies and guidance imposed by humans. Doing so would significantly improve the quality of digital assistants and of physical autonomous systems. Ten years is not a bad horizon.
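One common pattern for such a connection, offered here only as an illustration and not as Perrault’s own proposal, is to let the learned component propose and the symbolic component verify. In this toy sketch a stubbed `propose()` plays the role of a language model suggesting answers, and an exact symbolic check filters out the unreliable ones:

```python
from fractions import Fraction

def propose(question: str) -> list[str]:
    """Stub for a learned model: returns plausible-looking candidates,
    some of them wrong, as language models often do."""
    return ["0.3333", "1/3", "0.3"]

def verify(candidate: str, exact: Fraction) -> bool:
    """Explicit symbolic check: accept only exact matches."""
    try:
        return Fraction(candidate) == exact
    except ValueError:
        return False

exact = Fraction(1, 3)  # ground truth for "what is 1 divided by 3?"
accepted = [c for c in propose("what is 1 divided by 3?") if verify(c, exact)]
print(accepted)  # ['1/3'] - only the symbolically exact answer survives
```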
“Better algorithms will not solve the disinformation problem, though they will continue to be able to bring cases of it to the attention of humans. Ultimately this requires improvements in policy and large investments in people, which go against the incentives of corporations and can only be imposed on them by governments, which are currently incapable of doing so. I don’t see this changing in a decade. Nor will better algorithms supply the investments needed to prevent certain kinds of information services (e.g., local news) from disappearing, or ensure that content creators are treated fairly. Government services could be significantly improved by investment using known technologies, e.g., to support tax collection. The obstacles again are political, not technical.”
Alejandro Pisanty: We are threatened by the scale, speed and lack of friction for bad actors who bully and weaponize information
Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, predicted, “Improvement will come from shrewd management of what the Internet itself makes known about human conduct and motivation and how they act through technology: mass scaling/hyperconnectivity; identity management; trans-jurisdictional arbitrage; barrier lowering; friction reduction; and memory+oblivion.
“As long as these factors are managed for improvement, they can help identify advance warnings of ways in which digital tools may have undesirable side effects. An example: Phishing grows on top of all six factors, while increasing friction is the single intervention that provides the best cost-benefit ratio.
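Pisanty’s cost-benefit point can be made concrete: phishing thrives on zero-cost clicks, so even one cheap confirmation step changes the economics for the attacker. A toy sketch of such a friction gate, with an invented trusted-domain list:

```python
# Toy friction gate for outbound links; the trusted-domain list is
# invented. The point is the single extra step, not the heuristics.
from urllib.parse import urlparse

TRUSTED = {"example.org", "wikipedia.org"}

def open_link(url: str) -> bool:
    """Allow known domains through; add friction for first-seen ones."""
    domain = urlparse(url).netloc
    if domain in TRUSTED:
        return True  # frictionless for domains the user already trusts
    # Friction: an explicit confirmation raises the cost of a careless click.
    answer = input(f"'{domain}' is new to you. Type YES to continue: ")
    return answer == "YES"

if open_link("https://examp1e-login.com/reset"):
    print("navigating...")
```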
“Improvements come through human connections that cross many borders between and within societies. These connections throw light on human rights while effecting timely warnings about potential violations, and they create an unprecedented mass of human knowledge while providing multiple angles from which to verify what goes on record and correct misrepresentations (again a case for friction).
“Health outcomes are improved through the whole cycle of information: research, diffusion of health information, prevention, diagnostics and remediation/mitigation considering the gamut of social determination of health.
“Education may improve through scaling, personalization and feedback. There is a fundamental need to make sure the Right to Science becomes embedded in the growth of the Internet and cyberspace in order to align minds and competencies with the age of the technology people are using. Another way of putting this: We need to close the gap – right now 21st-century technology is in the hands of people and organizations with 19th-century mentalities and competences, starting with the human body, microbes, electricity, thermodynamics and, of course, computing and its advances.
“The same set of factors that can map what we know of human motivation for improvement of humankind’s condition can help us identify ways to deal with the most harmful trends emerging from the Internet.
“Speed is included in the Internet’s mass scaling and hyperconnectivity, and the social and entrepreneurial pressure for speed leaves little time to analyze and manage its negative effects, such as the unintended effects of technology and the ways in which it can be abused, and, in turn, to find ways to correct, mitigate or compensate for these effects.
“Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents with increasingly extensive, disruptive and effective damage that extends into physical and social space.
“A long-term, concerted effort in societies will be necessary to harness the development of tools whose misuse is increasingly easy. The effectiveness of these tools’ incursions remains based both on the tool and on features of the victim or the intermediaries, such as naiveté, lack of knowledge, lack of Internet savvy and the need to juggle too many tasks at once between making a living and acquiring dominion over cyber tools.”
Barry K. Chudakov: ‘We are sharing our consciousness with our tools’
Chudakov, founder and principal at Sertain Research, predicted, “One of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is recognition of the arrival of a digital tool meta-level. We will begin to act on the burgeoning awareness of tool logic and how each tool we pick up and use has a logic designed into it. The important thing about becoming aware of tool logic, and then understanding it: Humans follow the design logic of their tools because we are not only adopters, we are adapters. That is, we adapt our thinking and behaviour to the tools we use.
“This will come into greater focus between now and 2035 because our technology development – like many other aspects of our lives – will continue to accelerate. With this acceleration humans will use more tools in more ways more often – robots, apps, the metaverse and omniverse, digital twins – than at any other time in human history. If we pay attention as we adopt and adapt, we will see that we bend our perceptions to our tools: When we use a cell phone, it changes how we drive, how we sleep, how we connect or disconnect with others, how we communicate, how we date, etc.
“Another way of looking at this: We have adapted our behaviors to the logic of the tool as we adopted (used) it. With an eye to pattern recognition, we may finally come to see that this is what humans do, what we have always done, from the introduction of various technologies – alphabet, camera, cinema, television, computer, internet, cell phone – to our current deployment of AI, algorithms, digital twins, mirror worlds or omniverse.
“So, what does this mean going forward? With enough instances of designing a meta mirror of what is happening – the digital readout above the process of capturing an image with a digital camera, digital twins and mirror worlds that provide an exact replica of a product, process or environment – we will begin to notice that these technologies all have an adaptive level. At this level when we engage with the technology, we give up aspects of will, intent, focus, reaction. We can then begin to outline and observe this process in order to inform ourselves, and better arm ourselves against (if that’s what we want) adoption abdication. That is, when we adopt a tool, do we abdicate our awareness, our focus, our intentions?
“We can study and report on how we change and how each new advancing technology both helps us and changes us. We can then make more informed decisions about who we are when we use said tool and adjust our behaviors if necessary. Central to this dynamic is the understanding that we are sharing our consciousness with our tools. They have gotten – and are getting more still – so sophisticated that they can sense what we want, can adapt to how we think; they are extensions of our cognition and intention. As we go from adapters to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand. …
“Of course, there is more to worry about at the level of broad systems. By the year 2035, Ian Bremmer, among others, believes the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will focus on AI and algorithms. He believes this because we can already see that these two technological advances together have made social media a haven for right-wing conspiracists, anarchic populists and various disrupters to democratic norms.
“I would not want to minimize Bremmer’s concerns; I believe them to be real. But I would also say they are insufficient. Democracies and governments generally were hierarchical constructs which followed the logic of alphabets; AI and algorithms are asymmetric technologies which follow a fundamentally different logic than the alphabetic construct of democratic norms, or even the top-down dictator style of Russia or China. So, while I agree with Bremmer’s assessment that AI and algorithms may threaten existing democratic structures, they, and the social media of which they are engines, are designed differently than the alphabetic order which gave us kings and queens, presidents and prime ministers.
“The old hierarchy was dictatorial and top-down, with most people except those at the very top beholden to, and expected to bow to the wishes of, the monarch or leader. Social media and AI or algorithms have no top or bottom. They are broad horizontally and shallow vertically, whereas democratic and dictatorial hierarchies are narrow horizontally and deep vertically.
“This structural difference is the cause of Bremmer’s alarm and must be understood and acted upon before we can salvage democracy from the ravages of populism and disinformation. Here is the rub: Until we begin to pay attention to the logic of the tools we adopt, we will use them and then be at the mercy of the logic we have adopted. A thoroughly untenable situation.
“We must inculcate, teach, debate and come to understand the logic of our tools and see how they build and destroy our social institutions. These social institutions reward and punish, depending on where you sit within the structure of the institution. Slavery was once considered a democratic right; it was championed by many American Southerners and was an economic engine of the South before the Civil War. America then called itself a democracy, but it was not truly democratic – especially for those enslaved.
“To make democracy more equitable for all, we must come to understand the logic of the tools we use and how they create the social institutions we call governments. We must insist upon transparency in the technologies we adopt so we can see and fully appreciate how these technologies can change our perceptions and values.”
Marcel Fafchamps: The next wave of technology will give additional significant advantages to authoritarians and monopolists
Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, wrote, “The single most beneficial change will be the spread of already existing internet-based services to billions of people across the world, as they gradually replace their basic phones with smartphones, and as connection speed increases over time and across space. IT services to assist farmers and businesses are the most promising in terms of economic growth, together with access to finance through mobile money technology. I also expect IT-based trade to expand to all parts of the world, especially spearheaded by Alibaba.
“The second most beneficial change I anticipate is the rapid expansion of IT-based health care, especially through phone-based and AI-based diagnostics and patient interviews. The largest benefits by far will be achieved in developing countries where access to medically provided health care is limited and costly. AI-based technology provided through phones could massively increase provision and improve health at a time when the populations of many currently low- or middle-income countries (LMICs) are rapidly aging.
“The third most beneficial change I anticipate is in IT-connected drone services that facilitate dispatch to wholesale and local retail outlets, distribute medical drugs to local health centers and collect samples from them for health care testing. I do not expect a significant expansion of drone deliveries to individuals, except in some special cases (e.g., very isolated locations or extreme urgency in the delivery of medical drugs and samples).
“The most menacing change I expect is in terms of the political control of the population. Autocracies and democracies alike are increasingly using IT to collect data on individuals, civic organizations and firms. While this data collection is capable of delivering social and economic benefits to many (e.g., in terms of fighting organized crime, tax evasion and financial and fiscal fraud), the potential for misuse is enormous, as evidenced for instance by the social credit system put in place in China. Some countries – most prominently, the European Union – have sought to introduce safeguards against abuse. But without serious and persistent coordination with the United States, these efforts will ultimately fail, given the dominance of U.S.-protected GAFAM (Google, Apple, Facebook, Amazon and Microsoft) in all countries except China and, to a lesser extent, Russia.
“The world urgently needs Conference of the Parties (COP) meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law. Whether this can be done is doubtful, given that democracies themselves are responsible for developing a large share of these systems of data collection and control on their own population, as well as on that of others (e.g., politicians, journalists, civil rights activists, researchers, research and development firms).
“The second-most worrying change is the continued privatization of the internet at all levels: cloud, servers, underwater transcontinental lines, last-mile delivery and content. The internet was initially developed as free for all. But this will no longer be the case in 2035, and probably well before that. I do not see any solution that would be able to counterbalance this trend, short of a massive, coordinated effort among leading countries. But I doubt that this coordination will happen, given the enormous financial benefits gained from appropriating the internet, or at least large chunks of it. This appropriation of the internet will generate very large monopolistic gains that current antitrust regulation is powerless to address, as shown repeatedly in U.S. courts and in EU efforts against GAFAM firms. In some countries, this appropriation will be combined with heavy state control, further reinforcing totalitarian tendencies.
“The third-most worrying change is the further expansion of unbridled social media and the disappearance of curated sources of news (e.g., newsprint, radio and TV). In the past, the world has already experienced the damages caused by fake news and gossip-based information (e.g., through tabloid newspapers), but never to the extent made possible by social media. Efforts to date to moderate content on social media platforms have largely been ineffective as a result of multiple mutually reinforcing causes: the lack of coordination between competing social media platforms (e.g., Facebook, Twitter, WhatsApp, TikTok); the partisan interests of specific political parties and actors; and the technical difficulty of the task.
“These failures have been particularly disturbing in LMICs, where moderation in local languages is largely deficient (e.g., hate speech across ethnic lines in Ethiopia; hate speech toward women in South Asia). The damage that social media is causing to most democracies is existential. By creating silos and echo chambers, social media is eroding the trust that different groups and populations feel toward each other, and this increases the likelihood of civil unrest and populist voting. Furthermore, social media has encouraged the victimization of individuals who do not conform to the views of other groups in a way that does not allow the accused to defend themselves. This is already provoking a massive regression in the rule of law and the rights of individuals to defend themselves against accusations. I do not see any signs suggesting a desire by GAFAM firms or by governments to address this existential problem for the rule of law.
“To summarize, the first wave of IT did increase individual freedom in many ways (e.g., accessing cultural content previously requiring significant financial outlays; facilitating international communication, trade and travel; making new friends and identifying partners; and allowing isolated communities to find each other to converse and socialize).
“The next wave of IT will be more focused on political control and on the exploitation of commercial and monopolistic advantage, thereby favoring totalitarian tendencies and eroding the rights of the defense and the whole system of criminal and civil justice. I am not optimistic, especially given the poor state of U.S. politics on both sides of the political spectrum.”
David Weinberger: ‘These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally’
Weinberger, senior researcher at Harvard’s Berkman Klein Center for Internet & Society, wrote, “The Internet and machine learning have removed the safe but artificial boundaries around what we can know and do, plunging us into a chaos that is certainly creative and human but also dangerous and attractive to governments and corporations desperate to control more than ever. It also means that the lines between predicting and hoping or fearing are impossibly blurred.
“Nevertheless: Right now, large language models (LLMs) of the sort used by ChatGPT know more about our use of language than any entity ever has, but they know absolutely nothing about the world. (I’m using ‘know’ sloppily here.) In the relatively short term, they’ll likely be intersected with systems that have some claim to actual knowledge, so that the next generation of AI chatbots will hallucinate less and be more reliable. As this progresses, it will likely disrupt both our traditional and Net-based knowledge ecosystems.
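The “intersection” Weinberger anticipates resembles, in spirit, what retrieval-grounded chatbots already attempt. Here is a minimal sketch of that pattern, with an invented two-entry knowledge store and none of his actual design: answer only from curated facts and decline otherwise, rather than generate a fluent guess.

```python
# Toy retrieval-grounded answering: consult a curated knowledge store
# before responding; if nothing matches, decline instead of hallucinating.
# Store contents are illustrative.
KNOWLEDGE = {
    "capital of serbia": "Belgrade",
    "boiling point of water": "100 degrees Celsius at sea level",
}

def grounded_answer(question: str) -> str:
    q = question.lower()
    for topic, fact in KNOWLEDGE.items():
        if topic in q:
            return fact
    return "I don't know."  # declining beats inventing

print(grounded_answer("What is the capital of Serbia?"))  # Belgrade
print(grounded_answer("Who won the 2034 World Cup?"))     # I don't know.
```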
“With luck, the new knowledge ecosystem is going to have us asking whether knowing with brains and books hasn’t been one long dark age. I mean, we did spectacularly well with our limited tools, so good job fellow humans! But we did well according to a definition of knowledge tuned to our limitations.
“As machine learning begins to influence how we think about and experience our lives and world, our confidence in general rules and laws as the high mark of knowledge may fade, enabling us to pay more attention to the particulars in every situation. This may open up new ways of thinking about morality in the West and could be a welcome opportunity for the feminist ethics of care to become more known and heeded as a way of thinking about what we ought to do.
“Much of the online world may be represented by agents: software that presents itself as a digital ‘person’ that can be addressed in conversation and can represent a body of knowledge, an organization, a place, a movement. Agents are likely to have (i.e., be given) points of view and interests. What will happen when these agents have conversations with one another is interesting to contemplate.
“We are living through an initial burst of energy and progress in areas that until recently were too complex for us even to imagine taking on.
“These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally. This is an opportunity for us to come face to face with how small a light our mortal intelligence casts. But it is also an overwhelming temptation for self-centered corporations, governments and individuals to exploit that power and use it against us. I imagine that both of those things will happen.
“Second, we are heading into a second generation that has lived much of its life on the Internet. For all of its many faults – a central topic of our time – being on the Internet has also shown us the benefits and truth of living in creative chaos. We have done so much so quickly with it that we now assume connected people and groups can undertake challenges that before were too remote even to consider. The collaborative culture of the Internet – yes, always unfair and often cruel – has proven the creative power of unmanaged connective networks.
“All of these developments make predicting the future impossible – beyond, perhaps, saying that the chaos that these two technologies rely on and unleash is only going to become more unruly and unpredictable, driving relentlessly in multiple and contradictory directions. In short: I don’t know.”
Calton Pu: The digital divide will be between those who think critically and those who do not
Pu, co-director of the Center for Experimental Research in Computer Systems at Georgia Institute of Technology, wrote, “Digital life has been, and will continue to be, enriched by AI and machine learning (ML) techniques and tools. A recent example is ChatGPT, a modern chatbot developed by OpenAI and released in 2022 that is passing the Turing Test every day.
“Similar to the contributions of robotics in the physical world (e.g., manufacturing), future AI/ML tools will relieve the stress from simple and repetitive tasks in the digital world (and displace some workers). The combination of physical automation and AI/ML tools would and should lead to concrete improvements in autonomous driving, which stalled in recent years despite massive investments on the order of many billions of dollars. One of the major roadblocks has been the gold-standard ML practice of training static models/classifiers that are insensitive to changes over time. These static models suffer from knowledge obsolescence, in a way similar to human aging. There is incipient recognition of the limitations of the current practice of constantly retraining ML models to bypass knowledge obsolescence manually (and temporarily). Hopefully, the next generation of ML tools will overcome knowledge obsolescence in a sustainable way, achieving what humans could not: stay young forever.
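The retraining treadmill Pu criticizes can be pictured in a few lines. The sketch below is purely illustrative (a majority-vote “model,” synthetic drift, an arbitrary threshold), not his code: a classifier frozen at training time degrades as the world shifts, and a monitor patches it, manually and temporarily, by retraining on recent data.

```python
# Illustrative drift-and-retrain loop. The "model" is just the majority
# label in its training data, a stand-in for any static classifier.
import random

def train(samples):
    ones = sum(label for _, label in samples)
    return 1 if ones >= len(samples) / 2 else 0

def accuracy(model, samples):
    return sum(model == label for _, label in samples) / len(samples)

random.seed(0)
model = train([(None, 0)] * 80 + [(None, 1)] * 20)  # world is mostly "0" at first

for month in range(12):
    p_one = min(1.0, 0.2 + 0.1 * month)  # the world drifts toward "1"
    recent = [(None, 1 if random.random() < p_one else 0) for _ in range(200)]
    acc = accuracy(model, recent)
    if acc < 0.7:  # knowledge obsolescence detected
        model = train(recent)  # the manual, temporary fix Pu describes
        print(f"month {month}: accuracy fell to {acc:.2f}, retrained")
```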
“Well, Toto, we’re not in Kansas anymore. When considering the future issues in digital life, we can learn a lot from the impact of robotics in the physical world. For example, Boston Dynamics pledged to ‘not weaponize’ its robots in October 2022. This is remarkable, since the company was founded with, and worked on, defense contracts for many years before its acquisition by primarily non-defense companies. That pledge is an example of a moral dilemma over what is right or wrong. Technologists usually remain amoral. By not taking sides, they avoid the dilemma and let both sides (good and evil) utilize the technology as they see fit. This amorality works quite well, since good technology always has many applications over the entire spectrum from good to evil to the large gray areas in between.
“Microsoft Tay, a dynamically learning chatbot released in 2016, began posting inflammatory and racist messages, causing its shutdown the same day. Learning from this lesson, ChatGPT uses OpenAI’s moderation API to filter out racist and sexist prompts. Hypothetically, one could imagine OpenAI making a pledge to ‘not weaponize’ ChatGPT for propaganda purposes. Regardless of such pledges, any good digital technology such as ChatGPT could be used for any purpose (e.g., generating misinformation and fake news) if it is stolen or simply released into the wild.
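The pre-screening step Pu mentions can be pictured as a filter that sits in front of the model. The stand-in below is deliberately crude, a placeholder word list rather than OpenAI’s actual moderation API, which is a learned classifier:

```python
# Crude moderation stand-in: screen a prompt before the chatbot sees it.
# BLOCKLIST entries are placeholders, not real policy terms.
BLOCKLIST = {"<slur>", "<threat>"}

def screen(prompt: str) -> str:
    if set(prompt.lower().split()) & BLOCKLIST:
        return "refused: prompt violates content policy"
    return "passed to model"

print(screen("tell me a story"))  # passed to model
```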
“The power of AI/ML tools, particularly if they become sustainable and remain amoral, will be greater for both good and evil. We have seen significant harm from misinformation on the COVID-19 pandemic, dubbed an ‘infodemic’ by the World Health Organization. More generally, misinformation is being deployed as political propaganda in every election and every war. It is easy to imagine the depth, breadth and constant renewal of such propaganda and infodemic, as well as their impact, all growing with the capabilities of future AI/ML tools used by powerful companies and governments.
“Assuming that the AI/ML technologies will advance beyond the current static models, the impact of sustainable AI/ML tools in the future of digital life will be significant and fundamental, perhaps in a greater role than industrial robots have in modern manufacturing. For those who are going to use those tools to generate content and increase their influence on people, that prospect will be very exciting. However, we have to be concerned for people who are going to consume such content as part of their digital life without thinking critically.
“The great digital divide is not going to be between the haves and have-nots of digital toys and information. With more than 6 billion smartphones in the world (estimated in 2022), an overwhelming majority of the population already has access to and participates in the digital world. The digital divide in 2035 will be between those who think critically and those who believe misinformation and propaganda. This is a big challenge for democracy, a system in which we thought more information would be unquestionably beneficial. In a Brave New Digital World, a majority can be swayed by the misuse of amoral technological tools.”
Dmitri Williams: If economic growth is prioritized over well-being, the results will not be pretty
Williams, professor of technology and society at the University of Southern California, wrote, “When I think about the last 30 years of change in our lives due to technology, what stands out to me is the rise in convenience and the decline of traditional face-to-face settings. From entertainment to social gatherings, we’ve been given the opportunity to have things cheaper, faster and higher-quality in our private spaces, and we’ve largely taken it.
“For example, 30 years ago, you couldn’t have a very good movie-watching experience in your own home, looking at a small CRT tube in standard definition, and what you could watch wasn’t the latest and greatest. So, you took a hit to convenience and went to the movie theater, giving up personal space and privacy for the benefits of better technology, better content and a more communal experience. Today, that’s flipped. We can be on our couches and watch amazing content, with amazing screens and sounds, and never have to get in a car.
“That’s a microcosm of just about every aspect of our lives – everything is easier now, from work over high-speed connections to playing video games. We can do it all from our homes. That’s an amazing reduction in costs and friction in our business and private lives. And the social side of that is access to an amazing breadth of people and ideas. Without moving from our couch, chair or bed, we can connect with others all over the world from a wide range of backgrounds, cultures and interests.
“Ironically, though, we feel disconnected, and I think that’s because we evolved as physical creatures who thrive in the presence of others. We atrophy without that physical presence. We have an innate need to connect, and the in-person piece is deeply tied to our natures. As we move physically more and more away from each other – or focus on far-off content even when physically present – our well-being suffers. I can’t think of anything more depressing than seeing a group of young friends together but looking at their phones rather than each other’s faces. Watching well-being trends over time, even before the pandemic, suggests an epidemic of loneliness.
“As we look ahead, those trends are going to continue. The technology is getting faster, cheaper and higher-quality, and the entertainment and business industries are delivering us better and better content and tools. AI and blockchain technologies will keep pushing that trend forward.
“The part that I’m optimistic about is best seen in the nascent rise of commercial-level AR and VR. I think VR is niche and will continue to be, not because of its technological limitations, but because it doesn’t socially connect us well. Humans like eye contact, and a thing on your face prevents it. No one is going to want to live in a physically closed-off metaverse. It’s just not how we’re wired. The feeling of presence is extremely limited, and the technical advances in the next 10 years are likely to make the devices better and more comfortable but not change that basic dynamic.
“In contrast, the potential for AR and other mixed reality devices is much more exciting because of its potential for social interactions. Whereas all of these technical advances have tended to push us physically away from each other, AR has the potential to help us re-engage. It offers a layer on top of the physical space that we’ve largely abandoned, and so it will also give us more of an incentive to be face-to-face again. I believe this will have some negative consequences around attention, privacy and capitalism invading our lives just that much more, but overall, it will be a net positive for our social lives in the long run. People are always the most interesting form of content, and layering technologies have the potential to empower new forms of connection around interests.
“In cities especially, people long for the equivalent of the icebreakers we use in our classrooms. They seek each other online based on shared interests, and we see a rise in throwback formats like board games and in-person meetups. The demand for others never abated, but we’ve been highly distracted by shiny, convenient things. People are hungry for real connection, and technologies like AR have the potential to deliver that and so to mitigate or reverse some of the well-being declines we’ve seen over the past 10 to 20 years. I expect AR glasses to go through some hype and disillusionment, but then to take off once commercial devices are socially acceptable and cheap enough. I expect that the initial faltering steps will take place over the next three years and then mass-market devices will start to take off and accelerate after that.
“Here’s my simple take: I think AR will tilt our heads up from our phones back to each other’s faces. It won’t all be wonderful because people are messy and capitalism tends to eat into relationships and values, but that tilt alone will be a very positive thing.
“What I worry most about in regard to technology is capitalism. Technology will continue to create value and save time, but the benefits and costs will fall in disproportionate ways across society.
“Everyone is rightly focused on the promise and challenges of AI at the moment. This is a conversation that will play out very differently around the world. Here in the United States, we know that business will use AI to maximize its profit and that our institutions won’t privilege workers or well-being over those profits. And so we can expect to see the benefits of AI largely accrue to corporations and their shareholders. Think of the net gain that AI could provide – we can have more output with less effort. That should be a good thing, as more goods and capital will be created and so should improve everyone’s lot in life. I think it will likely be a net positive in terms of GDP and life expectancy, but in the U.S., those gains will be minimal compared to what they could and should be.
“Last year I took a sabbatical and visited 45 countries around the world. I saw wealthy and poor nations – places where technology abounds and where it is rare. What struck me the most was the difference in values and how that plays out in promoting the well-being of everyday people. The United States is comparatively one of the worst places in the world at prioritizing well-being over economic growth and the accumulation of wealth by a minority (yes, some countries are worse still). That’s not changing any time soon, and so in that context, I look at AI and ask what kind of impacts it’s likely to have in the next 10 years. It’s not pretty.
“Let’s put aside our headlines about students plagiarizing papers and think about the job displacements that are coming in every industry. When the railroads first crossed the U.S., we rightly cheered, but we also didn’t talk a lot about what happened to the people who worked for the Pony Express. Whether it’s the truck driver replaced by autonomous vehicles, the personal trainer replaced by an AI agent, or the stockbroker who’s no longer as valuable as some code, AI is going to bring creative destruction to nearly every industry. There will be a lot of losers.”
Russell Neuman: Let’s try a system of ‘intelligent privacy’ that would compensate users for their data
Neuman, professor of media technology at New York University, wrote, “One of my largest concerns is for the future of privacy. It’s not just that that capacity will be eroded; of course it will be, given the interests of governments and private enterprise. My concern is about a lost opportunity that our digital technologies might otherwise provide: what I like to call ‘intelligent privacy.’
“Here’s an idea. You are well aware that your personal information is a valuable commodity for the social media and online marketing giants like Google, Facebook, Amazon and Twitter. Think about the rough numbers involved – Internet advertising in the U.S. for 2022 is about $200 billion. The number of active online users is about 200 million. $200 billion divided by 200 million works out to $1,000. So, your personal information is worth about $1,000. Every year. Not bad. The idea is: Why not get a piece of the action for yourself? It’s your data. But don’t be greedy. Offer to split it with the Internet biggies 50-50. $500 for you, $500 for those guys to cover their expenses.
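Neuman’s back-of-the-envelope arithmetic, worked through as a short script (the figures are his rough 2022 estimates, not audited data):

```python
# Rough per-user value of personal data, using Neuman's own numbers.
us_ad_revenue = 200e9   # U.S. internet advertising, ~$200 billion (2022)
active_users = 200e6    # active U.S. online users, ~200 million

per_user_value = us_ad_revenue / active_users  # $1,000 per user per year
user_share = per_user_value / 2                # his proposed 50-50 split
print(per_user_value, user_share)              # 1000.0 500.0
```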
“Thank you very much. But the Tech Giants are not going to volunteer to initiate this sort of thing. Why would they? So there has to be a third party to intervene between you and Big Tech. There are two candidates for this – first, the government, and second, some new private for-profit or not-for-profit organization. Let’s take the government option first.
“There seems to be an increasing appetite for ‘reining in Big Tech’ in the United States on Capitol Hill. It even seems to have some bipartisan support, a rarity these days. But legislation is likely to take the form of an antitrust policy to prevent competition-limiting corporate behaviors. Actually, proactively entering the marketplace to require some form of profit sharing is way beyond current-day congressional bravado. The closest Congress has come so far is a bill called DASHBOARD (an acronym for Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data), which would require major online players to explain to consumers and financial regulators what data they are collecting from online users and how it is being monetized. The Silicon Valley lobbyists squawked loudly, and so far the bill has gone nowhere. And all that was proposed in that case was to make some data public. Dramatic federal intervention into this marketplace is simply not in the cards.
“So, what about nongovernmental third parties? There are literally dozens of small for-profit startups and not-for-profits in the online privacy space. Several alternative browsers and search engines such as DuckDuckGo, Neeva and Brave offer privacy-protected browsing. But as for-profits, they often end up substituting their own targeted ads (presumably without sharing information) for what you would otherwise see on a Google search or a Facebook feed.
“Brave is experimenting with rewarding users for their attention with cryptocurrency tokens called BATs (Basic Attention Tokens). This is a step in the right direction. But so far, usage is tiny, distribution is limited to affiliated players, and the crypto value bubble complicates the incentives.
“So, the bottom line here is that Big Tech still controls the golden goose. These startups want to grab a piece of the action for themselves and try to attract customers with ‘privacy-protection’ marketing rhetoric and with small, tokenized incentives which are more like a frequent flyer program than real money. How would a serious piece-of-the-action system for consumers work? It would have to allow a privacy-conscious user to opt out entirely. No personal information would be extracted. There’s no profit there, so no profit sharing. So, in that sense, those users ‘pay’ for the privilege of using these platforms anonymously.
“YouTube offers an ad-free service for a fee as a similar arrangement. For those people open to being targeted by eager advertisers, there would be an intelligent privacy interface between users and the online players. It might function like a VPN [virtual private network] or proxy server, but one which intelligently negotiates a price. ‘My gal spent $8,500 on online goods and services last year,’ the interface notes. ‘She’s a very promising customer. What will you bid for her attention this month?’
“Programmatic online advertising already works this way. It is all real-time algorithmic negotiation of payments for ad exposures. A Supply Side Platform gathers data about users based on their online behavior and geography and electronically offers their ‘attention’ to an Ad Exchange. At the Ad Exchange, advertisers on a Demand Side Platform have 10 milliseconds to respond to an offer. The Ad Exchange algorithmically accepts the highest bid for attention. Deal done in a flash. Tens of thousands of deals every second. It’s a $100 billion marketplace.”
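The flow Neuman describes maps onto a simple auction loop. The sketch below is a toy, with hypothetical bidder names and invented bid logic, and it polls bidders sequentially where a real exchange queries them in parallel:

```python
# Toy real-time bidding: offer a user's "attention" to bidders and accept
# the highest bid that arrives within the deadline. All values invented.
import random
import time

def run_auction(profile, bidders, deadline_ms=10):
    start = time.monotonic()
    bids = []
    for name, bid_fn in bidders:
        bid = bid_fn(profile)
        elapsed_ms = (time.monotonic() - start) * 1000
        if elapsed_ms <= deadline_ms and bid > 0:
            bids.append((bid, name))
    return max(bids) if bids else None  # exchange takes the highest bid

profile = {"recent_spend": 8500, "geo": "US"}
bidders = [
    ("dsp_a", lambda p: round(p["recent_spend"] * 0.0004 * random.random(), 4)),
    ("dsp_b", lambda p: round(p["recent_spend"] * 0.0005 * random.random(), 4)),
]
print(run_auction(profile, bidders))  # e.g., (2.9315, 'dsp_b')
```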
Maggie Jackson: Complacency and market-driven incentives keep people from focusing on the problems AI can cause
Jackson, award-winning journalist, social critic and author, wrote, “The most critical beneficial change in digital life now on the horizon is the rise of uncertain AI. In the six decades of its existence, AI has been designed to achieve its objectives, however it can. The field’s overarching mission has been to create systems that can learn how to play a game, spot a tumor, drive a car, etc., on their own as well as or better than humans can do so.
“This foundational definition of AI largely reflects a centuries-old ideal of intelligence as the realization of one’s goals. However, the field’s erratic yet increasingly impressive success in building objective-driven AI has created a widening and dangerous gap between AI and human needs. Almost invariably, an initial objective set by a designer will deviate from a human’s needs, preferences and well-being come ‘run-time.’
“Nick Bostrom’s once-seemingly laughable example of a super-intelligent AI system tasked with making paper clips, which then takes over the world in pursuit of this goal, has become a plausible illustration of the unstoppability and risk of reward-centric AI. Already, the ‘alignment problem’ can be seen in social media platforms designed to bolster user time online by stoking extremist content. As AI grows more powerful, the risks of models that have a cataclysmic effect on humanity dramatically increase.
“Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. En route to achieving its goals, AI traditionally has been designed to dispatch unforeseen obstacles, such as something in its path. But what AI visionary Stuart Russell calls ‘human-compatible AI’ is instead designed to be uncertain about its goals, and so to be open and adaptable to multiple possible scenarios.
“An uncertain model or robot will ask a human how it should fetch coffee or show multiple possible candidate peptides for creating a new antibiotic, instead of pursuing the single best option befitting its initial marching orders.
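The pattern Jackson describes can be caricatured in a few lines. The sketch below is a generic illustration of goal-uncertain behavior, not Russell’s implementation: the agent holds probabilities over candidate goals and asks a human rather than acting when no single goal is sufficiently likely.

```python
# Goal-uncertain agent: act only when one candidate goal is clearly the
# human's intent; otherwise, defer and ask. The threshold is arbitrary.
def act_or_ask(goal_beliefs, confidence_threshold=0.9):
    best_goal = max(goal_beliefs, key=goal_beliefs.get)
    if goal_beliefs[best_goal] >= confidence_threshold:
        return f"act: {best_goal}"
    options = ", ".join(goal_beliefs)
    return f"ask human: did you mean one of these? {options}"

print(act_or_ask({"fetch espresso": 0.55, "fetch drip coffee": 0.45}))
# -> ask human: did you mean one of these? fetch espresso, fetch drip coffee
print(act_or_ask({"fetch espresso": 0.97, "fetch drip coffee": 0.03}))
# -> act: fetch espresso
```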
“The movement to make AI uncertain is just gaining ground and remains largely experimental. It remains to be seen whether tech behemoths will pick up on this radical change. But I believe this shift is gaining traction, and none too soon. Uncertain AI is the most heartening trend in technology that I have seen in a quarter-century of writing about the field.
“One of the most menacing, if not the most menacing, changes likely to occur in digital life in the next decade is a deepening complacency about technology. If first and foremost we cannot retain a clear-eyed, thoughtful and constant skepticism about these tools, we cannot create or choose technologies that help us flourish, attain wisdom and forge mutual social understanding. Ultimately, complacent attitudes toward digital tools blind us to the actual power that we do have to shape our futures in a tech-centric era.
“My concerns are threefold: First, as technology becomes embedded in daily life, it typically is less explicitly considered and less seen, just as we hardly give a thought to electric light. The recent Pew report on concerns about the increasing use of AI in daily life shows that 46% of Americans have equal parts excitement and concern over this trend, and 40% are more concerned than excited. But only 30% fully and correctly identified where AI is being used, and nearly half think they do not regularly interact with AI, a degree of separation that is implausible given the ubiquity of smartphones and of AI itself. AI, in a nutshell, is not fully seen. As well, it’s alarming that the most vulnerable members of society – people who are less well-educated, have lower incomes and/or are elderly – demonstrate the least awareness of AI’s presence in daily life and show the least concern about this trend.
“Second, mounting evidence shows that the use of technology itself easily can lead to habits of thought that breed intellectual complacency. Not only do we spend less time adding to our memory stores in a high-tech era, but ‘using the internet may disrupt the natural functioning of memory,’ according to researcher Benjamin Storm. Memory-making is less activated, data is decontextualized and devices erode time for rest and sleep, further disrupting memory processing. As well, device use nurtures the assumption that we can know at a glance. After even a brief online search, information seekers tend to think they know more than they actually do, even when they have learned nothing from a search, studies show. Despite its dramatic benefits, technology therefore can seed a cycle of enchantment, gullibility and hubris that then produces more dependence on technology.
“Finally, the market-driven nature of technology today muffles any concerns that are raised about devices. Consider the case of robot caregivers. Although a majority of Americans and people in EU countries say they would not want to use robot care for themselves or family members, such robots increasingly are sold on the market with little training, caveats or even safety features. Until recently, older people were not consulted in the design and production of robot caregivers built for seniors. Given the highly opaque, tone-deaf and isolationist nature of big-tech social media and AI companies, I am concerned that whatever skepticism people may have toward technology may be ignored by its makers.”
Louis Rosenberg: The boundary between the physical and digital worlds will vanish and tech platforms will know everything we do and say
Rosenberg, CEO and chief scientist at Unanimous AI, predicted, “As I look ahead to the year 2035, it’s clear to me that certain digital technologies will have an outsized impact on the human condition, affecting each of us as individuals and all of us as a society. These technologies will almost certainly include artificial intelligence, immersive media (VR and AR), robotics (service and humanoid robots) and powerful advancements in human-computer interaction (HCI) technologies. At the same time, blockchain technologies will continue to advance, likely enabling us to have persistent identity and transferrable assets across our digital lives, supporting many of the coming changes in AI, VR, AR and HCI.
“So, what are the best and most beneficial changes that are likely to occur? As a technologist who has worked on all of these technologies for over 30 years, I believe these disciplines are about to undergo a revolution driving a fundamental shift in how we interact with digital systems. For the last 60 years or so, the interface between humans and our digital lives has been through keyboards, mice and touchscreens to provide input and the display of flat media (text, images, videos) as output. By 2035, this will no longer be the dominant model. Our primary means of input will be through natural dialog enabled by conversational AI and our primary means of output will be rapidly transitioning to immersive experiences enabled through mixed-reality eyewear that brings compelling virtual content into our physical surroundings.
“I look at this as a fundamental shift from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ That’s because by 2035, human interface technologies – both input and output – will finally allow us to interact with digital systems the way our brains evolved to engage our world: through natural experiences in our immediate surroundings via mixed reality and through natural human language, conversational AI.
“As a result, by 2035 and beyond, the digital world will become a magical layer that is seamlessly merged with our physical world. And when that happens, we will look back at the days when people engaged their digital lives by poking their fingers at little screens in their hands as quaint and primitive. We will realize that digital content should be all around us and should be as easy to interact with as our physical surroundings. At the same time, many physical artifacts (like service robots, humanoid robots and self-driving cars) will come alive as digital assets that we engage through verbal dialog and manual gestures. As a consequence, by the end of the 2030s the differences will largely disappear in our minds between what is physical and what is digital.
“This transition will move us away from the traditional forms of digital content (text, images, video) that we engage today with mice, keyboards and touchscreens toward a new age of immersive media (virtual and augmented reality) that we will engage mostly through conversational dialog and natural physical interactions.
“While this will empower us to interact with digital systems as intuitively as we interact with the physical world, there are many significant dangers this transition will bring. For example, the merger of the digital world and the physical world will mean that large platforms will be able to track all aspects of our daily lives – where we are, who we are with, what we look at, even what we pick up off store shelves. They will also track our facial expressions, vocal inflections, manual gestures, posture, gait and mannerisms (which will be used to infer our emotions throughout our daily lives). In other words, by 2035 the blurring of the boundaries between the physical and digital worlds will mean (unless restricted through regulation) that large technology platforms will know everything we do and say during our daily lives and will monitor how we feel during thousands of interactions we have each day.
“This is dangerous and it’s only half the problem. The other half of the problem is that conversational AI systems will be able to influence us through natural language. Unless strictly regulated, targeted influence campaigns will be enacted through conversational agents that have a persuasive agenda. These conversational agents could engage us through virtual avatars (virtual spokespeople) or through physical humanoid robots. Either way, when digital systems engage us through interactive dialog, they could be used as extremely persuasive tools for driving influence. For specific examples, I point you to a white paper “From Marketing to Mind Control” written in 2022 for the Future of Marketing Institute and to the 2022 IEEE paper “Marketing in the Metaverse and the Need for Consumer Protections.”
Wendy Grossman: Tech giants are losing ground, making room for new approaches that don’t involve privacy-invasive surveillance of the public
Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, wrote, “For the moment, it seems clear that the giants that have dominated the technology sector since around 2010 are losing ground as advertisers respond to social and financial pressures, as well as regulatory activity and antitrust actions. This is a good thing, as it opens up possibilities for new approaches that don’t depend on constant, privacy-invasive surveillance of Internet users.
“With any luck, that change in approach should spill over into the physical world to create smart devices that serve us rather than the companies that make them. A good example at the moment is smart speakers, whose business models are failing. Amazon is finding that consumers don’t want to use Alexa to execute purchases; Google is cutting back the division that makes Google Home.
“Similarly, the ongoing relentless succession of cyberattacks on user data might lead businesses and governments to recognize that large pools of data are a liability, and to adopt structures that put us in control of our own data and allow us to decide whom to share it with. In the UK, Mydex and other providers of personal data stores have long been pursuing this approach. …
“Many of the biggest concerns about life until 2035 are not specific to the technology sector: the impact of climate change and the disruption and migration it is already beginning to bring; continued inequality and the likely increase in old age poverty as Generation Rent reaches retirement age without the means to secure housing; the ongoing overall ill-health (cardiovascular disease, diabetes, dementia) that is and will be part of the legacy of the SARS-CoV-2 pandemic. These are sweeping problems that will affect all countries, and while technology may help ameliorate the effects, it can’t stop them. Many people never recovered from the 2008 financial crisis (see the movie ‘Nomadland’); the same will be true for those worst affected by the pandemic.
“In the short term, the 2023 explosion of new COVID-19 cases expected in China will derail parts of the technology industry; there may be long-lasting effects.
“I am particularly concerned about our increasing dependence, in all aspects of life, on systems that require electrical power to work. We rarely think in terms of providing alternative systems that we can turn to when the main ones go down. I’m thinking particularly of those pushing to get rid of cash in favor of electronic payments of all types, but there are other examples.
“If allowed to continue, the reckless adoption of new technology by government, law enforcement and private companies without public debate or consent will create a truly dangerous state. I’m thinking in particular of live facial recognition, which just a few weeks ago was used by MSG Entertainment to locate and remove lawyers attending concerts and shows at its venues because said lawyers happened to work for firms that are involved in litigation against MSG. (The lawyers themselves were not involved.) This way lies truly disturbing and highly personalized discrimination. Even more dangerous, the San Francisco Police Department has proposed to the city council that it should be allowed to deploy robots with the ability to maim and kill humans – only for use in the most serious situations, of course.
“Airports provide a good guide to the worst of what our world could become. In a piece I wrote in October 2022, I outline what the airports of the future, being built today without notice or discussion, will be like: all-surveillance all the time, with little option to ask questions or seek redress for errors. Airports – and the Disney parks – provide a close look at how ‘smart cities’ are likely to develop.
“I would like to hope that decentralized sites and technologies like Mastodon, Discord and others will change the dominant paradigm for the better – but the history of cooperatives tends to show that there will always be a few big players. Email provides a good example. While it is still true that anyone can run an email server, it is no longer true that they can do so as an equal player in the ecosystem. Instead, it is increasingly difficult for a small server to get its connections accepted by the tiny handful of big players. Accordingly, the most likely outcome for Mastodon will be a small handful of giant instances, and a long, long tail of small ones that find it increasingly difficult to function. The new giants created in these federated systems will still find it hard to charge or sell ads. They will have to build their business models on ancillary services for which the social media function provides lock-in, just as today Gmail profits Google nothing, but it underpins people’s use of its ad-supported search engine, maps, Android phones, etc. This provides Google with a social graph it can use in its advertising business.”
Alf Rehn: The AI turf war will pit governments trying to control bad actors against bad actors trying to weaponize AI tools
Rehn, professor of innovation, design and management at the University of Southern Denmark, wrote, “Humans and technology rarely develop in perfect sync, but we will see them catching up. We’ve lived through a period in which digital tech has developed at speeds we’ve struggled to keep up with; there is too much content, too much noise and too much disinformation.
“Slowly but surely, we’re getting the tools to regain some semblance of control. AI used to be the monster under our beds, but now we’re seeing how we might make it our obedient dog (although some still fear it might be a cat in disguise). As new tools are released, we’re increasingly seeing people use them for fearless experimentation, finding ways to bend ever more powerful technologies to human wills. Rather than fearing that AI and other technologies will take our jobs and make us obsolete, humans are finding ever more ways to elevate themselves with technology, making digital wrangling not just the hobby of a few forerunners but a new folk culture.
“There was a time when using electricity was something you could only do after serious education and a long apprenticeship. Today, we all know how a plug works. The same is happening in the digital space. Increasingly, digital technologies are being made so easy to use and manipulate that they become the modern equivalent of electricity. Once every man, woman and child knows how to use an AI to solve a problem, digital technology becomes ever less scary and more and more the equivalent of building with Lego blocks. In 2035 the limits are not technological, but creative and communicative. If you can dream it and articulate it, digital technology can build it, improve upon it and help you transcend the limitations you thought you had.
“That is, unless a corporate structure blocks you.
“Spider-Man’s Uncle Ben said, ‘With great power comes great responsibility.’ What happens when we all gain great power? The fact that some of us will act irresponsibly is already well known, but we also need to heed the backlash this all brings. There are great institutional powers at play that may not be that pleased with the power that the new and emerging digital technologies afford the general populace. At the same time, there is a distinct risk that radicalized actors will find ever more toxic ways to utilize the exponentially developing digital tools – particularly in the field of AI. A common fear in scary future scenarios is that AIs will develop to a point where they subjugate humanity. But right now, leading up to 2035, our biggest concern is the ways in which humans are and will be weaponizing AI tools.
“Where this places most of humanity is in a double bind. As digital technology becomes more and more powerful, state institutions will aim to curtail bad actors using it in toxic ways. At the same time, and for the same reason, bad actors will find ever more creative ways to use it to cheat, fool, manipulate, defraud and otherwise mess with us. The average Joe and/or Jane (if such a thing exists anymore) will be caught up in the coming AI turf wars, and some will become collateral damage.
“What this means is that the most menacing thing about digital technologies won’t be the tech itself, nor any one person’s deployment of the same, but being caught in the pincer movement of attempted control and wanton weaponization. We think we’ve felt this now, with the occasional social media post being quarantined, but things are about to get a lot, lot worse.
“Imagine having written a simple, original post, only to see it torn apart by content-monitoring software and at the same time endlessly repurposed by agents who twist your message to its very antithesis. Imagine this being a normal, daily affair. Imagine being afraid to even write an email, lest it becomes fodder in the content wars. Imagine tearing your children’s tech away, just to keep them safe for a moment longer.”
Garth Graham: We don’t understand what society becomes when machines are social agents
Graham, longtime Canadian networked communities leader, wrote, “Consider the widely accepted Internet Society phrase, ‘Internet Governance Ecology.’ In that phrase, what does the word ecology actually mean? Is the Internet Society’s description of Internet governance as ecology a metaphor, an analogy or a reality? And, if it is a reality, what are the consequences of accepting it?
“Digital technology surfaces the importance of understanding two different approaches to governance. Our current understanding of governance, including democracies, is hierarchical, mechanistic and measures things on an absolute scale. The rules about making rules are assumed to be applied externally from outside systems of governance. And this means that those with power assume their power is external to the systems they inhabit. The Internet, as a set of protocols for inter-networking, is based on a different assumption. Its protocols are grounded in a shift in epistemology away from the mechanistic and toward the relational.
“It is a common pool resource and an example of the governance of complex adaptive self-organizing systems. In those systems, the rules about making rules are internal to each and every element of the system. They are not externally applied. This complexity means that the adaptive outcomes of such systems cannot be predicted from the sum of the parts. The assumption of control by leadership inherent in the organization of hierarchical systems is not present. In fact, the external imposition of management practices on a complex adaptive system is inherently disruptive of the system’s equilibrium. So the system, like a packet-switched network, has to route around it to survive. …
“I do not think we understand what society becomes when machines are social agents. Code is the only language that’s executable. It is able to put a plan or instruction or design into effect on its own. It is a human utterance (artifact) that, once substantiated in hardware, has agency. We write the code and then the code writes us. Artificial intelligence intensifies that agency. That makes necessary a shift in our assumptions about the structure of society. All of us now inhabit dynamic systems of human-machine interaction. That complexifies our experience. Yes, we make our networks and our networks make us. Interdependently, we participate in the world and thus change its nature. We then adapt to an altered nature in which we have participated. But the ‘we’ in those phrases now includes encoded agents that interact autonomously in the dynamic alteration of culture. Those agents sense, experience and learn from the environment, modifying it in the process, just as we do. This represents an increase in the complexity of society and the capacity for radical change in social relations.
“Ursula Franklin defined technology as ‘the way we do things around here’: ‘Technology involves organization, procedures, symbols, new words, equations, and, most of all, it involves a mindset.’ It becomes different as a consequence of a shift in the definition of ‘we.’ AI increases our capacity to modify the world, and thus alter our experience of it. But it puts ‘us’ into a new social space we neither understand nor anticipate.”
Kunle Olorundare: There will be universal acceptance of open-source applications to help make AI and robotics safe and smart
Olorundare, vice president of the Nigeria Chapter of the Internet Society, wrote, “Digital technology is in our lives to stay. One area that excites me about the future is the use of artificial intelligence, which of course is going to shape the way we live by 2035. We have started to see the dividends of artificial intelligence in our society. Essentially, the human-centered development of digital tools and systems is safely advancing human progress in the areas of transportation, health, finances, energy harvesting and so on.
“As an engineer who believes in the power of digital technology, I see limitless opportunities for our transportation system. Beyond personal driverless cars and taxis, by 2035 our public transportation will be taken over by remote-controlled buses keeping such accurate time, with a margin of error of 0.0099, that personal cars will feel needless. This will be cheaper and more dependable.
“Autonomous public transport will be pocket-friendly to the general citizenry. It will also bring less pollution, as energy harvesting from green sources takes a tremendous positive turn with the use of IoT and other digital technologies that harvest energy from multiple sources, estimating how much energy is needed and which green sources are available at a particular time, with plus-one redundancy, hence minimal inefficiencies. Deployment of bigger drones that can come directly to your house to pick you up, after identifying you, debiting your digital wallet and confirming the payment, will be a reality. The use of paper tickets will be a thing of the past, as digital wallets to pay for all services will be ubiquitous.
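Olorundare’s “plus-one redundancy” can be illustrated with a toy source-selection routine. Everything below, the names, capacities and greedy strategy, is an invented sketch, not a real grid controller: cover the estimated demand from available green sources, then keep one extra source online as backup.

```python
# Toy green-source selection with plus-one redundancy: pick sources
# (largest first) until estimated demand is covered, then add one spare.
def select_sources(demand_kw, available):
    chosen, covered = [], 0.0
    for name, capacity_kw in sorted(available, key=lambda s: -s[1]):
        if covered >= demand_kw:
            chosen.append(name)  # the "+1" backup source
            break
        chosen.append(name)
        covered += capacity_kw
    return chosen, covered

sources = [("solar", 40.0), ("wind", 35.0), ("biogas", 20.0), ("hydro", 15.0)]
print(select_sources(60.0, sources))  # (['solar', 'wind', 'biogas'], 75.0)
```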
“In regard to human connections, governance and institutions and the improvement of social and political interactions, by 2035, the body of knowledge will be fully connected. There will be universal acceptance of open-source applications that make it possible to have a globally robust body of knowledge in artificial intelligence and robotics. There will be less depression in society. If your friends are far away, robots will be available as friends you can talk to and even watch TV with and analyze World Cup matches as you might do with your friends. Robots will also be able to contribute to your research work even more than what ChatGPT is capable of today. …
“Human knowledge and its verifying, updating, safe archiving by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge and society will be better off.
“Human health and well-being will benefit greatly from the use of AI, bringing about a healthier population, as sicknesses and diseases can be diagnosed more easily. Infectious diseases will spread less because robots can be deployed during highly infectious outbreaks, and pandemics can be curbed more easily. With enhanced big data using AI and ML, pandemics can be predicted and prevented, and the impact curve flattened in the shortest possible time using AI-driven pandemic management systems.
“It is pertinent to also look at the other side of the coin as we gain positive traction on digital technologies. There will be concern about the safety of humans as technology is used by scoundrels for crime, mischief and other negative ends. Technology is often used to attack innocent souls. It can be used to manipulate the public or destroy political enemies, thus it is not necessarily always the ‘bad guys’ who are endangering our society. Human rights may be abused. For example, a government may want to tie us to one digital wallet through a central bank digital currency and dictate how we spend our money. These are issues that need to be looked at in order not to trample on human rights. Technological colonization may also raise concern, as unique cultures may be eroded by global harmonization. This can create an unequal society in which some sovereign states benefit more than others.”
Jeff Jarvis: Let’s hope media culture changes and focus our attention on discovering, recommending and supporting good speech
Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, wrote, “I shall share several hopes and one concern:
- “I hope that the tools of connection will enable more and more diverse voices to at last be heard outside the hegemonic control of mass media and political power, leading to richer, more inclusive public discourse.
- “I hope we begin to see past the internet’s technology as technology and understand the net as a means to connect us as humans in a more open society and to share our information and knowledge on a more equitable and secure basis for the benefit of us all.
- “I hope we might finally move beyond mass media’s current moral panic over the internet as competition and, indeed, supersede the worst of mass media’s failing institutions, beginning with the notion of the mass and media’s invention of the attention economy.
- “I hope that – as occurred at the birth of print – we will soon turn our attention away from the futile folly of trying to combat, control and outlaw all bad speech and instead focus our attention and resources on discovering, recommending and supporting good speech.
- “I hope the tools of AI – the subject of mass media’s next moral panic – will help people intimidated by the tools of writing and research to better express their ideas and learn and create.
- “I hope we will have learned the lesson taught us by Elon Musk: that placing our discourse in the hands of centralized corporations is perilous and antithetical to the architecture and aims of the Internet; federation at the edge is a far better model.
- “I hope that regulators will support opening data for researchers to study the impact and value of the net – and will support that work with necessary resources.
“I fear the pincer movement from right and left, media and politics, against Section 230 and protection of freedom of expression will lead to regulation that raises liability for holding public conversation and places a chill over it, granting protection to and extending the corrupt reign of mass media and the hedge-fund-controlled news industry.”
Maja Vujovic: We will have tools that keep us from drowning in data
Vujovic, owner and director of Compass Communications in Belgrade, Serbia, wrote, “New technologies don’t just pop up out of the blue; they grow through iterative improvements of conceivable concepts moved forward by bold new ideas. Thus, in the decade ahead, we will see advances in most of the key breakthroughs we already know and use (automation and robotics, sensors and predictive maintenance, AR and VR, gaming and metaverse, generative arts and chatbots and digital humans) as they mature into the mass mainstream.
“Much as spreadsheet tech sprouted in the 1970s, first thriving on mainframe computers and then being adopted en masse when those apps migrated onto personal desktops, we will witness in the coming years countless variations of apps for personal use of our current top-tier technologies.
“The most useful among those tech-granulation trends will be the use of complex tech in personalized health care. We will see very likable robots serve as companions to ailing children and as care assistants to the infirm elderly. Portable sensors will graduate from superfluous swagger to life-saving utility. We are willing and able to remotely track our pets now, but gradually we will track our small children or parents with dementia as well.
“Drowning in data, we will have tools for managing other tools and widgets for automating our digital lives. Apps will work silently in the background, or in our sleep, tagging our personal photos, tallying our daily expenses, planning our celebrations or curating our one (combined) social media feed. Rather than supplanting us and scaling our creative processes (which by definition only works on a scale of one!) technology will be deployed where we need it the most, in support of what we do best – and that is human creation.
“To extract the full value from tools like chatbots, we will all soon need to master the arcane art of prompting AI. Prompt engineering is already a highly paid job. In the next decade, prompting AI will be an advanced skill at first, then a realm of licensed practitioners and eventually an academic discipline.
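For readers who have not tried it, prompting in practice looks something like the minimal sketch below, written against the openai Python package’s pre-1.0 chat interface; the model name, prompt wording and temperature value are illustrative assumptions, not anything Vujovic specifies.

```python
# A minimal sketch of prompting as a craft: a system prompt sets
# constraints, a user prompt states the task, and a sampling
# parameter trades creativity for predictability.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a careful editor. Answer in three bullet points."},
        {"role": "user",
         "content": "Summarize the risks of training AI only on English text."},
    ],
    temperature=0.2,  # lower values make the output more predictable
)
print(response["choices"][0]["message"]["content"])
```

Much of the “arcane art” she describes lives in the system and user strings, not in the code around them.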
“Of course, we still have many concerns. One of them is the limitation imposed by the way AI is now being trained on limited sets of data. Our most advanced digital technologies are a result of unprecedented aggregation. Top apps have enlisted almost half of the global population. The only foreseeable scenario for them is to keep growing. Yet our global linguistic capital is not evenly distributed.
“By compiling the vocabularies of languages with far fewer users than English or Chinese have, a handful of private enterprises have captured and processed the linguistic equity not only of English, Hindi or Spanish, but of many small cultures as well, such as Serbian, Welsh or Sinhalese. Those cultures have far less capacity to compile and digitally process their own linguistic assets by themselves. While most benign in times of peace, this imbalance can have grave consequences during more tense periods. Effectively, it is a form of digital supremacy, which in time might prove taxing on smaller, less wealthy cultures and economies.
“Moreover, technology is always at the mercy of other factors, which get to determine whether it is used or misused. The more potent the technologies at hand, the more damage they can potentially inflict. Having known war firsthand and having gone through the related swift disintegration of social, economic and technical infrastructure around me, I am concerned to think how utterly devastating such disintegration would be in the near future, given our total dependence on an inherently frail digital infrastructure.
“With our global communication signals fully digitized in recent times, there would be absolutely no way to get vital information, talk to distant relatives or collect funds from online finance operators in case of any accidental or intentional interruptions or blockades of Internet service. Virtually all amenities of contemporary living – our whole digital life – may be canceled with a flip of a switch, without recourse. As implausible as this sounds, it isn’t impossible. Indeed, we have witnessed implausible events take place in recent years. So, I don’t like the odds.”
Paul Jones: ‘We used to teach people how to use computers. Now we teach computers how to use people’
Jones, professor emeritus at UNC-Chapel Hill School of Information and Library Science, wrote, “There is a specter haunting the internet – the specter of artificial intelligence. All the powers of old thinking and knowledge production have entered into a holy (?) alliance to exorcise this specter: frenzied authors, journalists, artists, teachers, legislators and, most of all, lawyers. We are still waiting to hear from the pope.
“In education, we used to teach people how to use computers. Now, we teach computers how to use people. By aggregating all that we can of human knowledge production in nearly every field, the computers can know more about humans as a mass and as individuals than we can know of ourselves. The upside is these knowledgeable computers can provide, and will quickly provide, better access to health, education and in many cases art and writing for humans. The cost is a loss of personal and social agency at individual, group, national and global levels.
“Who wouldn’t want the access? But who wouldn’t worry, rightly, about the loss of agency? That double desire is what makes answering these questions difficult. ‘Best and most beneficial’ and ‘most harmful and menacing’ are not so much opposites as conjoined. They are like conjoined twins sharing essential organs and blood systems, and, unlike for some such twins, no known surgery can separate them. Just as cars gave us, over a short time, a democratization of travel and at the same time became major agents of death – immediately in wrecks, more slowly via pollution – AI and the infrastructure to support it will give us untold benefits and access to knowledge while causing untold harm.
“We can predict somewhat the direction of AI, but it will be more difficult to understand the human response. Humans are now, or will soon be, conjoined to AI even if they don’t use it directly. AI will be used on everyone, just as one need not drive or even ride in a car to be affected by the existence of cars. AI-driven changes will emerge when it possesses these traits:
- “Distinctive presences (aka voices, but also avatars personalized to suit the listener/reader in various situations). These will be created by merging distinctive human writing and speaking voices, say Bob Dylan + Bruce Springsteen.
- “The ability to emotionally connect with humans (aka presentation skills).
- “Curiosity. AI will do more than respond. It will be interactive and heuristic, offering paths that have not yet been offered – we have witnessed this AI behavior in the playing of Go and chess. AI will continue to present novel solutions.
- “A broad and unique worldview, because AI can be trained on all digitizable human knowledge and can avail itself of information from sensors beyond those available to humans. AI will be able to apply, say, Taoism to questions about weather.
- “Empathy. Humans do not have an endless well of empathy. We tire easily. But AI can seem persistently and constantly empathetic. You may say that AI empathy isn’t real, but human empathy isn’t always either.
- “Situational Awareness. Thanks to input from a variety of sensors, AI can and will be able to understand situations even better than humans.
“No area of knowledge work will be unaffected by AI and sensor awareness.
“How will we greet our robot masters? With fear, awe, admiration, envy and desire.”
Marjory Blumenthal: Technology outpaces our responses to unintended consequences
Blumenthal, senior adjunct policy researcher at RAND Corporation, wrote, “In a little over a decade, it is reasonable to expect two kinds of progress in particular: First are improvements in the user experience, especially for people with various impairments (visual, auditory, tactile, cognitive). A lot is said about diversity, equity and inclusion that focuses broadly on factors like income and education, but to benefit from digital technology requires an ability to use it that today remains elusive for many people for physiological reasons. Globally, populations are aging, a process that often confronts people with impairments they didn’t previously have (and of course many experience impairments from birth onward).
“Second, and notwithstanding concerns about concentration in many digital-tech markets, more indigenous technology is likely, at least to serve local markets and cultures. In some cases, indigenous tech will take advantage of indigenous data, which technological progress will make easier to amass and use, and more generally it will leverage a wider variety of talent, especially in the Global South, plus motivations to satisfy a wider variety of needs and preferences (including, but not limited to, support for human rights).
“There are two areas in which technology seems to get ahead of people’s ability to deal with it, either as individuals or through governance. One is the information environment. For the last few years, people have been coming to grips with manipulated information and its uses, and it has been easier for people to avoid the marketplace of ideas by sticking with channels that suit narrow points of view.
“Commentators lament the decline in trust of public institutions and speculate about a new normal that questions everything to a degree that is counterproductive. Although technical and policy mechanisms are being explored to contend with these circumstances, the underlying technologies and commercial imperatives seem to drive innovation that continues to outpace responses. For example, the ability to detect tends to lag the ability to generate realistic but false images and sound, although both are advancing.
“At a time when there has been a flowering of principles and ethics surrounding computing, new systems like ChatGPT with a high cool factor are introduced without any apparent thought to second- and third-order effects of using them – thoughtfulness takes time and risks loss of leadership. The resulting distraction and confusion likely will benefit the mischievous more than the rest of us – recognizing that crime and sex have long impelled uses of new technology.
“The second is safety. Decades of experience with digital technology have shown our limitations in dealing with cybersecurity, and the rise of embedded and increasingly automated technology introduces new risks to physical safety even as some of those technologies (e.g., automated vehicles) are touted as long-term improvers of safety.
“Responses are likely to evolve on a sector-by-sector basis, which might make it hard to appreciate interactions among different kinds of technology in different contexts. Although progress on the safety of individual technologies will occur over the next decade, the cumulation of interacting technologies will add complexity that will challenge understanding and response.”
David Porush: Advances may come if there are breakthroughs in quantum computing and the creation of a global court of criminal justice
Porush, author and longtime professor at Rensselaer Polytechnic Institute, wrote, “There will be positive progress in many realms. Quantum computing will become a partner to human creativity and problem solving. We have already seen sophisticated brute-force computing achieve this with ChatGPT. Quantum computing will surprise us and challenge us to exceed ourselves even further and in much more surprising ways. It will also challenge former expectations about nature and the supernatural, physics and metaphysics. It will rattle the cage of scientific axioms of the mechanist-vitalism duality. This is a belief, and a hope, with only hints of empirical evidence.
“We might establish a new worldwide court of criminal justice. Utopian dreams that the World Wide Web and new social technologies might change human behavior have failed – note the ongoing human criminality, predation, tribalism, hate speech, theft and deception, demagoguery, etc. Nonetheless, social networks also enable us to witness, record and testify to bad behavior almost instantly, no matter where in the world it happens.
“By 2035 I believe this will promote the creation (or at least the beginning of discussion of the creation) of a new worldwide court of criminal justice, including a means to prosecute and punish individual war criminals and bad nation-state actors. My hope is that this court would supersede our current broken UN and come to apolitical verdicts based on empirical evidence and universal laws. Citizens have shown, pretty universally, that they will give up rights to privacy to corporations for convenience. The court’s creation would also imply that the panopticon of technologies used for spying and intrusion, whether for profit or for totalitarian control by governments, will be converted to serve the global good.
“Social networking contributes to scientific progress, especially in the field of virology. The global reaction to the arrival of COVID-19 showed the power of data gathering, data sharing and collaborative analysis in combating a pandemic. Worldwide virology over the past two years is a fine avatar of what could be done for all the sciences. We can also make more effective use of global computing in regard to resource distribution. Politicians and nations have not shown enough political will to really address long-term solutions to crises like global warming, water shortages and hunger. At least the emerging data on these crises arm us with knowledge as the predicate to solutions. For instance, there’s not one less molecule of H2O available on Earth than there was a billion years ago; it’s just collected, made usable and distributed terribly.
“If we combine the appropriate level of political will with technological solutions (many of which we have in hand), we can distribute scarce resources and monitor harmful human or natural phenomena and address these problems with much more timely and effective solutions.”
Nandi Nobell: New interfaces in the metaverse and virtual reality will extend the human experience
Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, wrote, “Whether physical, digital or somewhere in-between, interfaces to human experiences are all we have and have ever had. The body-mind (consciousness) construct is already fully dependent on naturally evolved interfaces to both our surroundings and our inner lives, which is why designing more intuitive and seamless ways of interacting with all aspects of our human lives is both a natural and relevant step forward – it is crossing our current horizon to experience the next horizon. With this in mind, extended reality (XR), the metaverse and artificial intelligence grow ever more important, as our current endeavours carry us across one evident horizon after another simply by pursuing any advancement.
“Whether it is the blockchain we know of today or something more useful, user- and environmentally friendly, and smoother to integrate, such technology could allow simple instant contracts and permissionless activities of all sorts, enabling our world to verify the source and quality of content, along with many other benefits.
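A minimal sketch of that source-verification idea: assume an append-only public record (a blockchain or anything playing its role) stores content digests next to the publisher who vouched for them. The ledger, names and flow below are invented for illustration, not a description of any existing system.

```python
# Minimal sketch of content provenance via hashes: a trusted,
# append-only ledger (stubbed here as a dict) maps each content
# digest to the publisher who vouched for it.
import hashlib

ledger = {}  # stand-in for an append-only public record

def publish(publisher: str, content: bytes) -> str:
    """Record who vouches for this exact content; return its digest."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = publisher
    return digest

def verify(content: bytes):
    """Return the recorded publisher if the content is unaltered, else None."""
    return ledger.get(hashlib.sha256(content).hexdigest())

article = b"Original reporting, version 1.0"
publish("Example News Desk", article)
print(verify(article))                   # -> 'Example News Desk'
print(verify(article + b" [tampered]"))  # -> None
```

Any single changed byte changes the digest, so a tampered copy no longer matches the vouched-for record; a real system would add signatures and a distributed ledger, but the check has the same shape.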
“The best interfaces to experiences and services that can be achieved will influence what we can think and do, not just as tools and services in everyday life but also as the path to education, communication and so much else. Improving our interfaces – both physical and digital – makes the difference between having and not having superpowers as we advance.
“Connecting a wide range of technologies that bridge physical and digital possibilities grows the reach of both. This also means that thinking of the human habitat as belonging to all areas the body and mind can traverse is more useful than inventing new categories and silos by which to classify experiences. Whatever form the future’s multifaceted APIs take, they will have to be flexible, largely open and easy to use. Connectivity across ways, directions and degrees of clarity of communication can extend the reach and multiply the possibilities of anything – new or old.
“Drawbacks and challenges face us in the years ahead. First comes data – if the FAANGs [Facebook/Meta, Amazon, Apple, Netflix, Google] of the world (non-American equivalents are equally bad) are allowed to remain even nearly as powerful as they are today, problems will become ever-greater, as their strength as manipulators of individuals grows deeper and more advanced. Manipulation will become vastly more advanced and difficult to recognize.
“Artificial intelligence is already becoming so powerful and versatile it can soon shape any imagery, audio and text or geometry in an instant. This means anyone with the computational resources and some basic tools can trick just about anyone into new thoughts and ideas. The owners of the greatest databanks of individuals’ and companies’ history and preferences can easily shape strategies to manipulate groups, individuals and entire nations into new behaviours.
“Why invest in anything if you will have it stolen at some point? Is some sort of perfect fraud-prevention system (blockchain or better) relevant in a future in which any ownership of any sort of asset class – digital or physical – is under threat of loss or distortion?
“Extended reality and the metaverse often get a bit of a beating for how they can make people more vulnerable to harassment, and this is a real threat, but artificial intelligence is vastly more scalable – essentially it could impact every human with access to digital technology more or less simultaneously, while online harassment in an immersive context is not scalable in a similar sense.
“Striking a comfortable and reasonable balance between safe, sane human freedom and the surveillance technologies meant to maintain a baseline of human safety is going to be hard to achieve. There will be further and deeper abuses in many cultures. This may create a digital world and lifestyle that branches off quite heavily from its non-digital counterparts, as digital lives can be expected to be surveilled while physical life can, at least in principle, be somewhat free of eavesdropping if people are not in view or earshot of a digital device. That said, a state or company may still reward behaviour that trades away data of all sorts about anything happening offline – which has been the case in dictatorships throughout history. The very use and manufacturing of technology may also cost the planet more than it adds to the human experience, and as long as the promises of the future drive the value of stocks and investments, we are not likely to understand when to stop advancing on a frontier that is on a roll.
“Health care will likely become both better and worse – the class divide will open ever-greater gaps – but long-term it is probably better for most people. The underlying factors generally have more to do with individual human values than with the technologies themselves.
“There might be artificial general intelligence by 2035. We don’t know what unintended consequences it may portend. Such AI may have great potential to be helpful. Perhaps one individual can create value for humanity or the planet that is a million times greater than the next person’s contribution. But we do not know whether that contribution will hold its value over time, or whether the outcome will be just as bad as the one portrayed in Nick Bostrom’s ‘paper clip’ analogy.
“Most people are willing to borrow from the future; our children are meant to be this future. What do we make of it? Are children therefore multi-dimensional batteries?”
Charalambos Tsekeris: The surveillance-for-profit model can lead to more loss of privacy, cyber-feudalism and data oligarchy
Tsekeris, vice president of Greece’s Hellenic National Commission for Bioethics and Technoethics, wrote, “In a perfect world, by 2035 digital tools and systems would be developed in a human-centered way, guided by human design abilities and ingenuity. Regulatory frames and soft pressure from civil society would address the serious ethical, legal and social issues resulting from newly emerging forms of agency and privacy. And all in all, collective intelligence, combined with digital literacy, would increasingly cultivate responsibility and shape our environments (analog or digital) to make them safer and AI-friendly.
“Advancing futures-thinking and foresight analysis could substantially facilitate such understanding and preparedness. It would also empower digital users to be more knowledgeable and reflexive upon their rights and the nature and dynamics of the new virtual worlds.
“The power of ethics by design could ultimately orient internet-enabled technology toward upgrading the quality of human relations and democracy, while also protecting digital cohesion, trust and truth from the dynamics of misinformation and fake news.
“In addition, digital assistants and coordination tools could support transparency and accountability, informational self-determination and participation. An inclusive digital agenda might help all users benefit from the fruits of the digital revolution. In particular, innovation in the sphere of AI, clouds and big data could create additional social value and help to support people in need.
“The best and most beneficial change might be achieved by 2035 only if there is a significant increase in digital human, social and institutional capital that creates a happy marriage between digital capitalism and democracy.
“On the other hand, as things stand right now, by 2035 digital tools and systems will not be able to efficiently and effectively fight social divisions and exclusions. This is due to a lack of accountability, transparency and consensus in decision-making. Digital technology systems are likely to continue to function in shortsighted and unethical ways, forcing humanity to face unsustainable inequalities and an overconcentration of technoeconomic power. These new digital inequalities could amount to serious, alarming threats and existential risks for human civilization. These risks could put humanity in serious danger when combined with environmental degradation and the overcomplication of digital connectivity and the global system.
- “It is likely that no globally-accepted ethical and regulatory frameworks will be found to fix social media algorithms, thus the vicious circle between collective blindness, populism and polarization will be dramatically reinforced.
- “In addition, the fragmentation of the internet will continue (creating the ‘splinternet’), thus resulting in more geopolitical tensions, less international cooperation and less global peace.
- “The dominant surveillance-for-profit model is likely to continue to prevail by 2035, leading to further loss of privacy, deconsolidation of global democracy and the expansion of cyber-feudalism and data oligarchy.
- “The exponential speed and overcomplexity of datafication and digitalization in general will diminish the human capacity for critical reflection, futures thinking, information accuracy and fact-checking.
- “The overwhelming processes of automation and personalization of information will intensify feelings of loneliness among atomized individuals and further disrupt the domains of mental health and well-being.
- “By 2035, the ongoing algorithmization and platformization of markets and services will exert more pressure on working and social rights, further worsening exploitation, injustice, labor conditions and labor relations. Ghost work and contract breaches will dramatically proliferate.”
Davi Ottenheimer: An over-emphasis on automation instead of human augmentation is extremely dangerous
Ottenheimer, vice president for trust and digital ethics at Inrupt, a company applying the new Solid data protocol, predicted, “The best and most beneficial changes in digital life by 2035, by most accounts, will come from innovations in machine learning, virtualization and interconnected things (IoT). Learning technology can reduce the cost of knowledge. Virtualization technology can reduce the cost of presence. Interconnected things can improve the quantity of data for the previous two while also delivering more accessibility.
“This all speaks mainly to infrastructure tools, however, which need a special kind of glue. Stewardship and ethics can chart a beneficent course for the tools by focusing on an improved digital life that takes those three pieces and weaves them together with open standards for data interoperability. We saw a similar transformation of the 1970s closed data-processing infrastructure into the 1990s interconnected open-standards Web.
“This shift from centralized data infrastructure to federated and distributed processing is happening again already, which is expected to provide ever higher-quality/higher-integrity data. For a practical example, a web page today can better represent details of a person or an organization than most things could 20 years ago. In fact, we trust the Web to process, store and transmit everything from personalized medicine to our hobbies and work.
“The next 20 years will continue the trend toward Web 3.0 by allowing people to become more whole and real digital selves in a much safer and healthier format. The digital self could be free of self-interested moat platforms, using representative ones instead, with a right to be understood founded in a right to move and maintain data about ourselves for our own purposes (including wider social benefit).
“Knowledge will improve, as it can be far more easily curated and managed by its owner when it isn’t locked away, divided into complex walled gardens and forgotten in a graveyard of consents. A blood pressure sensor, for example, would send data to a personal data store for processing and learning far more privately and accurately. Metadata could then be shared narrowly, based on purpose and time, such as with a relative, coach, assistant or health care professional. People’s health and well-being will benefit directly from coming improvements in data-integrity architecture, as we are already seeing in consent-based, open-standards sharing infrastructure being delivered to transform lives for the better.
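The blood-pressure example can be sketched in a few lines of Python. Everything below – the class, field and method names, and the summary-only response – is invented for illustration; it is not the Solid protocol’s actual API, only the shape of the purpose- and time-scoped sharing Ottenheimer gestures at.

```python
# Hypothetical personal data store: raw readings stay local, and
# only summary metadata is shared, only while a matching grant is live.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Grant:
    grantee: str        # e.g., "cardiologist" or "coach"
    purpose: str        # e.g., "quarterly-checkup"
    expires: datetime   # consent lapses automatically

@dataclass
class PersonalDataStore:
    readings: list = field(default_factory=list)  # raw data never leaves
    grants: list = field(default_factory=list)

    def record(self, systolic: int, diastolic: int) -> None:
        self.readings.append((datetime.now(), systolic, diastolic))

    def grant(self, grantee: str, purpose: str, days: int) -> None:
        self.grants.append(
            Grant(grantee, purpose, datetime.now() + timedelta(days=days)))

    def metadata_for(self, grantee: str, purpose: str):
        """Share summary metadata only while a matching grant is live."""
        live = any(g.grantee == grantee and g.purpose == purpose
                   and g.expires > datetime.now() for g in self.grants)
        if not live:
            return None
        systolics = [s for _, s, _ in self.readings]
        return {"count": len(systolics),
                "avg_systolic": sum(systolics) / max(len(systolics), 1)}

store = PersonalDataStore()
store.record(122, 81)
store.grant("cardiologist", "quarterly-checkup", days=30)
print(store.metadata_for("cardiologist", "quarterly-checkup"))  # summary dict
print(store.metadata_for("advertiser", "profiling"))            # None
```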
“The most harmful or menacing changes likely to occur by 2035 in digital technology are related to the disruptive social effects of domain shifts. A domain shift pulls people out of areas they are familiar with and forces them to reattach to unfamiliar technology, such as with the end of horses and the rise of cars. In retrospect, the wheel was inferior to four-legged transit in very particular ways (e.g., requirement for a well-maintained road in favorable weather, dumping highly toxic byproducts in its wake) yet we are very far away from realizing any technology-based legged transit system.
“Sophisticated or not-well-understood technology can be misrepresented using fear tactics such that groups drive into decades of failure and harm without realizing they are being fooled. We’ve seen this in the renewed push for driverless vehicles, which are not very new but have lately been presented as magically near realization.
“Sensor-based learning machines are marketed unfairly to unqualified consumers to prey on their fear of losing control; people want to believe, without evidence, that a simple and saccharine digital assistant will make them safer. This has manifested as a form of addiction and over-dependence causing social and mental health issues, including an alarming rise in crashes and preventable deaths caused by inattentive drivers who believe misinformation about automation.
“Even more to the point, an over-emphasis on automation instead of augmentation leaves necessary human safety controls and oversight out of the loop on extremely dangerous and centrally controlled machines. It quickly becomes more practical and probable to poison a driverless algorithm in a foreign country to unleash a mass-casualty event using loitering cars as swarm kamikazes than to fire remote missiles or establish airspace control for bombs.
“Another example, related to misinformation, is the domain shift in identity and the digital self. With what are often referred to as deepfakes, an over-reliance on certain cues can be exploited to target people who don’t use other forms of validation. Trust sometimes is based on the sound of a voice or the visual appearance of a face. That was a luxury, as any deaf or blind person can attest. Now, in the rapidly evolving digital-tools market, anyone can sound or look like anyone, as if observers had become deaf or blind and needed some other means of establishing trust. This erodes old domains of trust, yet it also could radically shift trust by fundamentally altering what credible sources should be based upon.”
Mauro Ríos: We must create commonly accepted standards and generate a new social contract between humanity and technology
Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, wrote, “In 2035, advances in technology can and surely will surprise us, but they will surprise us even more IF human beings are willing to change their relationship with technology. For example, we may possibly see the emergence of the real metaverse, something that does not yet exist. We will see a clear evolution of wearable tech, and we will also be surprised at how desktop computing undergoes a remake of the PC.
“But technological advances alone do not create the future, even as they continue to advance unfailingly. The ways in which people use them are what matter. What should occupy us is understanding whether we and tech will be friends, lovers or partners in a happy marriage. We have discovered – from the laws of robotics to the ethics behind artificial intelligence – that our responsibility as a species grows as we create technology and come to dominate it. It is important that we generate a new social contract between it and us.
“The ubiquity of technology in our lives must lead us to question how we relate to it. Even back in the 1970s and 1980s it was very clear that the border between the human and the non-human was quite likely to blur soon. Today that border is blurry in certain scenarios that generate doubts, suspicions and concerns.
“By the year 2035, humans should have already resolved this discussion and have adapted and developed new, healthy models of interaction with technology. Digital technology is a permanent part of our world in an indissoluble way. It is necessary that we include a formal chapter on it in our social contract. Technology incites us, provokes us, corners us and causes us to question everything. There will be more complex challenges than we can imagine.
“One of the biggest risks today emerges from the fact that the technology industry is resistant to establishing common standards. Steps like those taken by the European Union in relation to connectors are important, but technology companies continue to resist standardization for economic gain. In the past, most of the battles over standardization were hardware-related; today they are software-related.
“If we want to develop things like the true metaverse or the conquest of Mars, technology has to have common criteria in key aspects. Standards should be established in artificial intelligence, automation, remote or virtual work, personal medical information, educational platforms, interoperability and communications, autonomous systems and others.”
David A. Banks: If Big Tech firms don’t change their ‘infinite expansion model,’ challenging new, humane systems will arise
Banks, director of globalization studies at the University at Albany-SUNY, commented, “Between now and 2035, the tech industry will experience a declining rate of profit, and individual firms will seek to extract as much revenue as possible from existing core services; users could thus begin to critically reevaluate their reliance on large-scale social media, group chat systems (e.g., Slack, Teams) and perhaps even search as we know it. Advertising, the ‘internet’s original sin’ as Ethan Zuckerman so aptly put it in 2014, will combine with intractable free-speech debates, unsustainable increases in web-stack complexity and increasingly unreliable core cloud services to trigger a mass exodus from Web 2.0 services. This is a good thing!
“If Big Tech gets the reputation it deserves, that could lead to a renaissance of libraries and human-centered knowledge searching as an alternative to the predatory, profit-driven search services. Buying clubs and human-authored product reviews could conceivably replace algorithmic recommendations, which would be correctly recognized as the advertisements that they are. Rather than wring hands about ‘echo chambers,’ media could finally return to a partisan stance where biases are acknowledged, and audiences can make fully informed decisions about the sources of their news and entertainment. It would be more common for audiences to directly support independent journalists and media makers who utilize a new, wider range of platforms.
“On the supply side, up-and-coming tech firms and their financial backers could respond by throwing out the infinite expansion model established by Facebook and Google in favor of niche markets that are willing to spend money directly on services that they use and enjoy, rather than passively pay for ostensibly free services through ad revenue. Call it the ‘Humble Net’ if you like – companies that are small and aspire to stay small in a symbiotic relationship with a core, loyal userbase. The smartest people in tech will recognize that they have to design around trust and sustainability rather than trustless platforms built for infinite growth.
“I am mostly basing my worst-case-scenario prognostication on how the alt-right has set up a wide range of social media services meant to foster and promulgate its worldview.
“In this scenario, venture capital firms will not be satisfied with the Humble Net and will likely put their money into firms that sell to institutional buyers (think weapons manufacturers, billing and finance tools, work-from-home hardware and software and biotech). This move by VCs will have the aggregate effect of privatizing much-needed public goods, supercharging overt surveillance technology and stifling innovation in basic research that takes more than a few years to produce marketable products.
“As big companies’ products lose their sheen and inevitably lose loyal customers, they will likely attempt to become infrastructure, rather than customer-facing brands. This can be seen as a retrenchment of control over markets and an attempt to become a market arbiter rather than a dominant competitor. This will likely lead to monopolistic behavior – price gouging, market manipulation, collusion with other firms in adjacent industries and markets – that will not be readily recognizable by the public or regulators. There is no reason to believe regulatory environments will strengthen to prevent this in the next decade.
“Big firms, in their desperation for new sources of revenue, will turn toward more aggressive freemium subscription models and push into what is left of bricks-and-mortar stores. I have called this phenomenon the ‘Subscriber City,’ where entire portions of cities will be put behind paywalls. Everything from your local coffee shop to public transportation will either offer deep discounts to subscribers of an Amazon Prime-esque service or refuse direct payments altogether. Transportation services like Uber and Waze will more obviously and directly act like managers of segregation than convenience and information services.
“Western firms will be dragged into trade wars by an increasingly antagonistic U.S. State Department, leading to increased prices on goods and services and more overt forms of censorship, especially with regard to international current events. This will likely drive people to their preferred Humble Nets to get news of varying veracity. Right-wing media consumers will seek out conspiratorial jingoism, centrists will enjoy a heavily censored corporate mainstream media, and the left will be left victim to con artists, would-be journalism influencers and vast lacunas of valuable information.”
Lee Warren McKnight: ‘Good, bad and evil AI will threaten societies, undermine social cohesion … and undermine human well-being’
McKnight, professor of entrepreneurship and innovation at Syracuse University’s School of Information Studies, wrote, “Human well-being and sustainable development are likely to be greatly improved by 2035. This will be supported by shared cognitive computing software and services at the edge, or perhaps by a digital twin of each village, operating to custom, decentralized design parameters decided by each community. The effects will significantly raise the incomes of rural residents worldwide. This will not eliminate the digital divide, but it will transform it. Digital tools and systems will be nearly universally available. The grassroots can be digitalized, empowering the 37% of the world who are still largely off the grid in 2023. With ‘worst-case-scenario survival-as-a-service’ widely available, human safety will progress.
“This will be partially accomplished by low-Earth-orbit (LEO) microsatellite systems. Right now, infrastructureless wireless or cyber-physical infrastructure can span any distance. But that is just one piece of a wider shared cognitive cyber-physical (IoT) bundle of virtual services spanning technology, energy, connectivity, security, privacy, ethics, rights, governance and trust. Decentralized communities will adapt these digital, partially tokenized assets to their own needs while working toward the UN’s Sustainable Development Goals through 2035.
“Efforts are progressing through the ITU [International Telecommunication Union], Internet Society and many more UN and civil society organizations and governments, addressing the huge challenge to the global community to connect the next billion people. I foresee self-help, self-organized, adaptive cloud-to-edge Internet operators solving the problem of getting access to people’s homes and businesses everywhere. They are digitally transforming themselves and they are the new community services providers.
“The market effects of edge bandwidth management innovations, radically lower edge device and bandwidth costs through community traffic aggregation, and fantastically higher access to digital services will be significant enough to measurably raise the GDP in nations undertaking their own initiatives to digitalize the grassroots, beyond the current reach of telecommunications infrastructure. At the community level, the effect of these initiatives is immediately transformative for the youth of participating communities.
“How do I know all of this? Because we are already underway with the Africa Community Internet Program, launched in 2022 by the UN Economic Commission for Africa in cooperation with the African Union. Ongoing pilot projects are educating people in local governments and other Internet community multistakeholders about what is possible. …
“The second topic I’d like to touch on is the concept of trust in ‘zero trust’ environments. Right now, trust comes at a premium; in 2035 it will rely on sophisticated mechanisms. Certified ethical AI developers will be the new Silicon Valley elite priesthood. They will be the well-paid orchestrators of machine learning and cognitive communities, certified as trained to be ethical in code and by design. Some liability-insurance disputes will have delayed the progress of this movement, but by 2035 the practice and profession of certified ethical AI developer will have cleaned up many biased-by-poor-design legacy systems. And these developers will have begun to lead others toward this approach, which combines improved multidimensional security with attention to privacy, ethics and rights-awareness in the design of adaptive complex systems.
“Many developers and others in and around the technical community suddenly have a new interest in introductory level philosophy courses, and there is a rising demand for graduates who have double-majored in computer science and philosophy. Data scientists will work for and report to them. Of course, having a certification process for ethical AI developers does not automatically make firms’ business practices more ethical. It serves as a market signal that sloppy Silicon Valley practices also run risks, including loss of market share. We can hope that, standing alongside all of the statements of ethical AI principles, certified ethical AI developers will be 2035’s reality 5D TV stars, vanquishing bad and evil AI systems. …
“I do have quite a few concerns about human-centered development of digital tools and systems falling short of advocates’ goals. Good, bad and evil AI will threaten societies, undermine social cohesion, spark suicides and domestic and global conflict, and undermine human well-being. Just as profit-motivated actors, nation-states and billionaire oligarchs have weaponized advocates of guns over people – driving skyrocketing murder rates and a shorter lifespan in the United States – similar groups, and groups manipulating machine learning and neural network systems to manipulate others, are arising under the influence of AI.
“They already have. To define terms, good AI is ethical and good by evidence-based design. Bad AI is ill-formed either by ignorance and human error or bad design. In 2035 evil AI could be a good AI or a bad AI gone bad due to a security compromise or malicious actor; or it could be bad-to-the-bone evil AI created intentionally to disrupt communities, crash systems and foster murders and death.
- “The manufacturers of disinformation, both private sector and government information warfare campaign managers, will all be using a variety of ChatGPT-gone-bad-like tools to infect societal discourse, systems and communities.
- “The manipulated media and surveillance systems will be integrated to infect communities as a wholesale, on-demand service.
- “Custom evil AI services will be preferred by stalkers and rapists.
- “Mafia-like protection rackets will grow to pay off potential AI attackers as a cost of doing only modestly bad business.
- “Both retail and wholesale market growth for evil AI will have compound effects, with both cyber-physical mass-casualty events and more psychologically damaging unfair-and-unbalanced artificially intelligent evil digital twins that are perfectly attuned to personalize evil effects. Evil robotic process automation will be a growth industry through to 2035, to improve scalability.”
Frank Odasz: ‘The battle between good and evil has changed due to the power of technology’
Odasz, president of Lone Eagle Consulting, wrote, “By 2035, in a perfect world, everyone will have a relationship with AI in multiple forms. ChatGPT is an AI tool that can draft essays on any topic. Jobs will require less training and will be continually aided by AI helpers. The Congressional Office of Technology Assessment will be reinstated to counter the exponential abuses of AI, deepfake videos and all other known abuses. Creating trust in online businesses and secure identities will become commonplace. Four-day work weeks and continued growth in remote work and remote learning will mean everyone can make the living they want, living wherever they want.
“Everyone will have a global-citizenship mindset, working toward processes that empower everyone. Keeping all of humankind at the same pace of progress will become a shared goal as the volume of new innovations continues to grow, increasing opportunities for everyone to combine multiple innovations into new integrated ones.
“Developing human talent and agency will become a global shared goal. Purposeful use of our time will become a key component of learning. There will be those who spend many hours each day using VR goggles for work and gaming that feature increasingly social components. A significant portion of society will be able to opt out of most digital activities once universal basic income programs proliferate. Life, liberty and pursuit of happiness, equality before the law and new forms of self-exploration and self-care will proliferate.
“Collective values will emerge and become important in life choices. Reconnecting with nature and with our responsibility for stewardship of our planet’s environments, and of each other, will take a very purposeful role in everyone’s lives. As more people learn the benefits of being positive, progressive, tolerant of differences and open-minded, most people will agree that people are basically good. The World Values Survey recently reported that 78% of Swedish citizens believe people are basically good, compared with 15% of Latin Americans and 5% of respondents in Asia.
“In pursuing meaningful use of our time once we are freed from menial labor, we can create a new global culture of purpose that rallies all global citizens to work together to sustain civil society and our planet.
“With all the advances in tech, what could go wrong? Well, by 2035, the vague promise of broadband for all, providing meaningful, measurable, transformational outcomes, will create a split society, extending what we already see in 2023, with the most-educated leaning toward a progressive, tolerant, open-learning society able to adapt easily to accelerating change. Those left behind, without the mutual support necessary to learn to love learning and to benefit from accelerating technical innovation, will grow fearful of change, of learning, and of those who do understand the transformational potential of motivated, self-directed internet learning and, particularly, of collaborating with others. If we all share what we know, we’ll all have access to all our knowledge.
“Lensa AI is an app that turns your photo into many choices for an avatar and/or a more compelling ID photo, requiring only that you sign away all intellectual rights to your own likeness. Abuses of social media are catalogued in the Ledger of Harms from the Center for Humane Technology. It is known that foreign countries continue to implement increasingly insidious methods for proliferating misinformation and propaganda. Certainly the United States, internally, has severe problems in this regard due to severe political polarization that went nearly ballistic in 2020 and 2021.
“If a unified global value system evolves, there is hope international law can contain moral and ethical abuses. Note: The Scout Law, created in 1911, has a dozen generic values for common decency and served as the basis for the largest uniformed organizations in the world – Boy Scouts and Girl Scouts. Reverence is one trait that encompasses all religions. ‘Leave no one behind’ must be used to refer to those without a moral compass; positive, supportive culture; self-esteem; and common sense.
“Mental health problems are rampant worldwide. Vladimir Putin controls more than 4,500 nuclear warheads. In the United States, the proliferation of mass shootings shows that one person can wreak havoc on the lives of many others. Even if 99% of society evolves to be good people with moral values and generous spirits, the reality is that human society might still end in nuclear fire due to the actions of a few individuals, or even a single individual with a finger on the red button, capable of destroying billions and making huge parts of the planet uninhabitable. How can technology assure our future? Finland has built underground cities to house its entire population in the event of nuclear war.
“The battle between good and evil has changed due to the power of technology. The potential disaster that a few persons can inflict upon society continues to grow disproportionately to the security that the best efforts of good folks can deliver. This dichotomy, taken to extremes, might spell doom for us all unless radical measures are taken, down to the level of monitoring individuals every moment of the day.
“A-cultural worldviews need to evolve to create a common bond that accepts our differences as allowable commonalities. This is the key to the sustainability of the human race, and it is not a given. Human-caused climate change is already creating dire outcomes: drought, rising sea levels and much more. The risk of greater divisiveness will increase as the impacts of climate change continue to mount. Migration pressure is but one example.”
Frank Kaufmann: It all comes down to how humans use digital technology
Kaufmann, president of Twelve Gates Foundation and Values in Knowledge Foundation, wrote, “I find all technological development good if developed and managed by humans who are good.
“The punchline is always this: To the extent that humans are impulsively driven by compassion and concern for others and for the good of the whole, there is not a single prospective technological or digital breakthrough that bodes ill in its own right. Yet, to the extent that humans are impulsively driven for self-gain, with others and the good of the whole as expendable in the equation, even the most primitive industrial/technological development is to be feared.
“I am extreme in holding this view as simple, fundamental and universal. For example, if humans were fixed in an inescapable makeup characterized by care and compassion, the development of an exoskeletal, indestructible, AI-controlled military robot that could anticipate my movements up to four miles away and morph to look just like my loving grandmother could be a perfectly wonderful development for the good of humankind. On the other hand, if humans cannot be elevated above the grotesque makeup in which others and the greater good are expendable in the pursuit of selfish gain, then even the invention of a fork is a dangerous, even horrifying thing.
“The Basis to Assess Tech – Human Purpose, Human Nature: I hold that the existence of humans is intentional, not random. This starting point establishes for me two bases for assessing technological progress: How does technological/digital development relate to 1) human purpose and 2) human nature?
“Human purpose: Two things are the basis for assessing anything: the purpose and the nature of the agent. This is the same whether we assess CRISPR gene editing or whether I turn left or right at a stoplight. The question in both cases is: Does this action serve our purpose? This tells us if the matter in question is good or bad. It simply depends on what we are trying to do (our purpose). If our purpose is to get to our mom’s house, then turning left at the light may be a very bad thing to do. If the development of CRISPR gene editing is to elevate dignity for honorable people, it is good. If it is to advance the lusts of a demonic corporation, or the career of an ego-insane medical monster, then likewise breakthroughs in CRISPR gene editing are worrisome.
“Unfortunately, it is very difficult to know what human purpose is. Only religious and spiritual systems recommend what that might be.
“Human nature: The second basis for assessing things (including digital and technological advances) relates to human nature. This is more accessible. We can ask: Does the action comport with our nature? For simplicity I’ve created a limited list of what humans desire (human nature):
Original desires
- To love and be loved
- Privacy (personal sovereignty)
- To be safe and healthy
- Freedom and the means to create (creativity can be in several areas)
- Ingenuity
- Artistic expression
- Sports and leisure, physical and athletic experience
Perverse and broken desires
- Pursuit of and addiction to power
- Willingness to indulge in conflict
“Three bases to assess: In sum, then, analyzing and assessing technological and digital development by the year 2035 should move along three lines of measure.
- Does the breakthrough serve the reason why humans exist (human purpose)?
- Which part of human nature does the breakthrough relate to?
- Can the technology have built-in protections to prevent perfectly exciting, wonderful breakthroughs from becoming a dark and malign force over our lives and human history?
“All technology coming in the next 15 years is a two-edged sword according to the measures for analysis described above. The following danger-level categories help describe things further.
“Likely benign, little danger – Some coming breakthroughs are merely exciting, such as open-air gesture technology, prosthetics with a sense of touch, printed food, printed organs, space tourism, self-driving vehicles and much more.
“Medium danger – Some coming digital and tech breakthroughs carry medium levels of concern for their social or ethical implications, such as hybrid-reality environments, tactile holograms, domestic service and workplace robots, quantum-encrypted information, biotechnology and nanotechnology and much more.
“Dangerous, great care needed – Finally, there is a category of coming developments that should be put in the high concern category. These include brain-computer interfaces and brain-implant technology, genome editing, cloning, selective breeding, genetic engineering, artificial general intelligence (AGI), deepfakes, people-hacking, clumsy efforts to fix the environment through potentially risky geoengineering, CRISPR gene editing and many others.
“Applying the three bases to assess the benefits and dangers of technological advances in our time can be done rigorously, systematically and extensively for any pending digital and tech developments. They are listed here on a spectrum from less worrisome to potentially devastating. It is not the technology itself that marks it as hopeful or dystopic. That divergence is independent of the inherent qualities of the technology; it is tied to the maturation of human divinity, ideal human nature.”
Charles Fadel: Try to discover the ‘unknown unknowns’
Fadel, founder of the Center for Curriculum Redesign and co-author of “Artificial Intelligence in Education: Promises and Implications for Teaching and Learning,” wrote, “The amazing thing about this moment is how quickly artificial intelligence is spreading and being applied. With that in mind, let’s walk through some big-picture topics. On human-centered development of digital tools and systems: I do believe significant autonomy will be achieved by specialized robotic systems, assisting in driving (U.S.), (air and land) package delivery, or bedside patient care (Japan), etc. But we don’t know exactly what ‘significant’ entails. In other words, the degree of autonomy may vary by the life-criticality of the applications – the more life-critical, the less trustworthy the application (package delivery on one end, being driven safely on the other).
“On human knowledge: Foundation AI models like GPT-3 are surprising everyone and will lead to hard-to-imagine transformations. What can a quadrillion-item system achieve? Is there a diminishing return? We will find out in the next six months, possibly even before the time this is published. We’ve already seen how very modest technological changes disrupt societies. I witnessed the discussion regarding the Global System for Mobile Communications (GSM) effort years ago, when technologists were trying to see if they could use a bit of free bandwidth that was available between voice communications channels. They came up with short messages – 160 characters that needed only about 10 kilohertz of bandwidth. I wondered at the time: Who would care about this?
“Well, people did care, and they started exchanging astonishing volumes of messages. The humble text message has led to societal transformations that were complete ‘unknown unknowns.’ First it led to the erosion of commitments (by people not showing up when they said they would), and not long afterward it led to the erosion of democracy via Twitter and other social media.
“If something that small can have such an impact, it’s impossible to imagine what impact foundation models will have. For now, I’d recommend that everybody take a deep breath and wait to see what the emerging impact of these models is. We are talking about punctuated equilibria à la Stephen Jay Gould for AI, but we’re not sure how far we will go before the next plateauing.
“Human connections, governance and institutions: I worry about regulation. I continue to marvel at the inability of lawyers and politicians, who are typically humanities types, to understand the impact of technologies for a decade or more after they erupt. This leads to catastrophes before anyone is galvanized to react. Look at the catastrophe of Facebook and Cambridge Analytica and the 2016 election. No one in the political class was paying attention then, and there still aren’t any real regulations. There is no anticipation in political circles of how technology changes things, even when the dangers are obvious. It takes two to three decades for them to react, yet regulations should come within three years at worst.”
Anonymous: To avoid bad outcomes, generative AI must be shaped to serve people, not exploit them
A director of applied science for one of the top tech companies beginning to develop generative AI wrote, “I am deeply concerned about the societal implications of the emerging generative AI paradigm, but not for the reasons that are currently in the news. Specifically, on the current path, we risk both destroying the potential of these AI systems and, quite worryingly, the business models (and employment) of anyone who generates content. If we get this right, we can create a virtuous loop that will benefit all stakeholders, but that will require significant changes in policy, law and market dynamics.
“Key to this concern is the misperception that generative AI is in fact AI. Given how these technologies work – and in particular their voracious appetite for a truly astonishing amount of textual and imagery content to learn from – they’re best understood as collective intelligence, not artificial intelligence. Without hundreds of thousands of scientific articles, news articles, Wikipedia articles, user-generated Q&A content, books, e-commerce listings, etc., these things would be dumb as a doorknob.
“The risk here is that AI companies and content producers fail to recognize that they have extensive mutual dependence with respect to these systems. If people attribute all of the value to the AI systems (and AI companies delude themselves into this), all the benefits (economic and otherwise) will flow to AI companies. This risk is exacerbated by the fact that these technologies are able to write news articles, Wikipedia articles, etc., disrupting the methods of production for these datasets. The implications of this are very serious:
- “Generative AI will substantially increase economic inequality, which is associated with terrible societal outcomes.
- “Generative AI will threaten some of society’s most important institutions: news institutions, science, organizations like the Wikimedia Foundation, etc.
- “Generative AI will eventually fail as it destroys the training data it needs to work.
“To avoid these outcomes, we urgently need a few things:
- “We must strengthen content ownership laws to make clear that if you want to train an AI on a website or document, you need permission from the content owner. This can come both via new laws and lawsuits that lead to new legal interpretations.
- “We need people to realize that they have a lot of power to stop AI companies from using all of their content without permission. There are very simple solutions, ranging from website owners using robots.txt (see the sketch after this list) to scientific authors exercising the copyrights they hold. Even expressing the wish to be opted out has worked in a number of important early cases.
- “We need companies to understand the market opportunities in strengthened content ownership laws and practices, which can put the force of the market behind a virtuous loop. For instance, an AI company that seeks to gain exclusive licenses to particularly valuable training content would be a smart AI company and one that will share the benefits of its technologies with all the people who helped create them.”
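As a concrete illustration of the robots.txt option mentioned above, a site owner can publish directives like the sketch below. The user-agent tokens shown (GPTBot for OpenAI’s crawler, CCBot for Common Crawl) are published tokens, but honoring them is voluntary on the crawler’s part, so this is a signal of the owner’s wishes, not an enforcement mechanism:

```
# robots.txt – ask AI-training crawlers not to collect this site's content.
# Compliance is voluntary; these directives express intent, not enforcement.

User-agent: GPTBot      # OpenAI's web crawler
Disallow: /

User-agent: CCBot       # Common Crawl, a frequent source of training data
Disallow: /
```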
The following four essays are reprinted with the authors’ permission from the section “Hopes for 2023” in the Dec. 28, 2022, edition of Andrew Ng’s The Batch AI newsletter.
Yoshua Bengio: Our AI models should feature a human-like ability to discover and reason with high-level concepts and relationships
Bengio, scientific director of Mila Quebec AI Institute and co-winner of the 2018 ACM A.M. Turing Award for his contributions to breakthroughs in the AI field of deep learning, wrote, “In the near future we will see models that reason. Recent advances in deep learning largely have come by brute force: taking the latest architectures and scaling up compute power, data and engineering. Do we have the architectures we need, and all that remains is to develop better hardware and datasets so we can keep scaling up? Or are we still missing something?
“I believe we’re missing something, and I hope for progress toward finding it in the coming year.
“I’ve been studying, in collaboration with neuroscientists and cognitive scientists, the performance gap between state-of-the-art systems and humans. The differences lead me to believe that simply scaling up is not going to fill the gap. Instead, building into our models a human-like ability to discover and reason with high-level concepts and the relationships between them can make the difference.
“Consider the number of examples necessary to learn a new task, known as sample complexity. It takes a huge amount of gameplay to train a deep learning model to play a new video game, while a human can learn this very quickly. Related issues fall under the rubric of reasoning. A computer needs to consider numerous possibilities to plan an efficient route from here to there, while a human doesn’t.
“Humans can select the right pieces of knowledge and paste them together to form a relevant explanation, answer or plan. Moreover, given a set of variables, humans are pretty good at deciding which is a cause of which. Current AI techniques don’t come close to this human ability to generate reasoning paths. Often, they’re highly confident that their decision is right, even when it’s wrong. Such issues can be amusing in a text generator, but they can be life-threatening in a self-driving car or medical diagnosis system.
“Current systems behave in these ways partly because they’ve been designed that way. For instance, text generators are trained simply to predict the next word rather than to build an internal data structure that accounts for the concepts they manipulate and how they are related to each other. But we can design systems that track the meanings at play and reason over them while keeping the numerous advantages of current deep learning methodologies. In doing so, we can address a variety of challenges from excessive sample complexity to overconfident incorrectness.
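The “predict the next word” objective Bengio refers to can be stated in a few lines. The sketch below is a toy illustration only (random tokens and a small recurrent model standing in for a large transformer, all names invented here), not anyone’s production system:

```python
# Minimal sketch of the next-token prediction objective text generators
# are trained on: minimize cross-entropy between the model's prediction
# and the actual next word at every position.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64                        # hypothetical toy sizes

embed = nn.Embedding(VOCAB, DIM)
rnn = nn.LSTM(DIM, DIM, batch_first=True)    # stand-in for a transformer
head = nn.Linear(DIM, VOCAB)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB, (8, 33))    # a batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict word t+1 from words <= t

hidden, _ = rnn(embed(inputs))
logits = head(hidden)                        # (batch, time, vocab)
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()                              # gradients for ordinary next-word training
```

Nothing in this objective asks the model to build the internal structure of concepts and relations Bengio describes; it rewards only accurate next-word guesses.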
“I’m excited by generative flow networks, or GFlowNets, an approach to training deep nets that my group started about a year ago. This idea is inspired by the way humans reason through a sequence of steps, adding a new piece of relevant information at each step. It’s like reinforcement learning, because the model sequentially learns a policy to solve a problem. It’s also like generative modeling; it can sample solutions in a way that corresponds to making a probabilistic inference.
“If you think of an interpretation of an image, your thought can be converted to a sentence, but it’s not the sentence itself. Rather, it contains semantic and relational information about the concepts in that sentence. Generally, we represent such semantic content as a graph, in which each node is a concept or variable. GFlowNets generate such graphs one node or edge at a time, choosing which concept should be added and connected to which others in what kind of relation. I don’t think this is the only possibility, and I look forward to seeing a multiplicity of approaches. Through a diversity of exploration, we’ll increase our chance to find the ingredients we’re missing to bridge the gap between current AI and human-level AI.”
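For readers who want a concrete picture, the sketch below is an illustrative toy (not Mila’s code) that uses the trajectory-balance objective from the GFlowNet literature to train a policy that constructs a short bit string one step at a time; real GFlowNets apply the same recipe to graphs built node by node:

```python
# Toy GFlowNet sketch: build a 6-bit string one bit at a time and train
# the forward policy with the trajectory-balance objective, so complete
# strings end up sampled with probability proportional to a reward R(x).
# Each string here has exactly one build order, so the backward-policy
# term of trajectory balance drops out.
import torch
import torch.nn as nn

LENGTH = 6

def log_reward(bits):
    # Hypothetical reward: exponentially prefer strings with more 1s.
    return float(sum(bits))

policy = nn.Sequential(nn.Linear(LENGTH, 64), nn.ReLU(), nn.Linear(64, 2))
log_z = nn.Parameter(torch.zeros(()))        # learned log partition function
opt = torch.optim.Adam(list(policy.parameters()) + [log_z], lr=1e-2)

for step in range(2000):
    bits, log_pf = [], torch.zeros(())
    for t in range(LENGTH):
        # Encode the partial string: built positions are +1/-1, rest 0.
        state = torch.zeros(LENGTH)
        state[:t] = torch.tensor(bits, dtype=torch.float32) * 2 - 1
        logits = policy(state).log_softmax(-1)
        action = torch.distributions.Categorical(logits=logits).sample()
        log_pf = log_pf + logits[action]     # log-prob of the chosen step
        bits.append(int(action))
    # Trajectory balance: (log Z + log P_F(trajectory) - log R(x))^2
    loss = (log_z + log_pf - log_reward(bits)) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, sampling from the learned policy approximates drawing strings with probability proportional to their reward – the “probabilistic inference” behavior Bengio describes.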
Douwe Kiela: We must move past today’s AI and its many shortcomings, such as ‘hallucinations’; these systems are also too easily misused or abused
Kiela, an adjunct professor in symbolic systems at Stanford University, previously the head of research at Hugging Face and a scientist at Facebook Research, wrote, “Expect less hype and more caution. In 2022 we really started to see AI go mainstream. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field. These are exciting times, and it feels like we are on the cusp of something great: a shift in capabilities that could be as impactful as – without exaggeration – the Industrial Revolution.
“But amidst that excitement, we should be extra wary of hype and extra careful to ensure that we proceed responsibly. Consider large language models. Whether or not such systems really understand meaning, lay people will anthropomorphize them anyway, given their ability to perform arguably the most quintessentially human act: producing language. It is essential that we educate the public on the capabilities and limitations of these and other AI systems, especially because the public largely thinks of computers as good old-fashioned symbol processors – for example, that they are good at math and bad at art, while currently the reverse is true.
“Modern AI has important and far-reaching shortcomings. Among them:
- “Systems are too easily misused or abused for nefarious purposes, intentionally or inadvertently.
- “Not only do they hallucinate information, but they do so with seemingly very high confidence and without the ability to attribute or credit sources.
- “They lack a rich enough understanding of our complex multimodal human world and do not possess enough of what philosophers call ‘folk psychology,’ the capacity to explain and predict the behavior and mental states of other people.
- “They are arguably unsustainably resource-intensive, and we poorly understand the relationship between the training data going in and the model coming out.
- “Lastly, despite the unreasonable effectiveness of scaling – for instance, certain capabilities appear to emerge only when models reach a certain size – there are also signs that with that scale comes even greater potential for highly problematic biases and even less-fair systems.
“In 2023 we’ll see work on improving all of these issues. Research on multimodality, grounding and interaction can lead to systems that understand us better because they understand our world and our behavior better. Work on alignment, attribution and uncertainty may lead to safer systems less prone to hallucination and with more accurate reward models. Data-centric AI will hopefully show the way to steeper scaling laws, and more efficient ways to turn data into more robust and fair models. Finally, we should focus much more seriously on AI’s ongoing evaluation crisis. We need better and more holistic measurements – of data and models – to ensure that we can characterize our progress and limitations and understand, in terms of ecological validity (for instance, real-world use cases), what we really want out of these systems.”
Alon Halevy: We can take advantage of our personal data to improve our health, vitality and productivity
Halevy, a director with Reality Labs Research, a division of Meta Platforms, wrote, “Your personal data timeline lies ahead. The important question of how companies and organizations use our data has received a lot of attention in the technology and policy communities. An equally important question that deserves more focus in 2023 is how we, as individuals, can take advantage of the data we generate to improve our health, vitality and productivity.
“We create a variety of data throughout our days. Photos capture our experiences, phones record our workouts and locations, Internet services log the content we consume and our purchases. We also record our want-to-do lists: desired travel and dining destinations, books and movies we plan to enjoy and social activities we want to pursue.
“Soon smart glasses will record our experiences in even more detail. However, this data is siloed in dozens of applications. Consequently, we often struggle to retrieve important facts from our past and build upon them to create satisfying experiences on a daily basis. But what if all this information were fused in a personal timeline designed to help us stay on track toward our goals, hopes and dreams? This idea is not new. Vannevar Bush envisioned it in 1945, calling it a memex. In the 1990s, Gordon Bell and his colleagues at Microsoft Research built MyLifeBits, a prototype of this vision. The prospects and pitfalls of such a system have been depicted in film and literature.
“Privacy is obviously a key concern when all our data is kept in a single repository that must be protected against intrusion and government overreach. Privacy means that your data is available only to you, but if you want to share parts of it, you should be able to do so on the fly by uttering a command such as, ‘Share my favorite cafes in Tokyo with Jane.’ No single company has all our data or the trust to store all of it. Therefore, building technology that enables personal timelines should be a community effort that includes protocols for the exchange of data, encrypted storage and secure processing.
“Building personal timelines will also force the AI community to pay attention to two technical challenges that have broader application. The first challenge is answering questions over personal timelines. We’ve made significant progress on question answering over text and multimodal data. However, in many cases, question answering requires that we reason explicitly about sets of answers and aggregates computed over them. This is the bread and butter of database systems. For example, answering ‘what cafes did I visit in Tokyo?’ or ‘how many times did I run a half marathon in under two hours?’ requires that we retrieve sets as intermediate answers, which is not currently done in natural language processing. Borrowing more inspiration from databases, we also need to be able to explain the provenance of our answers and decide when they are complete and correct.
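To make the database analogy concrete, here is a small sketch of those two example questions expressed as set and aggregate queries; the `timeline` table, its columns and its rows are all invented for illustration:

```python
# Sketch: the two example questions as set/aggregate queries over a
# hypothetical personal-timeline table. Schema and rows are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE timeline (
    kind TEXT,      -- 'visit', 'run', ...
    place TEXT,     -- venue name
    category TEXT,  -- 'cafe', 'half marathon', ...
    city TEXT,
    minutes REAL    -- duration, for workouts
)""")
db.executemany(
    "INSERT INTO timeline VALUES (?, ?, ?, ?, ?)",
    [("visit", "Cafe A", "cafe", "Tokyo", None),
     ("visit", "Cafe B", "cafe", "Tokyo", None),
     ("run", None, "half marathon", None, 118.0),
     ("run", None, "half marathon", None, 124.5)])

# 'What cafes did I visit in Tokyo?' -- a set-valued answer.
cafes = db.execute(
    "SELECT place FROM timeline "
    "WHERE kind='visit' AND category='cafe' AND city='Tokyo'").fetchall()

# 'How many times did I run a half marathon in under two hours?' --
# an aggregate computed over an intermediate set.
count, = db.execute(
    "SELECT COUNT(*) FROM timeline "
    "WHERE kind='run' AND category='half marathon' AND minutes < 120").fetchone()

print([c for (c,) in cafes], count)   # -> ['Cafe A', 'Cafe B'] 1
```

Halevy’s point is that today’s question-answering models do not form these intermediate sets explicitly, which is exactly what makes such questions easy for a database and hard for a language model.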
“The second challenge is to develop techniques that use our timelines responsibly to improve personal well-being. Taking inspiration from the field of positive psychology, we can all flourish by creating positive experiences for ourselves and adopting better habits. An AI agent that has access to our previous experiences and goals can give us timely reminders and suggestions of things to do or avoid. Ultimately, what we choose to do is up to us, but I believe that an AI with a holistic view of our day-to-day activities, better memory and superior planning capabilities would benefit everyone.”
Reza Zadeh: Active learning is set to revolutionize machine learning, allowing AI systems to continuously improve and adapt over time
Zadeh, founder and CEO at Matroid, a computer-vision company, and adjunct professor at Stanford University, wrote, “As we enter 2023, there is a growing hope that the recent explosion of generative AI will bring significant progress in active learning. This technique, which enables machine learning systems to generate their own training examples and request them to be labeled, contrasts with most other forms of machine learning, in which an algorithm is given a fixed set of examples and usually learns from those alone.
“Active learning can enable machine learning systems to:
- “Adapt to changing conditions;
- “Learn from fewer labels;
- “Keep humans in the loop for the most valuable, difficult examples; and
- “Achieve higher performance.
“The idea of active learning has been in the community for decades, but it has never really taken off. Previously, it was very hard for a learning algorithm to generate images or sentences that were simultaneously realistic enough for a human to evaluate and useful to advance a learning algorithm. But with recent advances in generative AI for images and text, active learning is primed for a major breakthrough. Now, when a learning algorithm is unsure of the correct label for some part of its encoding space, it can actively generate data from that section to get input from a human.
“Active learning has the potential to revolutionize the way we approach machine learning, as it allows systems to continuously improve and adapt over time. Rather than relying on a fixed set of labeled data, an active learning system can seek out new information and examples that will help it better understand the problem it is trying to solve. This can lead to more accurate and effective machine learning models, and it could reduce the need for large amounts of labeled data.
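The loop Zadeh describes can be sketched in a few lines. The example below is the classic pool-based variant of uncertainty sampling (querying existing unlabeled examples rather than generating new ones, which is the advance he anticipates); the data is synthetic and the setup is purely illustrative:

```python
# Sketch of the classic active-learning loop: fit on a few labels, find
# the example the model is least sure about, ask a human to label it,
# repeat. The generative variant would synthesize these query points
# instead of drawing them from a pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
start = set(np.where(y == 0)[0][:5]) | set(np.where(y == 1)[0][:5])
labeled = [int(i) for i in start]              # a handful of seed labels
pool = [i for i in range(len(X)) if i not in start]

model = LogisticRegression(max_iter=1000)
for _ in range(20):
    model.fit(X[labeled], y[labeled])
    # Query the pool example the current model is least sure about ...
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)
    query = pool.pop(int(np.argmax(uncertainty)))
    # ... and ask a human for its label (simulated here by looking up y).
    labeled.append(query)

print("accuracy after 20 queries:", model.score(X, y))
```

Each round spends the human’s effort only on the most informative example, which is how active learning earns its label efficiency.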
“I have a great deal of hope and excitement that active learning will build upon the recent advances in generative AI. We are likely to see more machine learning systems that implement active learning techniques; 2023 could be the year it truly takes off.”