They have deep concerns about people’s and society’s overall well-being. But they also expect great benefits in health care, scientific advances and education.
This report covers results from the 16th “Future of the Internet” canvassing that Pew Research Center and Elon University’s Imagining the Internet Center have conducted together to gather expert views about important digital issues. This is a nonscientific canvassing based on a nonrandom sample; this broad array of opinions about where the potential influence of current trends may lead society between 2023 and 2035 represents only the points of view of the individuals who responded to the queries.
Pew Research Center and Elon’s Imagining the Internet Center sampled from a database of experts in a wide range of fields, inviting entrepreneurs, professionals and policy people based in government bodies, nonprofits and foundations, technology businesses and think tanks, as well as interested academics and technology innovators. The predictions reported here came in response to a set of questions in an online canvassing conducted between Dec. 27, 2022, and Feb. 21, 2023. In all, 305 technology innovators and developers, business and policy leaders, researchers and activists responded in some way to the question covered in this report. More on the methodology underlying this canvassing and the participants can be found in the section titled “About this canvassing of experts.”
Spurred by the splashy emergence of generative artificial intelligence and an array of other AI applications, experts participating in a new Pew Research Center canvassing have great expectations for digital advances across many aspects of life by 2035. They anticipate striking improvements in health care and education. They foresee a world in which wonder drugs are conceived and enabled in digital spaces; where personalized medical care gives patients precisely what they need when they need it; where people wear smart eyewear and earbuds that keep them connected to the people, things and information around them; where AI systems can nudge discourse into productive and fact-based conversations; and where progress will be made in environmental sustainability, climate action and pollution prevention.
At the same time, the experts in the new canvassing worry about the darker sides of many of the developments they celebrate. Key examples:
- Some expressed fears that align with the statement recently released by technology leaders and AI specialists arguing that AI poses a “risk of extinction” for humans and should be treated with the same urgency as pandemics and nuclear war.
- Some point to clear problems that have been identified with generative AI systems, which produce erroneous and unexplainable output and are already being used to foment misinformation and trick people.
- Some are anxious about the seemingly unstoppable speed and scope of digital tech that they fear could enable blanket surveillance of vast populations and could destroy the information environment, undermining democratic systems with deepfakes, misinformation and harassment.
- They fear massive unemployment, the spread of global crime, and further concentration of global wealth and power in the hands of the founders and leaders of a few large companies.
- They also speak about how the weaponization of social media platforms might create population-level stress, anxiety, depression and feelings of isolation.
In sum, the experts in this canvassing noted that humans’ choices to use technologies for good or ill will change the world significantly.
These predictions emerged from a canvassing of technology innovators, developers, business and policy leaders, researchers and academics by Pew Research Center and Elon University’s Imagining the Internet Center. Some 305 responded to this query:
As you look ahead to the year 2035, what are the BEST AND MOST BENEFICIAL changes that are likely to occur by then in digital technology and humans’ use of digital systems? … What are the MOST HARMFUL OR MENACING changes likely to occur?
Many of these experts wrote long, detailed assessments describing potential opportunities and threats they see to be most likely. The full question prompt specifically encouraged them to share their thoughts about both kinds of impacts – positive and negative. And our question invited them to think about the benefits and costs of five specific domains of life:
- Human-centered development of digital tools and systems
- Human rights
- Human knowledge
- Human health and well-being
- Human connections, governance and institutions
They were also asked to indicate how they feel about the changes they foresee.
- 42% of these experts said they are equally excited and concerned about the changes in the “humans-plus-tech” evolution they expect to see by 2035.
- 37% said they are more concerned than excited about the changes they expect.
- 18% said they are more excited than concerned about expected change.
- 2% said they are neither excited nor concerned.
- 2% said they don’t think there will be much real change by 2035.
The most harmful or menacing changes in digital life that are likely by 2035
Some 79% of the canvassed experts said they are either more concerned than excited about coming technological change or equally concerned and excited. These respondents spoke of their fears in the following categories:
The future harms to human-centered development of digital tools and systems
The experts who addressed this fear wrote about their concern that digital systems will continue to be driven by profit incentives in economics and power incentives in politics. They said this is likely to lead to data collection aimed at controlling people rather than empowering them to act freely, share ideas and protest injuries and injustices. These experts worry that ethical design will continue to be an afterthought and digital systems will continue to be released before being thoroughly tested. They believe the impact of all of this is likely to increase inequality and compromise democratic systems.
The future harms to human rights
These experts fear new threats to rights will arise as privacy becomes harder, if not impossible, to maintain. They cite surveillance advances, sophisticated bots embedded in civic spaces, the spread of deepfakes and disinformation, advanced facial recognition systems, and widening social and digital divides as looming threats. They foresee crimes and harassment spreading more widely, and the rise of new challenges to humans’ agency and security. A topmost concern is the expectation that increasingly sophisticated AI is likely to lead to the loss of jobs, resulting in a rise in poverty and the diminishment of human dignity.
The future harms to human knowledge
These experts fear that the best of knowledge will be lost or neglected in a sea of mis- and disinformation, that the institutions previously dedicated to informing the public will be further decimated, and that basic facts will be drowned out by entertaining distractions, bald-faced lies and targeted manipulation. They worry that people’s cognitive skills will decline. In addition, they argued that “reality itself is under siege” as emerging digital tools convincingly create deceptive or alternate realities. They worry that a class of “doubters” will hold back progress.
The future harms to human health and well-being
A share of these experts said humanity’s embrace of digital systems has already spurred high levels of anxiety and depression and predicted things could worsen as technology embeds itself further in people’s lives and social arrangements. Some of the mental and physical problems could stem from tech-abetted loneliness and social isolation; some could come from people substituting tech-based “experiences” for real-life encounters; some could come from job displacements and related social strife; and some could come directly from tech-based attacks.
The future harms to human connections, governance and institutions
The experts who addressed these issues fear that norms, standards and regulation around technology will not evolve quickly enough to improve the social and political interactions of individuals and organizations. Two overarching concerns: a trend toward autonomous weapons and cyberwarfare, and the prospect of runaway digital systems. They also said things could worsen as the pace of tech change accelerates. They expect that people’s distrust in each other may grow and their faith in institutions may deteriorate. This, in turn, could deepen already undesirable levels of polarization, cognitive dissonance and public withdrawal from vital discourse. They fear, too, that digital systems will be too big and important to avoid, and all users will be captives.
The best and most beneficial changes in digital life likely by 2035
Some 18% of the canvassed experts said they are more excited than concerned about coming technological change and 42% said they are equally excited and concerned. They shared their hopes related to the following themes:
The future benefits to human-centered development of digital tools and systems
These experts covered a wide range of likely digital enhancements in medicine, health, fitness and nutrition; access to information and expert recommendations; education in both formal and informal settings; entertainment; transportation and energy; and other spaces. They believe that digital and physical systems will continue to integrate, bringing “smartness” to all manner of objects and organizations, and expect that individuals will have personal digital assistants that ease their daily lives.
The future benefits to human rights
These experts believe digital tools can be shaped in ways that allow people to freely speak up for their rights and join others to mobilize for the change they seek. They hope ongoing advances in digital tools and systems will improve people’s access to resources, help them communicate and learn more effectively, and give them access to data in ways that will help them live better, safer lives. They urged that human rights must be supported and upheld as the internet spreads to the farthest corners of the world.
The future benefits to human knowledge
These respondents hope for innovations in business models; in local, national and global standards and regulation; and in societal norms. They wish for improved digital literacy that will revive and elevate trusted news and information sources in ways that attract attention and gain the public’s interest. And they hope that new digital tools and human and technological systems will be designed to assure that factual information will be appropriately verified, highly findable, well-updated and archived.
The future benefits to human health and well-being
These experts expect that the many positives of digital evolution will bring a health care revolution that enhances every aspect of human health and well-being. They emphasize that full health equality in the future will require equal attention to the needs of all people, while also prioritizing individual agency, safety, mental health, privacy and data rights.
The future benefits to human connections, governance and institutions
Hopeful experts said society is capable of adopting new digital standards and regulations that will promote pro-social digital activities and minimize antisocial activities. They predict that people will develop new norms for digital life and foresee them becoming more digitally literate in social and political interactions. They said in the best-case scenario, these changes could influence digital life toward promoting human agency, security, privacy and data protection.
Experts’ overall expectations for the best and worst in digital change by 2035, in their own words
Many of the respondents quite succinctly outlined their expectations for the best and worst in digital change by 2035. Here are some of those comments. (The remarks made by the respondents to this canvassing reflect their personal positions and are not the positions of their employers. The descriptions of their leadership roles help identify their background and the locus of their expertise. Some responses are lightly edited for style and readability.)
Aymar Jean Christian, associate professor of communication studies at Northwestern University and adviser to the Center for Critical Race Digital Studies:
“Decentralization is a promising trend in platform distribution. Web 2.0 companies grew powerful by creating centralized platforms and amassing large amounts of social data. The next phase of the web promises more user ownership and control over how our data, social interactions and cultural productions are distributed. The decentralization of intellectual property and its distribution could provide opportunities for communities that have historically lacked access to capitalizing on their ideas. Already, users and grassroots organizations are experimenting with new decentralized governance models, innovating in the long-standing hierarchical corporate structure.
“However, the automation of story creation and distribution through artificial intelligence poses pronounced labor equality issues as corporations seek cost-benefits for creative content and content moderation on platforms. These AI systems have been trained on the un- or under-compensated labor of artists, journalists and everyday people, many of them underpaid labor outsourced by U.S.-based companies. These sources may not be representative of global culture or hold the ideals of equality and justice. Their automation poses severe risks for U.S. and global culture and politics. As the web evolves, there remain big questions as to whether equity is possible or if venture capital and the wealthy will buy up all digital intellectual property. Conglomeration among firms often leads to market manipulation, labor inequality and cultural representations that do not reflect changing demographics and attitudes. And there are also climate implications for many new technological developments, particularly concerning the use of energy and other material natural resources.”
Mary Chayko, sociologist, author of “Superconnected” and professor of communication and information at Rutgers University:
“As communication technology advances into 2035 it will allow people to learn from one another in ever more diverse, multifaceted, widely distributed social networks. We will be able to grow healthier, happier, more knowledgeable and more connected as we create and traverse these networked pathways together. The development of digital systems that are credible, secure, low-cost and user-friendly will inspire all kinds of innovations and job opportunities. If we have these types of networks and use them to their fullest advantage, we will have the means and the tools to shape the kind of society we want to live in. Unfortunately, the commodification of human thought and experience online will accelerate as we approach 2035. Technology is already used not only to harvest, appropriate and sell our data, but also to manufacture and market data that simulates the human experience, as with applications of artificial intelligence. This has the potential to degrade and diminish the specialness of being human, even as it makes some humans very rich. The extent and verisimilitude of these practices will certainly increase as technology permits the replication of human thought and likeness in ever more realistic ways. But it is human beings who design, develop, unleash, interpret and use these technological tools and systems. We can choose to center the humanity of these systems and to support those who do so, and we must.”
Sean McGregor, founder of the Responsible AI Collaborative:
“By 2035, technology will have developed a window into many inequities of life, thereby empowering individuals to advocate for greater access to and authority over decision-making currently entrusted to people with inscrutable agendas and biases. The power of the individual will expand with communication, artistic and educational capacities not known throughout previous human history. However, if trends remain as they are now, people, organizations and governments interested in accumulating power and wealth over the broader public interest will apply these technologies toward increasingly repressive and extractive aims. It is vital that there be a concerted, coordinated and calm effort to globally empower humans in the governance of artificial intelligence systems. This is required to avoid the worst possibilities of complex socio-technical systems. At present, we are woefully unprepared and show no signs of beginning collaborative efforts of the scale required to sufficiently address the problem.”
David Clark, Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory:
“To have an optimistic view of the future, you must imagine that several potential positives come to fruition to overcome big issues:
- “The currently rapid rate of change slows, helping us to catch up.
- “The Internet becomes much more accessible and inclusive, and the numbers of the unserved or poorly served become a much smaller fraction of the population.
- “Over the next 10 years, the character of critical applications such as social media matures and stabilizes, and users become more sophisticated about navigating the risks and negatives.
- “Increasing digital literacy helps all users to better avoid the worst perils of the Internet experience.
- “A new generation of social media emerges, with less focus on user profiling to sell ads, less emphasis on unrestrained virality and more of a focus on user-driven exploration and interconnection.
- “And the best thing that could happen is that application providers move away from the advertising-based revenue model and establish an expectation that users actually pay. This would remove many of the distorting incentives that plague the ‘free’ Internet experience today. Consumers today already pay for content (movies, sports and games, in-game purchases and the like). It is not necessary that the troublesome advertising-based financial model should dominate.”
Laurie L. Putnam, educator and communications consultant:
“There is great potential for digital technologies to improve health and medical care. Out of necessity, digital health care will become a norm. Remote diagnostics and monitoring will be especially valuable for aging and rural populations that find it difficult to travel. Connected technologies will make it easier for specialized medical personnel to work together from across the country and around the world. Medical researchers will benefit from advances in digital data, tools and connections, collaborating in ways never before possible.
“However, many digital technologies are taking more than they give. And what we are giving up is difficult, if not impossible, to get back. Today’s digital spaces, populated by the personal data of people in the real world, are lightly regulated and freely exploited. Technologies like generative AI and cryptocurrency are costing us more in raw energy than they are returning in human benefit. Our digital lives are generating profit and power for people at the top of the pyramid without careful consideration of the shadows they cast below, shadows that could darken our collective future. If we want to see different outcomes in the coming years, we will need to rethink our ROI [return on investment] calculations and apply broader, longer-term definitions of ‘return.’ We are beginning to see more companies heading in this direction, led by people who aren’t prepared to sacrifice entire societies for shareholders’ profits, but these are not yet the most-powerful forces. Power must shift and priorities must change.”
Experts’ views of potential harmful changes
Here is a small selection of responses that touch on the themes related to menaces and harms that could happen between now and 2035.
Herb Lin, senior research scholar for cyber policy and security at Stanford University’s Center for International Security and Cooperation:
“My best hope is that human wisdom and willingness to act will not lag so much that they are unable to respond effectively to the worst of the new challenges accompanying innovation in digital life. The worst likely outcome is that humans will develop too much trust and faith in the utility of the applications of digital life and become ever more confused between what they want and what they need. The result will be that societal actors with greater power than others will use the new applications to increase these power differentials for their own advantage. The most beneficial change in digital life might simply be that things don’t get much worse than they are now with respect to pollution in and corruption of the information environment. Applications such as ChatGPT will get better without question, but the ability of humans to use such applications wisely will lag.”
A computer and data scientist at a major U.S. university whose work involves artificial neural networks:
“The following potential harmful outcomes are possible if trendlines continue as they have been to this point:
- “We accidentally incentivize powerful general-purpose AI systems to seek resources and influence without first making sufficient progress on alignment, eventually leading to the permanent disempowerment of human institutions.
- “Short of that, misuse of similarly powerful general-purpose technologies leads to extremely effective political surveillance and substantially improved political persuasion, allowing wealthy totalitarian states to end any meaningful internal pressure toward change.
- “The continued automation of software engineering leads large capital-rich tech companies to take on an even more extreme ratio of money and power to number of employees, making it easier for them to move across borders and making it even harder to meaningfully regulate them.”
Erhardt Graeff, a researcher at Olin College of Engineering who is expert in the design and use of technology for civic and political engagement:
“I worry that humanity will largely accept the hyper-individualism and social and moral distance made possible by digital technology and assume that this is how society should function. I worry that our social and political divisions will grow wider if we continue to invest ourselves personally and institutionally in the false efficiencies and false democracies of Twitter-like social media.”
Ayden Férdeline, Landecker Democracy Fellow at Humanity in Action:
“There are organizations today that profit from being perceived as ‘merchants of truth.’ The judicial system is based on the idea that the truth can be established through an impartial and fair hearing of evidence and arguments. Historically, we have trusted those actors and their expertise in verifying information. As we transition to building trust into digital media files through techniques like authentication-at-source and blockchain ledgers that provide an audit trail of how a file has been altered over time, there may be attempts to use regulation to limit how we can cryptographically establish the authenticity and provenance of digital media. More online regulation is inevitable given the importance of the Internet economically and socially and the likelihood that digital media will increasingly be used as evidence in legal proceedings. But will we get the regulation right? Will we regulate digital media in a way that builds trust, or will we create convoluted, expensive authentication techniques that increase the cost of justice?”
Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE:
“The concentration of ad revenue and the lack of a viable alternative source of income will further diminish the reach and capabilities of local news media in many countries, degrading the information ecosystem. This will increase polarization, facilitate government corruption and reduce citizen engagement.”
Robin Raskin, author, publisher and founder of the Virtual Events Group:
“Synthetic humans and robot friends may increase our social isolation. The demise of the office or a school campus as a gathering place will leave us hungry for human companionship and may cause us to lose our most-human skills: empathy and compassion. We become ‘man and his machine’ rather than ‘man and his society.’ The consumerization of AI will augment, if not replace, most of the white-collar jobs, including in traditional office work, advertising and marketing, writing and programming. Since work won’t be ‘a thing’ anymore, we’ll need to find some means of compensation for our contribution to humanity. Will it be based on how much we contribute to the web? A Universal Basic Income because we were the ones who taught AI to do our jobs? It remains to be seen, but the AI Revolution will be as huge as the Industrial Revolution.
“Higher education will face a crisis like never before. Exorbitant pricing and lack of parity with the real world make college seem quite antiquated. I’m wagering that 50% of higher education in the United States will be forced to close down. We will devise other systems of degrees and badges to prove competency. The most critical metaverse will be a digital twin of everything – cities, schools and factories, for example. These twins coupled with IoT [Internet of Things] devices will make it possible to create simulations, inferences and prototypes for knowing how to optimize for efficiency before ever building a single thing.”
Jim Fenton, a veteran leader in the Internet Engineering Task Force who has worked over the past 35 years at Altmode Networks, Neustar and Cisco Systems:
“I am particularly concerned about the increasing surveillance associated with digital content and tools. Unfortunately, there seems to be a counterincentive for governments to legislate for privacy, since they are often either the ones doing the surveilling, or they consume the information collected by others. As the public realizes more and more about the ways they are watched, it is likely to affect their behavior and mental state.”
A longtime director of research for a global futures project:
“Human rights will become an oxymoron. Censorship, social credit and around-the-clock surveillance will become ubiquitous worldwide; there will be nowhere to hide from global dictatorship. Human governance will fall into the hands of a few unelected dictators. Human knowledge will wane and there will be a growing idiocracy due to the public’s digital brainwashing and the snowballing of unreliable, misleading, false information. Science will be hijacked and only serve the interests of the dictator class. In this setting, human health and well-being is reserved for the privileged few; for the majority, it is completely unconsidered. Implanted chips constantly track the health of the general public, and when people become a social burden, their lives are terminated.”
Experts’ views of potential beneficial changes
Several main themes also emerged among these experts’ expectations for the best and most beneficial changes in digital life between 2023 and 2035. Here is a small selection of responses that touch on those themes.
Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI”:
“A human-centered approach to technology development is driven by deep understanding of human needs, which leads to design-thinking strategies that bring successful products and services. Human-centered user interface design guidelines, principles and theories will enable future designers to create astonishing applications that facilitate communication, improve well-being, promote business activities and much more. Building tools that give users superpowers is what brought users email, the web, search engines, digital cameras and mobile devices. Future superpowers could enable reduction of disinformation, greater security/privacy and improved social connectedness. This could be the Golden Age of Collaboration, with remarkable global projects such as developing a COVID-19 vaccine in 42 days. The future could be made brighter if similar efforts were devoted to fighting climate change, restoring the environment, reducing inequality and supporting the 17 UN Sustainable Development Goals. Equitable and universal access to technology could improve the lives of many, including those users with disabilities. The challenge will be to ensure human control, while increasing the level of automation.”
Rich Salz, principal engineer at Akamai Technologies:
“We will see a proliferation of AI systems to help with medical diagnosis and research. This may cover a wide range of applications, such as expert systems to detect breast cancer or perform other X-ray/imaging analysis; protein folding and similar problems, along with the discovery of new drugs; better analytics on drug and other testing; and limited initial consultation for diagnosis at medical visits. Similar improvements will be seen in many other fields, for instance, astronomical data-analysis tools.”
Deanna Zandt, writer, artist and award-winning technologist:
“I continue to be hopeful that new platforms and tech will find ways around the totalitarian capitalist systems we live in, allowing us to connect with each other on fundamentally human levels. My own first love of the internet was finding out that I wasn’t alone in how I felt or in the things I liked and finding community in those things. Even though many of those protocols and platforms have been co-opted in service of profit-making, developers continue to find brilliant paths of opening up human connection in surprising ways. I’m also hopeful the current trend of hyper-capitalistic tech driving people back to more fundamental forms of internet communication will continue. Email as a protocol has been around for how long? And it’s still, as much as we complain about its limitations, a main way we connect.”
Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, which studies algorithms that select and rank content:
“Among the developments we’ll see come along well are self-driving cars, which will reduce congestion, carbon emissions and road accidents. Automated drug discovery will revolutionize the use of pharmaceuticals. This will be particularly beneficial where speed or diversity of development is crucial, as in cancer, rare diseases and antibiotic resistance. We will start to see platforms for political news, debate and decision-making that are designed to bring out the best of us, through sophisticated combinations of human and automated moderation. AI assistants will be able to write sophisticated, well-cited research briefs on any topic. Essentially, most people will have access to instant specialist literature reviews.”
Kay Stanney, CEO and founder of Design Interactive:
“Human-centered development of digital tools can profoundly impact the way we work and learn. Specifically, by coupling digital phenotypes (i.e., real-time, moment-by-moment quantification of the individual-level human phenotype, in situ, using data from personal digital devices, in particular smartphones) with digital twins (i.e., digital representation of an intended or actual real-world physical product, system or process), it will be possible to optimize both human and system performance and well-being. Through this symbiosis, interactions between humans and systems can be adapted in real time to ensure the system gets what it needs (e.g., predicted maintenance) and the human gets what they need (e.g., guided stress-reducing mechanisms), thereby realizing truly transformational gains in the enterprise.”
Juan Carlos Mora Montero, coordinator of postgraduate studies in planning at the Universidad Nacional de Costa Rica:
“The greatest benefit related to the digital world is that technology will allow people to have access to equal opportunities both in the world of work and in culture, allowing them to discover other places, travel, study, share and enjoy spending time in real-life experiences.”
Gus Hosein, executive director of Privacy International:
“Direct human connections will continue to grow over the next decade-plus, with more local community-building and not as many global or regional or national divisions. People will have more time and a more sophisticated appreciation for the benefits and limits of technology. While increased electrification will result in the ubiquity of digital technology, people will use it more seamlessly, not being ‘online’ or ‘offline.’ After a dark period of transition, a sensibility around human rights will emerge in places where human rights are currently protected and will find itself under greater protection in many more places, not necessarily under the umbrella term of ‘human rights.’”
Isaac Mao, Chinese technologist, data scientist and entrepreneur:
“Artificial Intelligence is poised to greatly improve human well-being by providing assistance in processing information and enhancing daily life. From digital assistants for the elderly to productivity tools for content creation and disinformation detection, to health and hygiene innovations such as AI-powered gadgets, AI technology is set to bring about unprecedented advancements in various aspects of our lives. These advances will not only improve our daily routines but also bring about a new level of convenience and efficiency not seen before. With the help of AI, even the most mundane tasks, such as brushing teeth or cutting hair, can be done with little to no effort or concern, dramatically changing routines we have struggled with for centuries.”
Michael Muller, a researcher for a top global technology company who is focused on human aspects of data science and ethics and values in applications of artificial intelligence:
“We will learn new ways in which humans and AIs can collaborate. Humans will remain the center of the situation. That doesn’t mean that they will always be in control, but they will always control when and how they delegate selected activities to one or more AIs.”
Terri Horton, work futurist at FuturePath:
“Digital and immersive technologies and artificial intelligence will continue to exponentially transform human connections and knowledge across the domains of work, entertainment and social engagement. By 2035, the transition of talent acquisition, onboarding, learning and development, performance management and immersive remote work experiences into the metaverse – enabled by Web3 technologies – will be normalized and optimized. Work, as we know it, will be absolutely transformed. If crafted and executed ethically, responsibly and through a human-centered lens, transitioning work into the metaverse can be beneficial to workers by virtue of increased flexibility, creativity and inclusion. Additionally, by 2035, generative artificial intelligence (GAI) will be fully integrated across the employee experience to enhance and direct knowledge acquisition, decision-making, personalized learning, performance development, engagement and retention.”
Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet:
“I hope to see the rise of the systematic organization of citizen education on digital literacy with a strong focus on information literacy. This should start in the earliest years and carry forward through life. I hope to see the prioritization of the ethics component (including bias evaluation) in the assessment of any digital system. I hope to see the emergence of innovative business models for digital systems that are NOT based on advertising revenue, and I hope that we will find a way to give credit to the real value of information.”
Guide to the Report
- Overarching views on digital change: In Chapter 1, we highlight the remarks of experts who gave some of the most wide-ranging yet incisive responses to our request that they discuss the most beneficial and the most harmful changes likely in digital life by 2035.
- Expert essays on the impact of digital change: Following that in Chapter 2, we offer a set of longer, broader essays written by leading expert participants.
- Key themes: That is followed by additional sections covering respondents’ comments, organized under the sets of themes about harms and benefits.
- Closing thoughts on ChatGPT: And a final chapter covers some summary statements about ChatGPT and other trends in digital life.