Experts who doubt that significant improvement will be made in the digital democratic sphere anytime soon say the key factor underlying today’s concerning challenges of online discourse is the way people, with their varied and complicated motivations and behaviors, use and abuse the digital spaces built for them. Those who think the situation is unlikely to change say it is because “humans will be human.” They say digital networks and tools will continue to amplify human frailties and magnify malign human intent.

Two quotes frame this theme:

Alejandro Pisanty, professor of internet and information society at the National Autonomous University of Mexico (UNAM), wrote, “Most major concerns for humanity’s future stem from deeply rooted human conduct, be it individual, corporate, criminal or governmental.”

A professor of psychology at a major U.S. technological university whose specialty is human-computer interaction said, “One can imagine a future in which digital life is more welcoming of diverse views, supportive to those in need, and wise. Then we can look at the nature of human beings, who have evolved to protect their own interests at the expense of the common good, who divide the world into ‘us’ and ‘them’ and justify their actions by self-deception and proselytizing. Nothing about the digital world provides a force toward the first vision. In fact, as now constituted – with no brakes on individual posts and virtually no effort by platforms to weed out evil-doers – all of the impetus is in the direction of unmitigated expression of the worst of human nature. So, I am direly pessimistic that the digital future is a benevolent one.”

These experts point out that current technology design exploits the very human characteristics that trigger humans’ most-troublesome online behaviors. Some expect this will worsen in the future due to anticipated advances in the hyper-surveillance of populations; datafication that turns people’s online activities into individualized insights about their behaviors; and predictive technology that can anticipate what people may do next. The experts noted that these characteristics of digital tech aid authoritarians, magnify mis/disinformation and enable psychological and emotional manipulation.

A number of respondents’ views about why it will be difficult to improve the digital public sphere by 2035 were included in earlier theme sections of this report. In this section we showcase scores of additional expert comments, organized under five themes: 1) Humans are self-centered and short-sighted, making them easy to manipulate; 2) The trends toward more datafication and surveillance of human activity are unstoppable; 3) Haters, polarizers and jerks will gain more power; 4) Humans can’t keep up with the speed and complexity of digital change; 5) Reform cannot arise because nation-states are weaponizing digital tools.

Humans are self-centered and short-sighted, making them easy to manipulate

Many respondents to this canvassing wrote about humans’ hard-wired “survival instinct” to protect themselves and meet personal goals. They noted that, in the hair-trigger global public sphere, these motivations have fostered divisiveness, in some cases to the point of genocide and violence against governments. When human dispositions and frailties can be manipulated in a digitally networked world, the danger is intensified. And, these experts note, this explosive environment can worsen when those in digital spaces can be surveilled.

“Toxicity is a human attribute, not an element inherent to digital life. Unless we design spaces to explicitly prohibit/penalize and curate against toxicity, we will not see an improvement.”


Zizi Papacharissi, professor of political science and professor and head of communication at the University of Illinois-Chicago

Zizi Papacharissi, professor of political science and professor and head of communication at the University of Illinois-Chicago, observed, “We enter these spaces with our baggage – there is no check-in counter online where we enter and get to leave that baggage behind. This baggage includes toxicity. Toxicity is a human attribute, not an element inherent to digital life. Unless we design spaces to explicitly prohibit/penalize and curate against toxicity, we will not see an improvement.”

Alexa Raad, chief purpose and policy officer at Human Security, wrote, “Fundamentally, the same aspects of human nature that have ruled our behavior for millennia will continue to dictate our behavior, albeit with new technology. For example, our need for affiliation and identity – coupled with our cognitive biases – has led to and will continue to breed tribalism and exacerbate divisions. Our limbic brains will continue to overrule rational thought and prudent action when confronted with emotional stimuli that generate fear.”

Paul Jones, emeritus professor of information science at University of North Carolina-Chapel Hill, said, “Authors Charles F. Briggs and Augustus Maverick wrote in their 1858 book ‘The Story of the Telegraph,’ ‘It is impossible that old prejudices and hostilities should longer exist while such an instrument has been created for the exchange of thought between all nations of the earth.’ The telegraph was supposed to be an instrument of peace, but the first broad use was to suppress anti-colonial rebellion in India. I’m not sure why we talk about digital spaces as if they were separate from, say, telephone spaces or shopping mall spaces or public park spaces. In many ways, the social performance of self in digital spaces is no different. Or is it? Certainly, anonymous behaviors when acted out in public spaces of any kind are more likely to be less constrained and less accountable. Digital spaces can and do act to accelerate and maintain cohesion and cooperation of real-world activities. We see how affinity groups support communitarian efforts – cancer and rare-disease support groups, Friends of the Library. We also are aware that not all affinity groups are formed to serve the same interests in service of democracy and society – see Oath Keepers for example.”

Art Brodsky, communications consultant and former vice president of communications for Public Knowledge, responded, “It’s unfortunate that the digital space has been so thoroughly polluted, but it’s also unlikely to change for one reason – people don’t change. We can ruin anything. Most new technologies started out with great promise to change society for the better. Remember what was being said when cable was introduced? There is a lot that’s good and useful in the digital space, but the bad drives out the good and causes more harm. Do we have to talk these days about Russian interference, the Big Lie of the election or the fact that people aren’t getting vaccinated against Covid? It’s not all the online space – cable contributed also. Technology will never keep up with all the garbage going in.”

Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, predicted, “While there may be significant changes in what will amount to niche sectors for the better, my strong sense is that the conditions and causes that underlie the multiple negative affordances and phenomena now so obvious and prevalent will not change substantially. This is … about human selfhood and identity as culturally and socially shaped, coupled with the ongoing, all but colonizing dominance of the U.S.-based tech giants and their affiliates. Much of this rests on the largely unbridled capitalism favored and fostered by the United States.

“The U.S., for all of its best impulses and accomplishments, is increasingly shaped by social Darwinism, the belief that humans are greedy, self-interested atomistic individuals thereby caught up in the Hobbesian war of each against all, ruthless competition as ‘natural’ – and that all of this is somehow a good thing as it allegedly generates greater economic surplus, however unequally distributed it is (as a ‘natural result’ of competition).

“All of this got encoded into law, starting in early 1970s regulation of networking and computer-mediated communication industries as ‘carriers’ instead of ‘content-providers’ (i.e., newspapers, radio and TV) regulated vis-à-vis rights to freedom of expression as importantly limited with a view toward what contributes to fruitful democratic debate, procedures and norms.”

Daniel S. Schiff, a Ph.D. student at Georgia Tech’s School of Public Policy, where he is researching artificial intelligence and the social implications of technology, commented, “I would not expect the quality of public discourse to improve dramatically on average. While companies may have some incentives to remediate the worst offenses (violent speech), my concern is that human nature and emergent behavior will continue to lead to activities like bullying, uncharitable treatment of others and the formation of out-groups. I find it unlikely that more positive, pluralistic and civil platforms will be able to outcompete traditional digital spaces financially and in terms of audience desire. Given that regulation is unlikely to impose such dramatic changes and that users are unlikely to go elsewhere, I suspect there are not sufficient incentives for the leading firms to transform themselves beyond, for example, protections to privacy and efforts to combat misinformation.

“Overall, while some of the worst growing pains of digital spaces may be remediated in part, we can still expect outcomes like hostility, polarization and poor mental health. Progress then, may be modest, and limited to areas like privacy rights and combating misinformation and hate speech – still tremendously important advances. Further, my skepticism about broader progress is not meant to rule out the tremendous benefits of digital spaces for connection, education, work and so on. But it stretches my credulity, in light of human nature and individual and corporate incentives, to believe that the kind of transformations that could deeply change the tenor of digital life are likely to prevail in the near future.”

Marc Brenman, managing partner of IDARE LLC, observed, “Human nature is unlikely to change. There is little that is entrenched in technology that will not change much. The interaction of the two will continue to become more problematic. Technology enables errors to be made very quickly, and the errors, once made, are largely irretrievable. Instead, they perpetuate, extend and reproduce themselves. Autonomy becomes the possession of machines and not people. Responsibility belongs to no one. Random errors creep in.

“We, as humans, must adjust ourselves to machines. Recently I bought a new car with state-of-the-art features. These include lane-keeping, and I have been tempted to take my hands off the steering wheel for long periods. This, combined with cruise control and distance regulation, comes close to self-driving. I am tempted to surrender my will to the machine and its sensors and millions of lines of code. The safety features of the car may save my life, but is it worth saving? Similarly, the technology of gene-splicing enables the creation of mRNA vaccines, but some people refuse to take them. We legally respect this ‘Thanatos,’ as we legally respect another technology: guns.”

Neil Richards, professor of law at Washington University in St. Louis and one of the country’s foremost academic experts on privacy law, wrote, “Right now, I’m pretty pessimistic about the ability of venture capital-driven tech companies to better humanity when our politics have two Americas at each other’s throats and there is massive wealth inequality complicated by centuries of racism. I’m confident over the long term, but the medium term promises to be messy. In particular, our undemocratic political system (political gerrymandering, voting restrictions and the absurdity of the Senate, where California has the same power as Wyoming and a dozen other states with a fraction of its population), tone-deaf tech company leaders and viral misinformation mean we’re likely to make lots of bad decisions before things get better. We’re human beings. The history of technological advancements makes pretty clear that transformative technological changes create winners and losers, and that even when the net change is for the better, there are no guarantees, and, in the short term, things can get pretty bad. In addition, you have to look at contexts much broader than just technology.”

“I’m pretty pessimistic about the ability of venture capital-driven tech companies to better humanity when our politics have two Americas at each other’s throats and there is massive wealth inequality complicated by centuries of racism.”


Neil Richards, professor of law at Washington University in St. Louis and one of the country’s foremost academic experts on privacy law

Randall Gellens, director at Core Technology Consulting, said, “We have ample evidence that significant numbers of humans are inherently susceptible to demagogues and sociopaths. Better education, especially honest teaching of history and effective critical-thinking skills, could mitigate this to some degree, but those who benefit from this will fight such education efforts, as they have, and I don’t see how modern, pluralistic societies can summon the political courage to overcome this. I see digital communications turbocharging those aspects of social interaction and human nature that are exploited by those who seek power and financial gain, such as groupthink, longing for simplicity and certainty, and wanting to be part of something big and important. Digital media enhances the environment of immersion and belonging that, for example, cults use to entrap followers. Digital communications, even such primitive tools as Usenet groups and mailing lists, lower social inhibitions to bad behavior. The concept of trolling, for example, in which people, as individuals or as part of a group, indulge in purely negative behavior, arose with early digital communications. It may be the lack of face-to-face, in-person stimuli or other factors, but the effect is very real. During the pandemic shutdown of in-person activities, digital replacements were often targeted for attack and harassment. For example, some school classes, city council meetings, addiction and mental health support groups were flooded with hate speech and pornography. Access controls can help in some cases (e.g., school classes) but are inimical in many others (e.g., city council meetings, support groups).

“Throughout history and in current years, dictators have shown how to use democracy against itself. Exploiting inherent human traits, they get elected and then consolidate their power and neutralize institutions and opposition, leaving the facade of a functioning democracy. Digital communications enhance the effectiveness of the mechanisms and tools long used for this. It’s hard to see how profit-driven companies can be incentivized to counter these forces.”

An expert in how psychology, society and biology influence human decision-making commented, “People are people; tech might change the modality of communication, but people drive the content/usage, not the reverse.”

An expert at helping developing countries to strategically implement ICT solutions wrote, “Technologies continue to amplify human intention and behaviour. As long as people are not aware of this, the digital space will not be a safe place to be. People with power will continue to misuse it. The digital divides between north and south, women and men, rich and poor, will not be closed because digitalisation exacerbates polarisation.”

Eileen Rudden, co-founder of LearnLaunch, responded, “In the mid-1990s, during the birth of the internet, we rejoiced in the internet’s possibility to enable new voices to be heard. That possibility has been realized, but the bad of human nature as well as the good has been given a broader platform. Witness how varied the media brands are, from Breitbart to The New York Times. The root cause is that we social human beings are structured to be interested in difference and changes. Tech social spaces amplify the good and the bad of human nature. An issue I expect to see remain unsolved by 2035 is bad actors exploiting the slowness of the public’s responses to emerging challenges online.”

An educator based in North America predicted, “Seems like there will be less discourse and more censorship, mass hysteria, group-think, bullying and oppression in 2035.”

An anonymous respondent said, “The lack of a single shared physical space in which real people must work toward coming to a mutual understanding and the reduced need for more than a few humans to be in agreement to coordinate the activity of millions has reduced the countervailing forces that previously led cults to remain isolated or to fade over time. The regular historical difficulties that have often resulted from such communication trends in the past and in the present (but to date only in isolated regions, not globally) include the suppression of and destruction of science, of histories, of news, and the creation and enshrining of artificial histories as the only allowed narrative. It also leads to a glorification of the destruction of people, art, architecture and many of the real events of human civilization. Today’s public platforms have almost all been designed in a way that allows for the fast, creative generation of fake accounts. The use of these platforms’ automated tools for discussion and interaction is the dominant way to be seen and heard, and the dominant way to be perceived as popular and seek approval or agreement from others. As a result, forged social proof has become the most common form of social proof. Second-order effects convert this into ‘real’ social proof, erasing the record of the first. This is allowing cult-forming techniques that were once only well understood in isolation to become mainstream.”

A North American strategy consultant wrote, “There will always be spin-offs of the Big Lie. Negativity wins over truth, especially when the volume is loud. Plus, there’s far too much money involved here for the internet companies to play ball.”

An educator who has been active in the Second Life community responded, “Human egos, nature and cognitive dissonance will continue to prevail. Political, marketing and evangelistic agendas will continue to prevail.”

An American author, journalist and professor said, “Attention-seeking behavior won’t change, nor will Skinnerian attention rewards for extreme views. It’s possible that algorithms will become better at not sending people to train wreck/extreme content. It is also possible that legislation will change the relationship between the social media sites and the content they serve up.”

A professor of informatics based in Athens, Greece, predicted, “There will not be significant improvement by 2035 due to greed, lack of regulation, money in politics and corruption.”

A futurist and cybercrime expert responded, “The worst aspects of human nature, its faults, flaws and biases are amplified beyond belief by today’s tech and the anticipated technologies still to come. There is always a subset of people ‘hoping’ for humans’ kindness and decency to prevail. That’s a nice idea but not usually the smart way to bet.”

A business professor researching smart cities and artificial intelligence wrote, “I am very fearful about the impact of AI [artificial intelligence] on digital spaces. While AI has been around for a while, it is only in the last decade that, through its deployment in social media, we have started to see its impact on, inter alia, human nature (for those who have access to smart technology, it has become an addiction), discourse (echo chambers have never been more entrenched), and consent/agency (do I really hold a certain belief, or have I been nudged toward it?). Yes, I do think that there are ways to move our societal trajectory toward a more optimistic future. These include meaningful and impactful regulation; more pervasive ethical training for anybody involved in creating, commercializing or using ‘smart’ technologies; greater educational efforts toward equipping students of all ages with critical-thinking tools; and less capture by bitter and divisive political interests.”

An analytics director for a social media strategies consultancy commented, “I don’t think digital spaces and digital life have the capacity to experience a substantial net improvement until we change how we operate as a society. The technology might change, but – time after time – humans seem to prove that we don’t change. The ‘net’ amount of change in digital spaces and digital life will not be substantially better. Certainly, there will be some positive change, as there is with most technological developments. I can’t say what those changes will be, but there will be improvements for some. However, there’s always the other side of the coin, and there will certainly be people, organizations, institutions, etc., that have a negative impact on digital spaces/life.”

People are not capable of coming together to solve problems like these

A share of experts were fairly confident that people will simply not be able to come together to design effective approaches to public digital spaces. Reaching consensus isn’t easy, they say, since everyone has different motivations; people will not organize themselves effectively enough to truly make a difference.

Leah Lievrouw, professor of information studies at the University of California-Los Angeles, argued, “Despite growing public concern and dismay about the climate of and risks of online communication and information sources, no coherent agenda for addressing the problems seems to have yet emerged, given the tension between an appropriate reluctance to let governments (with wildly different values and many with a penchant for authoritarianism) set the rules for online expression and exchange, and the laddish, extractive ‘don’t blame us, we saw our chances and took ’em’ attitude that still prevails among most tech industry leadership.

“It’s not clear to me where the new, responsible, really compelling model for ‘digital spaces’ is going to come from. If the pervasive privatization, ‘walled garden’ business models and network externalities that allowed the major tech firms to dominate their respective sectors – search, commerce, content/entertainment, interpersonal relations and networks – continue to prevail, things will not improve, as big players continue to oppose meaningful governance and choke off any possible competition that might challenge their incumbency.”

Clifford Lynch, director of the Coalition for Networked Information, commented, “The digital public sphere has become the target of all kinds of entities that want to shape opinion and disseminate propaganda, misinformation and disinformation. It has become an attack vector in which to stage assaults on our society and to promote extremism and polarization. Digital spaces in the public sphere where large numbers of sometimes anonymous or pseudonymous entities can interact with the general public have become full of all of the worst sort of human behavior: bullying, shaming, picking fights, insults, trolling – all made worse by the fact that it’s happening in public as part of a performance to attract attention, influence and build audience. I don’t expect that the human behavior aspects of this are likely to change soon; at best we’ll see continued adjustments in the platforms to try to reduce the worst excesses.

“Right now, there’s a lot of focus on these issues within the digital public sphere and discussions on how to protect it from those bad actors. It is unclear how successful these efforts might be. I am extremely skeptical they’ve been genuinely effective to this point. One thing that is really clear is that we have no idea of how to do content moderation at the necessary scale, or whether it’s even possible. Perhaps in the next 5 to 10 years we’ll figure this out, which would lead to some significant improvements, but keep in mind that a lot of content moderation is about setting norms, which implies some kind of consensus. There is, as well, the very difficult question of deciding what content conforms to those norms.”

“One thing that is really clear is that we have no idea of how to do content moderation at the necessary scale, or whether it’s even possible.”

Clifford Lynch, director of the Coalition for Networked Information

Ian O’Byrne, an assistant professor of Literacy Education at the College of Charleston, said, “We do not fully understand the forces that impact our digital lives or the data that is collected and aggregated about us. As a result, individuals use these texts, tools and spaces without fully understanding or questioning the decisions made or being made therein. The end result is a populace that does not possess or chooses not to employ the basic skills and responsibilities needed to engage in digital spaces. In the end, most users will continue to choose to surrender to these digital, social spaces and all of their positive and negative affordances. There will be a small subset that chooses to educate themselves and use digital tools in a way that they believe will safely allow them to connect while obfuscating their identity and related metadata. Tech leaders and politicians view the data collection and opportunities to influence or mislead citizens as a valuable commodity.

“Digital spaces provide a way to connect and unite communities from a variety of ideological strains. Online social spaces also provide an opportunity to fine-tune propaganda to sway the population in specific contexts. As we study human development and awareness, this intersects with ontology and epistemology. When technologies advance, humans are forced to reconcile their existing understandings of the world with the moral and practical implications said technologies can (or should) have in their lives. Post-Patriot Act era – and in light of Edward Snowden’s National Security Agency whistleblowing – this also begets a need to understand the role of web literacies as a means of empowering or restricting the livelihood of others. Clashes over privacy, security and identity can have a chilling impact on individual willingness to share, create and connect using open, digital tools, and we need to consider how our recommendations for the future are inevitably shaped by worries and celebrations of the moment.”

A professor of political science based in the U.S. observed, “The only way things might change for the better is if there is a wholesale restructuring of the digital space – not likely. The majority of digital spaces are serving private economic and propaganda needs, not the public good. There is no discernible will on the part of regulators, governmental entities or private enterprise to turn these spaces to the public good. News organizations are losing their impact, there is no place for shared information/facts to reach a wide audience. Hackers and criminal interests are threatening economic and national security and the protection of citizens.”

An associate dean for research in computer science and engineering commented, “I am very worried there will not be much improvement in digital spaces due to the combination of social division, encouragement of that social division by any and all nondemocratic nations, the profit focus of business interests, individuals protecting their own interests and the lack of a clearly invested advocate for the common good. Highly interested and highly motivated forces tend to always win over the common good because the concept of what constitutes the common good is so diffuse among people. There may be ways things could improve. I see promise in local digital spaces in connecting neighbors. But I have yet to see much success in connecting them across the political spectrum. I see potential for better-identifying falsehoods and inflammatory content. But I don’t see a national (or global) consensus or a structure for actually enforcing social good over profits and selfish/destructive interests.”

An internet pioneer predicted, “Our societal descent into truth decay – which threatens the world like no other ill – will not be solved by digital savants, some different form of internet governance, nor new laws/regulations/antitrust actions. Truth decay is first a symptom; its seeds were planted long ago in jarring market transitions across the economy, in employment, in political action and rhetoric. The internet – an intellectual buffet that begins and ends with dessert – has accelerated and amplified the descent, but cannot be reshaped to stop it, let alone reverse it.”

A principal architect for a major global technology company responded, “I wish I could have more techno-optimism, because we in tech keep thinking up creative improvements of the users’ options in digital spaces, allowing for better control over one’s data. SOLID, the work on decentralizing social applications by Tim Berners-Lee, is an interesting project in this realm. And we are working toward better algorithm equity and safety (there are multiple efforts in this area). But at the broad level, the digital space being bad for people isn’t sufficiently addressed by such improvements. Our complex tech ideas might not even be necessary if the companies operating the digital spaces committed to and invested in civic governance. For the companies to do so, and for it to be a consensual approach with the users, requires them to change their values for real. They would have to commit to improving the product quality of experience because it’s worth investing in for the long term even if it lowers the growth rate of the company.

“Companies have spent decades not investing in defenses from security attacks, and even now investments in that are often driven by regulations rather than sincere valuation of security as a deliverable. That’s one reason the security space continues to be hellish and damaging. That’s an analogy, in my opinion, to explain why there are likely to only be ineffective and incremental technical and governance measures for digital spaces. There may be a combination of good effort by regulatory push and some big tech pull, but it would be nothing like enough to significantly change the digital-space world.”

A UK-based expert on well-being in the digital age observed, “Social media speaks to our darkest needs: for games, for validation and for the hit of dopamine. This isn’t discerning. In 2035 there will still be people who abuse online spaces, finding ways to do so beyond the controls. Too often we focus on helping the child, helping the bully and not on kicking those who exercise certain behaviours off social media altogether.”

A director of strategic relationships and standards for a global technology company noted, “Digital spaces and digital life have dramatically reduced civility and kindness in the world. I honestly don’t know how to fix this. My hope is that we will continue to talk about this and promote a desire to want to fix it. I worry that a majority won’t want to fix it because it is not in their interest. There are two driving reasons for the incivility. 1) In the U.S., First Amendment rights are in conflict with promoting civility and mitigating attempts to control cruelty and facts. 2) A natural consequence of digital spaces is a lack of physical contact which, by definition, facilitates cruelty without penalty.”

A computer science and engineering professor at a major U.S. technological university said, “Things will not and are not changing significantly, just moving from one platform (newspaper, radio) to another (internet). Past human history indicates that politics creates hot emotions, wild claims and nasty attacks, whatever the platform. Attempts to curtail expression by legislation can sometimes have a useful dampening effect, but most are not widely supported because they infringe on free speech.”

A professor of computer science and data studies wrote, “The damage done by digital spaces seems irreparable. Society is fractured in regard to basic truths, so leaders cannot even make changes for the better because factions can’t agree on what ‘better’ means.”

A professor emerita of informatics and computing responded, “Most people have seen the impact of individualistic efficacy on the internet and are likely to be resistant to government attempts to regulate content in ways that control individuals. We have seen so much affective polarization in recent years in this country and around the world that it will be difficult to roll that back through policies. As for technological changes that might effect change, I don’t have a crystal ball to tell me how those might interact with governments and citizens. We have also witnessed the rise of online hate groups that have wielded power and will also resist being controlled.”

A professor of internet studies observed, “The internet’s architecture will always allow end-runs around whatever safeguards are put in place. There is not enough regulation in place to deal with the misinformation and echo chambers, but I doubt there will ever be enough regulation.”

The trends toward more datafication and surveillance of human activity are unstoppable

A number of respondents focused on the growth of increasingly pervasive and effective surveillance technologies – the bread-and-butter business model of online platforms and most digital capitalism – and said they expect that upgrades in them will worsen things. They said monitoring of users and “datafying” people’s activities for profit are nearly inescapable and extremely susceptible to abuse. This underlies widening societal divisions in democracies in addition to furthering the goals of authoritarian governments, even, at times, to the point of facilitating genocide, according to these experts. They said digital spaces are often used for the types of psychographic manipulation that can cleave cultures, threaten democracy and stealthily stifle people’s agency, their free will.

Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, commented, “Currently, our entire social media environment is deliberately engineered throughout to promote fear, hatred, division, personal attacks, etc., and to discourage thought, nuance, compromise, forgiveness, etc. And here I don’t mean the current moral panic over ‘algorithms,’ which, contrary to hype, I would say are a relatively minor aspect of the structural issues. Rather, the problem is ‘business models.’ Fundamentally, the simplest path of status-seeking in one’s tribe is treating opponents with sneering trashing, inflammatory mischaracterization or even outright lying. That’s quick and easy, while people who merely even take a little time to investigate and think about an issue will tend to find themselves drowned out by the outrage-mongering, or too late to even try to affect the mob reaction (or perhaps risking attack themselves as disloyal).

“These aren’t original, or even particularly novel observations. But they do imply that the problems have no simple technical fix in terms of promoting good information over bad or banning individual malefactors. Instead, there has to be an entire system of rewarding the creation of good information and not bad. And I’m well aware that’s easier said than done. This is a massive philosophical problem. But if one believes there is a distinction between the ‘public interest’ (truth) versus ‘what interests the public’ (popularity), having more of the former rather than the latter is not ever going to be accomplished by getting together the loudest screamers and putting advertising in the pauses of the screaming.

“I want to stress how much the ‘algorithms’ critique here is mostly a diversion in my view. ‘If it bleeds, it leads’ is a venerable media algorithm, not just recently invented. There has been a decades-long political project aimed at tearing down civic institutions that produce public goods and replacing them with privatized versions that optimize for profits for the owners. We can’t remedy the intrinsic failures by trying to suppress the worst and most obvious individual examples which arise out of systemic pathology. I should note even in the most dictatorial of countries, one can still find little islets of beauty – artists who have managed to find a niche, scientists doing amazing work, intellectuals who manage to speak out yet survive and so on. There’s a whole genre of these stories, praising the resilience of the human spirit in the face of adversity. But I’ve never found these tales as inspiring as others do, as they’re isolated cherry-picking in an overall hellscape.”

Ellery Biddle, projects director at Ranking Digital Rights, wrote, “I am encouraged by the degree to which policymakers and influential voices in academia and civil society have woken up to the inequities and harms that exist in digital space. But the overwhelming feeling as I look ahead is one of dread. There are three major things that worry me:

  1. Digital space has been colonized (see Ulises Mejias and Nick Couldry’s definition of data colonialism) by a handful of mega companies (Google, Facebook, Amazon) and a much broader industry of players that trade on people’s behavioral data. Despite some positive steps toward establishing data-protection regimes (mainly in the EU), this genie is out of the bottle now and the profits that this industry reaps may be too enormous for it to change course any time soon. This could happen someday, but not as soon as 2035.
  2. While the public is much more cognizant of the harms that major social media platforms can enable through algorithmic content moderation that can supercharge the spread of things like disinformation and hate speech online, the solutions to this problem are far from clear. Right now, three major regimes in the global south (Brazil, India and Nigeria) are considering legislation that would limit the degree to which companies can moderate their own content. Companies that want to stay competitive and continue collecting and profiting from user data will comply, and this may drive us to a place where platforms are even more riddled with harmful material than in the past and where government leaders dominate the discourse. The scale of social platforms like Facebook and Twitter is far too large – we need to work toward a more diverse global ecosystem of social platforms, but this may necessitate the fall of the giants. I don’t see this happening before 2035.
  3. Although the pandemic has laid bare the inequities and inequalities derived from access to digital technologies, it is difficult to imagine our current global internet infrastructure (to say nothing of the U.S. context) morphing into something more equitable any time soon.”

David Barnhizer, a professor of law emeritus, human rights expert and founder/director of an environmental law clinic, said, “In the decades since the internet was commercialized in the mid-1990s it has turned into a dark instrumentality far beyond the ‘vast wasteland’ of the kind the FCC’s [Federal Communications Commission’s] Newton Minow accused the television industry of having become in the early 1960s. A large percentage of the output flooding social platforms is raw sewage, vitriol and lies. In 2017, in a public essay in which he outlined ‘Three Challenges for the Web,’ Tim Berners-Lee, designer of the World Wide Web, voiced his dismay at what his creation had become compared to what he and his colleagues sought to create. He warned that widespread collection of people’s personal data and the spread of misinformation and political manipulation online are a dangerous threat to the integrity of democratic societies. …

“He noted that the internet has become a key instrument in propaganda and mis- and disinformation has proliferated to the point that we don’t know how to unpack the truth of what we see online, even as we increasingly rely on internet sites for information and evidence as traditional print media withers on the vine. Berners-Lee said it is too easy for misinformation to spread on the web, particularly because there has been a huge consolidation in the way people find news and information online through gatekeepers like Facebook and Google, which select content to show us based on algorithms that seek to increase engagement and learn from the harvesting of personal data. He wrote: ‘The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking or designed to appeal to our biases can spread like wildfire.’ This allows people with bad intentions and armies of bots to game the system to spread misinformation for financial or political gain.

“The current internet business model, with its expanding power and sophistication of AI systems, has created somewhat of a cesspool. It has become weaponized as an instrumentality of political manipulation, innuendo, accusation, fraud and lies, as well as a vehicle for shaming and sanctioning anyone seen to be somehow offending a group’s sensitivities. When people are subjected to a diet of such content they may become angry, hostile and pick ‘sides.’ This leads to a fragmentation of society and encourages the development of aggressive and ultra-sensitive identity groups and collectives. These tend to be filled with people convinced they have been wronged and people who are in pursuit of power to advance their agendas by projecting the image of victimhood. The consequence is that society is fractured by deep and quite possibly unbridgeable divisions. This allows the enraged, perverted, violent, ignorant and fanatical elements of society to communicate, organize, coordinate and feel that they are not as reprehensible as they appear. There are hundreds of millions of people who, as Tim Berners-Lee suggests, lack any filters that allow an accurate evaluation of what they are receiving and sending online. Illegitimate online speech legitimizes, for some, hate, stupidity and malice, while rendering the absurdity and viciousness nurtured by the narrowness of these groups’ agendas and perceptions.”

A professor of sociology based in Italy predicted, “Unless we break down the workings of platform and surveillance capitalism, no positive outlook can be imagined.”

A futures strategist and lecturer noted, “There is no incentive structure that would lead to improvement in digital spaces except ones that regard the lubrication of commerce.”

An online security expert based in New York City observed, “The problem is that the financial incentives of the internet as it has evolved do not promote healthy online life, and by now there are many large entrenched corporate interests that have no incentive to support changes for the better. Major platforms deny their role in promoting hate speech and other incendiary content, while continuing to measure success based on ham-fisted measures of ‘engagement’ that promote a race to the bottom with content that appeals to users’ visceral emotions. Advertising networks are also harnessed for disinformation and incendiary speech as well as clickjacking. (One bright spot is the great work the Global Disinformation Index is doing to call out companies benefitting from this promotion of dangerous garbage.) The expanding popularity of cryptocurrencies, built on a tremendous amount of handwaving and popular unfamiliarity with the technologies involved, poses threats to the environment and economy alike.

“We have also failed to slow the roll of technologies that profile all of us based on data gathering; China’s large-scale building of surveillance tools for their nation-state offers few escapes for its citizens, and with the United States struggling to get its act together in many ways, it seems likely more and more countries around the world will decide that China’s model works for them. And then there’s the escalation of cyberwarfare, and the ongoing lack of Geneva Convention-like protections for everyday citizens. I do hold out hope that governments will at least sort out the latter in the next 5-10 years.”

Sonia Livingstone, a professor of social psychology and former head of the media and communications department at the London School of Economics and Political Science, wrote, “Governments struggle to regulate and manage the power of platforms and the data ecology in ways that serve the public interest while commerce continues to outwit governments and regulators in ways that undermine human rights and leave the public playing catch-up. Unless society can ensure that tech is ethical and subject to oversight, compliance and remedy, things will get worse. I retain my faith in the human spirit, so some things will improve, but they can’t win against the power of platforms.”

Rob Frieden, retired professor of telecommunications and law at Penn State University, responded, “While not fitting into the technology determinist, optimist or pessimist camps, I worry that the internet ecosystem on balance will generate more harms than benefits. There is too much fame, fortune, power, etc., to gain in overreach in lieu of prudence. The need to generate ever-growing revenues, enhance shareholder value and pad bonuses/stock options creates incentives for more data mining and pushing the envelope negatively on matters of privacy, data security, corporate responsibility. While I am quite leery of government regulation, the almost libertarian deference facilitates the overreach.”

Courtney C. Radsch, journalist, author and free-expression advocate, wrote, “Digital spaces and digital lives are shaped by and shape the social, economic and political forces in which they are embedded. Unfettered surveillance capitalism, coupled with the proliferation of public and private surveillance – whether through pervasive facial- and sentiment-recognition systems or so-called ‘smart’ cities – is creating a new logic that governs every aspect of our lives. Surveillance capitalism is a powerful forcing logic that compels other systems to adapt to it and become shaped by its logic. Furthermore, the datafication of every aspect of human experience and existence, coupled with the potential for behavioral modification and manipulation, makes it difficult to see how the world will come together to rein in these forces, since it would require significant political will and regulatory effort to unwind the trajectory we are on. There is not political will to do so. It’s hard to imagine what a different future alternative logic would look like and how that would be implemented, given that American lawmakers and tech firms are largely uninterested in meaningful regulation or serious privacy oversight.

“Furthermore, surveillance and the proliferation of facial- and sentiment-recognition systems, sophisticated spyware and tracking capabilities are being deployed by authoritarian and democratic countries alike. So, it’s hard to see how the future does not end up being one in which pervasive surveillance is the norm and everyone is watched and trackable at all times, whether you’re talking about China and its model in Xinjiang and its export of its approach to countries around the world through the Belt and Road initiative, or American and Five Eyes mass surveillance, or approaches like Clearview AI and so-called ‘smart cities.’ These pervasive surveillance-based approaches to improving life or safety and security are likely to expand and deepen rather than become less concerning over this time period.

“It’s hard to see how the future does not end up being one in which pervasive surveillance is the norm and everyone is watched and trackable at all times.”

Courtney C. Radsch, journalist, author and free-expression advocate

“Politics is now infused by the logic of surveillance capitalism and by microtargeting, individual targeting and behavioral manipulation, and this is only going to become more prevalent as an entire industry is already evolving to serve campaigns around the world. We’re going to see insurance completely redefined from collective risk to individualized, personalized risk, which could have all sorts of implications for cost and viability.

“Digital spaces are also going to expand to include the inside of our bodies. The wearable trend is going to become more sophisticated, and implantables that offer the option to better monitor health data are unlikely to have sufficient oversight or safety given how much further ahead the market is from the legal and regulatory frameworks that will be needed to govern these developments. Constant monitoring and tracking and surveillance will be ubiquitous, inescapable and susceptible to abuse. I don’t see how the world is going to move away from surveillance when every indication is that more and more parts of our lives will be surveilled whether it’s to bring us coupons and savings or whether it’s to keep us safe, or whether it’s to deliver us better services.”

Nicholas Proferes, assistant professor of information science at Arizona State University, said, “There is an inherent conflict between the way for-profit social media platforms set up users to think of the platforms as a ‘community’ and the way they must commodify information flows from user content to an extent vastly exceeding what normally exists in a ‘community.’ Targeted ads, deep analysis of user-generated content (such as identification of brands or goods in photos/videos uploaded by users), facial recognition – all pose threats to individuals. As more and more social media platforms become publicly traded companies (or plan to), the pressure to commodify will only intensify. Given the relatively weak regulation of social media companies in the past decade in the U.S., I am pessimistic.”

A teacher based in Oceania wrote, “It has gotten to the point that people are almost forced to own and maintain a smartphone in order to conduct their daily lives. I cannot conceive of any scenario where this trajectory will improve our lives in the areas of social cohesion – more likely digital spaces will continue to be marshalled in order to divide and rule. Many people are unaware of how they are being either manipulated or exploited or both. Some of them are not interested in key issues of the internet, its governance and so on. They are online as a matter of course and their lives are dependent on connectivity. They are not interested in how data is collected or whether everything they do with IT is either already being tracked or could be given to some entity that might want to use such data for their own ends.

“The most difficult issue to be surmounted is the increasing division between ‘camps’ of users. Social media has already been seen to enhance some users’ feelings of entitlement while others have been reported to feel unable to speak out in digital public due to the chilling effects of what some are policing. I believe this sort of fragmentation of society is not going to be improved, but only enhanced in the future – most obviously by those with digital ‘power’ (large companies such as Google, Facebook, Amazon, TikTok, etc.). It also seems as if nation-states are getting on board with widespread surveillance and law-making to prevent anyone from sticking their heads above the parapet and whistleblowing – we already have seen many imprisoned or being harassed for reporting online. Social fragmentation is also exemplified in areas such as online dating and the fact that many people don’t even know any more how to simply meet others in real life due to utter dependence on their mobile technology.”

An academic based in France commented, “Human nature in each of us seeks power, money and domination, which are such strong attractors that they are very difficult to give up. Buddhists describe this futility and the need to give up the desire for possessions that is responsible for the suffering of all people and of all species in the ecosystem that endure the hegemony of man on Earth. Powerful people find new ways to dominate the weakest on the internet.”

Data surveillance used against individuals’ best interests will remain an ongoing, unstoppable threat

A futurist and transformational business leader commented, “As long as digital spaces are controlled by for-profit companies they will continue to focus on clicks and visibility. What is popular is not necessarily good for our society. And increased use of algorithms will drive increased micro-segmentation that further isolates content that is not read by ‘people like me,’ however that is defined. The only way to combat this is to:

  1. Provide consumers with full control over how their data is used both at the macro and micro levels.
  2. Provide full transparency of the algorithms that are used to pre-select content, rate consumers for eligibility for services, etc., otherwise bias will creep in and discriminate against profiles that don’t drive high-value consumption patterns.
  3. Provide reasonably priced, paid social platforms that do not collect data.
  4. Provide clear visibility to users of all data collection, uses (including to whom the personal data is being routed), and the insights derived from such data.”

Andy Opel, professor of communications at Florida State University, responded, “Markets only work when citizens have a range of products to choose from, and currently the major media products most people interact with online – social media, dominant news and entertainment sites, search engines – track and market their every move, selling granular, detailed profiles of the public that they are not even allowed to access. Right now, there is a very active and dynamic struggle over transparency, access and personal data rights. The outcome of this struggle is what will shape the future of our digital lives. As the ubiquitous commercialization of our digital spaces continues, audiences have grown increasingly frustrated and resistant. This frustration is fueling a growing call for a political and regulatory response that defends individual rights and restores balance to a system that currently does not offer non-commercial, anonymous, transparent alternatives.”

David Barnhizer, professor of law emeritus and founder/director of an environmental law clinic, wrote, “Despots, dictators and tyrants understand that AI and the internet grant ordinary people the ability to communicate anonymously and surreptitiously with those who share their critical views – an ability that threatens these controllers’ power and must be suppressed. Simultaneously, they understand that, coupled with AI, the internet provides a powerful tool for monitoring, intimidating, brainwashing and controlling their people. China has proudly taken the lead in employing such strategies: the power to engage in automated surveillance, snooping, monitoring and propaganda can lead to intimidating, jailing, shaming or otherwise harming those who do not conform. This is transforming societies in heavy-handed and authoritarian ways. This includes the United States. China is leading the way in showing the world how to use AI technology to intimidate and control its population. China’s President Xi Jinping is applauding the rise of censorship and social control by other countries. Xi recently declared that he considers it essential for a political community’s coherence and survival that the government have complete control of the internet.

“A large critical consideration is the rising threat to democratic systems of government due to the abuse of the powers of AI by governments, corporations and identity group activists who are increasingly using AI to monitor, snoop, influence, invade fundamental privacies, intimidate and punish anyone seen as a threat or who simply violates their subjective ‘sensitivities.’ This is occurring to the point that the very ideal of democratic governance is threatened. Authoritarian and dictatorial systems such as China, Russia, Saudi Arabia, Turkey and others are being handed powers that consolidate and perpetuate their oppression. Recently leaked information indicates that as many as 40 governments of all kinds have gained access to the Pegasus spyware system that allows deep, comprehensive and detailed monitoring of the electronic records of anyone, and that numerous journalists have been targeted by individual nations.

“Reports indicate that the Biden administration has forged a close relationship with Big Tech companies related to obtaining citizens’ electronic data and to online censorship. An unfortunate truth is that those in power – such as intelligence agencies like the NSA, politicized bureaucrats, and those who can gain financially or otherwise – simply cannot resist using AI tools to serve their interests.

“The authoritarian masters of such political systems have eagerly seized on the surveillance and propaganda powers granted them by AI and the internet. Overly broad and highly subjective interpretations of what constitutes ‘hate’ and ‘offense’ are destructive grants of power to identity groups and tools of oppression in the hands of governments. They create a culture of suspicion, accusation, mistrust, resentment, intimidation, abuse of power and hostility. The proliferation of ‘hate speech’ laws and sanctions in the West – formal and informal, including the rise of ‘cancel culture’ – has created a poisonous psychological climate that is contributing to our growing social divisiveness and destroying any sense of overall community.”

A distinguished engineer at one of the world’s leading technology companies noted, “There are always bad players and, sadly, most digital spaces design security as an afterthought. Attackers are getting more and more sophisticated, and AI/ML [machine learning] is being overhyped and over-marketed as a solution to these problems. Security failures and hacks are happening all over the place. But of bigger concern to me is when AI/ML systems incorrectly single out individuals. They make not just mistakes but serious blunders, which are frequently overlooked entirely by the designers of the applications that use them. This is likely to have increasingly negative consequences for society in general and can be very damaging for innocent individuals who are incorrectly targeted. I foresee this turning into a legal mess moving forward.”

A professor of digital economy and culture commented, “We are creating huge commercial organizations with large repositories of data that are not politically accountable. These organizations possess quasi-extralegal powers through data that we need to regulate now.”

An enterprise software expert with one of the world’s leading technology companies said, “There are two disturbing trends occurring that have the potential to dramatically reduce the benefits of the internet. The first is a trend toward centralized services controlled by large corporations and/or governments. Functions and features that are attractive to many users are being controlled more and more by fewer and fewer distinct entities. Diversity is falling by the wayside. This centralization:

  • Limits choices for everyday users.
  • Concentrates large amounts of personal information under the control of these near-monopolies.
  • Creates a homogeneous environment, which tends to be more susceptible to compromise.

“The second trend is balkanization within the internet ecosystem. Countries like China and Russia are making or have made concerted efforts to build capabilities that will allow them to segment their national networks from the global internet. This trend is starting to be propagated to other countries as well. Such balkanization:

  • Reduces access to global information.
  • Creates a vector for controlling the information consumed by a country’s citizens.
  • Facilitates tracking of individuals within the country.”

An advocate for free expression and open access to the internet wrote, “While it is true that the internet and digital spaces are empowering people, governments around the world are equally threatened by the liberation the internet provides and tend to impose or adopt policies in order to control information. Increasingly, governments are weaponizing internet shutdowns, censorship, surveillance and the exploitation of data, among other tactics, to maintain control. In the next few years, these practices will negatively impact democracies and provide avenues for governments to violate fundamental human rights of the people with impunity. Other stakeholders, including internet service providers and technology companies, are also complicit when it comes to the deterioration we are seeing in digital spaces. The recent revelation of how NSO Group’s spyware tool Pegasus was used in mass human rights violations around the world through surveillance, as well as the involvement of Sandvine in facilitating the Belarus internet shutdowns last year, bears out some of these concerns.”

A professor based in Oceania said, “I see the increasing encroachment of states through amplification of narrow political messaging, control through regulation and the adoption of technical tools that are less transparent and visible. The justification of increased surveillance as keeping people safe – safe from others who might threaten local livelihoods, safe from viruses – will open up broader opportunities for state control of populations and their activities. Much as 9/11 changed the public’s comfort level with some degree of surveillance, this will be amplified even further by the current pandemic. Global uncertainty and migration as a result of climate change and threat will also accentuate inequity and opportunities to harness dissatisfaction. Nor does increasing conservatism inspire much confidence in a brighter future – conservatism driven by uncertainties such as COVID-19, climate change and digital disruption, and by changes in higher education toward an increased focus on job skilling rather than also developing critical thought and social empathy/citizenship understood in the broadest sense.”

A professor of architecture and urban planning at a major U.S. university wrote, “Attention is the coin of the realm. Alas, the kinds of attention that support trustful, undivided participation in civic and institutional contexts fall by the wayside. Perhaps the most important concern is the loss of ability to debate nuances of issues, to hold conflicting and incomplete positions equally in mind, or to see deeper than the callow claims of technological solutionism. Embodied cognition and the extended mind emphasize other, more fluent, more socially situated kinds of attention that one does not have to ‘pay.’ Per Aristotle – and still acted out in the daily news cycle – embodiment in the built spaces of the city remains the main basis for thoughtful political life. Disembodiment seems unwise enough, but when coupled with distraction engineering, it becomes quite terrifying. China shows how. In America, a competent tyrant would find most of the means in place. Factor in some shocks from climate, and America’s future has never seemed so dire. (On the other hand, to do the world some good right now, today, just give an East African a phone).”

A French professor of information science observed, “Technological tools and the digital space are primarily at the service of those who master the technologies, the specifications of these tools and even the ethical charters, through the lobbying these companies organize. … Hell is paved with good intentions. Digital ethical charters strongly influenced by digital companies do not make digital spaces ethical. At the beginning of the internet years (1980-1990), this digital technology was at the service of science and researchers and made for knowledge-sharing and education. Today, the internet is 95% at the service of marketing and customer profiling, and the dominant players recursively feed on profits and on the recurring influence of the influencers followed on the net (most of the time because those influencers project a superficial positive image). The internet has become a place of control and surveillance of all people. It has become a threat to democracy and to government institutions, which themselves become controlled and influenced by digital companies. … A genuine internet dedicated only to art, sciences and education, free of advertising, should be developed.”

Toby Shulruff, senior technology safety specialist at the National Network to End Domestic Violence, wrote, “Digital spaces are the product of the interplay between social and technical forces. From the social side, the harms we’re seeing in terms of harassment, hate and misinformation are driven by social dynamics and actors that predate digital spaces. However, those dynamics are accelerated and amplified by technology. While a doctrine of hate (whether racialized, gendered or along another line) might have had a smaller audience on the fringe in previous decades, social media in particular among digital spaces has been pouring fuel on the flames, attracting a wider audience and disseminating a much higher volume of content.

“On the technological side, the business models and design strategies for digital spaces have given preference to content that generates a reaction (whether positive or negative) at a rapid pace, which discourages thoughtful reflection, fact-checking and respectful discourse. Legal and regulatory frameworks have not kept pace with the rapid emergence of digital spaces and the platforms that host them, leaving policymakers without adequate assessment or useful options for governance. Digital spaces are accelerating existing, complex, deeply entrenched inequalities of access and power rather than shaping more pro-social, respectful, cooperative forms of social interaction.

“In sum, these trends lead me to a pessimistic outlook on the quality of digital spaces in 2035. I do think that a combination of shifts in social attitudes, wider acceptance of concepts of equality and human rights, dissemination of more cooperative and respectful ways of relating with each other in person, and a deliberate redesign of digital spaces to promote pro-social behavior and add friction that dissuades hateful and violent behavior holds a possibility for improving not only digital spaces but also human interaction IRL (in real life).”

A number of respondents did point out the positives of data applications. One was Brock Hinzmann, co-chair of the Millennium Project’s Silicon Valley group and a 40-year veteran of SRI International. He wrote, “Public access to online services and e-government analysis of citizen input will continue to evolve in positive ways to democratize social function and to increase a sense of well-being. The Internet of Things will obviously vastly increase the amount of highly detailed data available to all. Analytics (call it AI) will improve the person-system interface, helping individuals understand the veracity of the information they see and helping the system AI understand what people are experiencing. The formation of small businesses and other socially beneficial organizations will become easier and more sustainable than it is today. Nefarious users, criminals and social miscreants will continue to be a problem; this will require continuous upgrades in security software.”

Theresa Pardo, senior fellow at the Center for Technology in Government at University at Albany-SUNY, said, “There is an increasing appreciation of the need for sophisticated data management practices across all sectors. Leaders at all levels appear to have moved beyond the theoretical notion that data-informed decision-making can create public value; they are now actually seeking more and more opportunities to draw on analytics in decision-making. They are, as a consequence, becoming more aware of the pervasive issues with data and the need for sophisticated data governance and management capabilities in their organizations. As they seek to fully integrate programs and services across the boundaries of organizations at all levels and in all sectors – building, among other assets, data collaboratives – they are also recognizing the need for leadership in the management of data as a government asset.”

Haters, polarizers and jerks will gain more power

The human instinct toward self-interest and fear of “the other” or the unfamiliar has led people to commit damaging acts in every social space throughout human history. One difference now, though, is that digital networks enable instantaneous global reach at low cost while affording anonymity to spread any message. Many expert respondents noted that digital networks are being wielded as weapons of personal, political and commercial manipulation, innuendo, accusation, fraud and lies, and that they can easily be leveraged by authoritarian interests and the general public to spread toxic divisiveness.

Chris Labash, associate teaching professor of information systems management at Carnegie Mellon, responded, “My fear is that negative evolution of the digital sphere may be more rapid, more widespread and more insidious than its potential positive evolution. We have seen, from 2016 to the present especially, how digital spaces act as cover and as a breeding ground for some of the most negative elements of society, not just in the U.S. but worldwide. Whether the bad actors are from terror organizations or ‘simply’ from hate groups, these spaces have become digital roach holes that research suggests will only get larger, more numerous and more polarized and polarizing. That we will lose some of the worst and most extreme elements of society to these places is a given. Far more concerning is the number of less-thoughtful people who will become mesmerized and radicalized by these spaces and their denizens: people who, in a less digital world, might have had more willingness to consider alternate points of view. Balancing this won’t be easy; it’s not simply a matter of creating ‘good’ digital spaces where participants discuss edgy concepts, read poetry and share cat videos. It will take strategies, incentives and dialogue that is expansive and persuasive to attract those people and subtly educate them in approaches to separate real and accurate information from that which fuels mistrust, stupidity and hate.”

Adam Clayton Powell III, executive director of the Election Cybersecurity Initiative at the University of Southern California, commented, “While I wish this were not the case, it is becoming clear that digital spaces, even more than physical spaces, are becoming more negative. Consider as just one example the vulnerability of female journalists, many of whom are leaving the profession because of digital harassment and attacks. In Africa, where I have worked for years, this is a fact of life for anyone opposing authoritarian regimes.”

Danny Gillane, an information science professional, wrote, “People can now disagree instantaneously with anybody, with the bravery of being out of range and, in many cases, of anonymity. Digital life is permanent, so personal growth can be erased or ignored by an opponent’s digging up some past statement to counter any positive change. Existing laws that could be applied to large tech companies, such as antitrust laws, are not being applied to these companies or their CEOs. Penalties imposed in the hundreds of millions of dollars or euros are a drop in the bucket to the Googles of the world. Relying on Mark Zuckerberg to do the right thing is not a plan. Relying on any billionaire or wannabe billionaire to do the right thing to benefit the planet, as opposed to gaining power or wealth, is not a plan. It is a fantasy. I think things could change for the better, but they won’t. Elected officials, especially in the United States, could place doing what’s best for their constituencies and the world over power and reelection. Laws could be enforced to prevent the consolidation of power in the few largest companies. Laws could be passed to regulate these large companies. People could become nicer.”

William L. Schrader, board member and advisor to CEOs, previously co-founder of PSINet Inc., said, “Democracy is under attack, now and for the next decade, with the help and strong support of all digital spaces. The basic problem is ignorance (lack of education), racism (anti-fill-in-the-blank) and the predilection of some segments of society to listen to conspiracy theories A through Z and believe them (stupid? or just a choice they make due to bias or racism?). To quote the movie ‘The Hunt for Red October,’ ‘We will be lucky to live through this’ means more now than before the 2016 U.S. election. I think things could change for the better but not likely before 2035. The delay is due to the sheer momentum of the social injustice we have seen since humankind populated the earth, plus the economic- and life-extinguishing climate change that has pitted science against big money, rich against poor and, eventually, the low-land countries against the highlanders. Hell is coming, and it’s coming fast with climate change. Politics will have no effect on the climate but will on the money. The rich get richer, and the poor get meaner. Riots are coming, and not just at the U.S. Capitol. Meanwhile, the digital space will remain alive, secure in parts and mostly insecure. It will assist and not fully replace the traditional media.”

An Ivy League professor of science and technology studies responded, “Overall, voices of criticism and disenchantment are rising, and one can hope for a reckoning. The questions remain: ‘How soon, and what else will have become entrenched by then?’ Things don’t look good. There is near-monopolistic control by a few firms, there are complex and opaque privacy protections and then there is the addictive power of social media and an increasing reliance on digital work solutions by institutions that are eager to cut back on the cost and complications of having human employees. Things might get somewhat better. Even a single case can resonate, like Google vs. Spain, which had ripple effects that can be seen in the GDPR [General Data Protection Regulation in Europe] and California’s privacy law. But people’s understanding of what is changing – including impacts upon their own subjectivity and expectations of agency – is not highly developed. The buzz and hype surrounding Silicon Valley has tamped down dissent and critical inquiry to such an extent that it will take a big upheaval – bigger than the Jan. 6, 2021, insurrection – to fundamentally alter how people see the threats of digital space.”

A professor of information technology and public policy based at a major U.S. technological university predicted, “Similar to the likely outcome for humanity of the doleful predictions we are seeing regarding climate change, the deleterious influences on society that we have put in place through novel digital technologies could keep gaining momentum until they reach a point of irreversibility – a world of no privacy, of endemic misinformation and of precise, targeted, intentional manipulation of individual behavior that exploits and leverages our own worst instincts. My hope (it’s not an expectation) is that recognition of the negative effects of human behavior in digital spaces will lead to a collective impetus for change and, specifically, for regulatory interventions that would promote said change (in areas including privacy, misinformation, exploitation of vulnerable communities and so forth). It is entirely possible, in fact, that the opposite will happen.”

A futurist based in North America commented, “I anticipate plenty of change in digital life, but not so much in human beings. Almost all new and improved technologies can, and will, be used for bad as well as good ends. Criminality and the struggle for advantage are always with us. If we can recognize this and be willing to explore, understand and regulate digital life and its many manifestations, we should be okay.”

The director and co-founder of a nonprofit organization that seeks social solutions to grand challenges responded, “We seem woefully unconcerned about the fact that we are eating the seed corn of our civilization. I see no sign that this will change at the moment, though we’ve had civic revivals before and one may be brewing. Our democracy, civic culture and general ability to solve problems together are steadily, and not so slowly, being degraded in many ways, including through toxic and polarizing ‘digital spaces.’ This will make it difficult to address this issue itself, not to mention any other challenge.”

Humans can’t keep up with the speed and complexity of digital change

A share of these respondents make the case that one of the largest threats to a better future arises from the fact that digital systems are too large, too fast, too complex and constantly morphing. They say this accelerating change cannot be reined in and that new threats will continue to emerge as tech advances. They say the global network is too widespread and distributed to possibly be “policed” or even “regulated” effectively, and that humans and human organizations as they are structured today cannot address this appropriately.

Alexa Raad, chief purpose and policy officer at Human Security and host of the TechSequences podcast, commented, “Transformation and innovation in digital spaces and digital life have often outpaced the understanding and analysis of their intended or unintended impact and hence have far surpassed efforts to rein in their less-savory consequences.”

A professor of digital economics explained, “There is often a time lag between the appropriation of technologies and their ramifications for social life, public/private life, ethics and morality. Because of this lag between the point at which extensive usage is reached and the recognition of moral/social consequences, and because there is a human dimension in the appropriation of technologies along with its interplay with a capitalist agenda, we will often remedy social and ethical ills only after a period of time has elapsed. Our evaluations of technologies at a social and ethical level are not in sync with the arrival and uses of technologies as a platform for economic enterprise and the glorification of these by nation-states and neoliberal economies. The ascendency of data empires attests to this.”

The founding director of an institute for analytics predicted, “The changes that need to be made – which reasonable people would probably debate – won’t matter, because they won’t be made soon enough to stop the current trajectory. Technology is moving too fast, and it is uncontrollable in ways that will be increasingly destructive to society. Still, it is time for the internet idealists to leave the room and for a serious conversation about regulating digital spaces to begin, and fast; otherwise we may not make it to 2035. Digital spaces have to be moved from an advertising model to either a subscriber model or a utility model with metered distribution. Stricter privacy laws might kill the advertising model instantly.”

Rick Doner, emeritus professor, wrote, “My concern is that, as so often happens with innovation/technology, changes in the ‘marketplace’ – whether financial, commercial or informational – outpace the institutions that theoretically operate to direct and/or constrain the impact of such innovations. I view digital developments almost as a sort of resource curse. There are, to be sure, lots of differences, but we know that plentiful, lucrative natural resource endowments tend to be highly destructive of social stability and equity when they emerge in the absence of ‘governance’ institutions – and here I’m talking not just about formal rules of government but also institutions of representation and accountability. And we now have a vicious cycle in which digital innovations are undermining both the existing institutions (including informal trust) and the potential for stronger institutions down the road.”

Richard Barke, associate professor in the School of Public Policy at Georgia Tech, commented, “Laws and regulations might be tried, but these change much more slowly than digital technologies and business practices. Policies have always lagged technologies, but the speed of change is much greater now.”

Ian O’Byrne, an assistant professor of literacy education at the College of Charleston, responded, “One of the biggest challenges is that the systems and algorithms that control these digital spaces have largely become unintelligible. For the most part, the decisions that are made in our apps and platforms are only fully understood by a handful of individuals. As machine learning continues to advance and corporations rely on AI to make decisions, these processes will become even less understood by the developers in control, let alone the average user interacting in these spaces.”

Oscar Gandy, an emeritus scholar of the political economy of information at the University of Pennsylvania, said, “Much of my pessimism about the future of digital spaces is derived from developments I have observed lately and from the projections of critical observers who foresee further declines in these directions. There are signs of growing concern over the power of dominant firms within the communications industry, and there are suggestions about developing specialized regulatory agencies with the knowledge, resources and authority to limit the development and use of data and of analytically derived inferences about individuals and members of population segments or groups. But I have not got much faith in the long-term success of such efforts, especially in the wake of more widespread use of ever more sophisticated algorithmic technologies to bypass regulatory efforts.

“There is also a tendency for this communicative environment to become more and more specialized, or focused upon smaller and smaller topics and perspectives, a process that is being extended through algorithmically enabled segmentation and targeting of information based upon assessments of the interests and viewpoints of each of us.

“In addition, I have been struck by the nature of the developments within the sphere of manipulative communication efforts, such as those associated with so-called ‘dark psychology’ – presentational strategies based upon experimental assessments of different ways of presenting information to increase its persuasive impact.”

A North American entrepreneur wrote, “Technology is advancing at a rapid pace and will continue to outpace policy solutions. I am concerned that a combination of bad actors and diminishing trust in government and other institutions will lead to the continued proliferation of disinformation and other harms in digital spaces. I also am concerned that governments will ramp up efforts to weaponize digital spaces. The one change for the better is that the next generation of users and leaders may be better equipped to counter the negative trends and drive improvements from a user, technical and governance perspective.”

Barry Chudakov, founder and principal at Sertain Research, proposed that a “Bill of Integrities” might be helpful in adjusting to the speed of digital change. He observed, “There is one supremely important beneficial role for tech leaders and/or politicians and/or public audiences concerning the evolution of digital spaces. Namely, understanding the drastically different logic digital spaces represent compared to the traditional, alphabet- and text-centric logic that built our inherited physical spaces. Our central institutions of school, church, government and corporation emerged from rule-based, sequential alphabetic logic over hundreds of years; digital spaces follow different rules and dynamics.

“A central issue fuels, possibly even dwarfs, that consideration: We are in the age of accelerations. Events and technologies have surpassed – and will soon far surpass – political figures’ ability to understand them and make meaningful recommendations for improvement or regulation. In the past, governments had a general sense of a company’s products and services. Car manufacturers made cars with understandable parts and components. But today, leading technology companies are advancing by inventing and applying new, esoteric, little-understood (except by their creators and a handful of tech commentators) technologies whose far-reaching consequences are unknown, unanticipated or both. The COVID-19 pandemic has revealed colossal ignorance among some politicians regarding the basics of public health. What wisdom could these same people bring to cyber hacking? To algorithm-mediated surveillance? To supporting, enhancing and regulating the metaverse? At its most basic, governance requires a reasonable understanding of how a thing works. Who in government today truly understands quantum computing? Machine intelligence and learning? Distributed networks? Artificial intelligence?

“We now need a technology and future-focused aristos: a completely neutral, apolitical body akin to the Federal Reserve focused solely on the evolution of digital spaces. In lieu of an aristos, education will need to refocus to comprehend and teach new technologies and the mounting ramifications of these technologies – in addition to teaching young minds how perceptions and experiences change in evolving digital spaces.

“Digital spaces expand our notions of right and wrong; of acceptable and unworthy. Rights that we have fought for and cherished will not disappear; they will continue to be fundamental to freedom and democracy. But digital spaces and what Mary Aiken called the cyber effect create different, at times alternate, realities. Public audiences have a significant role to play by expanding our notion of human rights to include integrities. Integrity – the state of being whole and undivided – is a fundamental new imperative in emerging digital spaces which can easily conflate real and fake, fact and artifact. Identity and experience in these digital spaces will, I believe, require a Bill of Integrities which would include:

  • Integrity of Speech | An artifact has the right to free expression as long as what it says is factually true and is not a distortion of the truth.
  • Integrity of Identity | An artifact must be, without equivocation, who or what it says it is. If an artifact is a new entity it can identify accordingly, but pretense to an existing identity other than itself is a violation of identity sanctity.
  • Integrity of Transparency | An artifact must clearly present who it is and with whom, if anyone, it is associated.
  • Integrity of Privacy | Any artifact associated with a human must protect the privacy of the human with whom the artifact is associated and must gain the consent of the human if the artifact is shared.
  • Integrity of Life | An artifact which purports to extend the life of a deceased (human) individual after the death of that individual must faithfully and accurately use the words and thoughts of the deceased to maintain a digital presence for the deceased – without inventing or distorting the spirit or intent of the deceased.
  • Integrity of Exceptions | Exceptions to the above Integrities may be granted to those using satire or art as free expression, providing that art or satire is not degraded for political or deceptive use.”
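
As an editorial illustration only – nothing Chudakov or any platform has specified – integrities like these could in principle be encoded as machine-readable provenance metadata attached to a digital artifact. The minimal Python sketch below shows one hypothetical shape for such a record; every name in it is an assumption introduced here for illustration, not part of any real standard.

    # Hypothetical sketch: encoding a few of the proposed "integrities" as
    # provenance metadata for a digital artifact. All field and function
    # names are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class IntegrityRecord:
        claimed_identity: str                   # Integrity of Identity: who/what the artifact says it is
        is_synthetic: bool                      # True if the artifact is a new, non-human entity
        affiliations: List[str] = field(default_factory=list)  # Integrity of Transparency
        subject_consent: Optional[bool] = None  # Integrity of Privacy: consent of any associated human
        satire_or_art: bool = False             # Integrity of Exceptions

    def impersonates_existing_identity(record: IntegrityRecord, known_identities: set) -> bool:
        """Flag a synthetic artifact claiming an existing identity other than its own."""
        return record.is_synthetic and record.claimed_identity in known_identities

    def needs_consent_review(record: IntegrityRecord) -> bool:
        """Flag a human-associated artifact shared without documented consent."""
        return not record.is_synthetic and record.subject_consent is not True

Even this toy version makes the hard part visible: deciding what counts as an “existing identity” or documented consent is a social and legal question, not a programming one, which is consistent with Chudakov’s argument that governance must first understand the thing it governs.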

Reform cannot arise because nation-states are weaponizing digital tools

A share of the respondents who worried about the unmanageable speed of change said they are concerned about the weaponization of digital tools by nation-states. There is no incentive to improve digital spaces, according to these experts, when nations use them as part of their global and domestic policies. A digital arms race among nations will encourage the use of digital tools to mount physical and social attacks, they claim. Some respondents predicted that technological advances will always leave humans playing a game of catch-up.

David Barnhizer, professor of law emeritus and founder/director of an environmental law clinic, wrote, “We are in a new kind of arms race we naively thought was over with the collapse of the Soviet Union. We are experiencing quantum leaps in AI/robotics capabilities. Sounds great, right? The problem is that these leaps include vastly heightened surveillance systems, amazing military and weapons technologies, autonomous self-driving vehicles, massive job elimination, data management and deeply penetrating privacy invasions by governments, corporations, private groups and individuals. The Pentagon is investing $2 billion in the Defense Advanced Research Projects Agency (DARPA) ‘AI Next’ campaign, focusing on increased AI research and development. The U.S. military is committed to creating autonomous weapons and is in the early stages of developing weapons systems intended to be controlled directly by soldiers’ minds. Significant AI/robotics weaponry and cyber warfare capabilities are being developed and implemented by China and Russia, including autonomous tanks, planes, ships and submarines – tools that can also mount dangerous attacks on nation-states’ grids and systems.”

Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, said, “The tech giants can point to the more-ruthless competitors out there – Russia and China as a start – to stoke further fear of any sort of government intrusion as hobbling a global competition with such high stakes (i.e., superpower dominance).”

Carl Frey, director of the Future of Work project at Oxford University, responded, “While I am optimistic about the long run, I think it will take some time to reverse the political polarization we are currently seeing. In addition, I worry about the surveillance state that China is building and exporting.”

Sam Punnett, retired owner of FAD Research, commented, “It’s difficult to read a book such as Nicole Perlroth’s ‘This Is How They Tell Me the World Ends’ [a book about the cyberweapons market] and then think we are not doomed. It’s like trying to negotiate a mutually-assured-destruction model with several dozen nation-states holding weapons of mass destruction. I’d guess many Western legislators aren’t even aware of the scope of the problem. Any concerns about social media and consumer information are trivial compared to the threats that exist for intellectual property and intelligence theft and damage to infrastructure.”

Zak Rogoff, a research analyst at the Ranking Digital Rights project, said, “New problems will continue to appear at the margins with newer tech. Social media and driverless cars, for example, have been good for most people most of the time as they have emerged, but they eventually caused unforeseen systemic problems. I suspect we’ll see a continuing cycle in which, as more elements of life become at least partially controlled by machines, new problems arise and are later at least partially addressed. By 2035 there will probably be newly popular forms of always-on wearables that interface with our sensorium, or even brain-computer interfaces, and these will be the source of some of the most interesting problems.”

Dweep Chand Singh, professor and director/head of clinical psychology at Aibhas Amity University in India, predicted, “Communication via digital mode will advance, evolving to include non-physical means, i.e., brain-to-brain transmission/exchange of information. Biological chips will be prepared and inserted in people’s brains to facilitate non-physical communication. Artificial neurotransmitters will be developed in neuroscience labs as an alternative mode of brain-to-brain communication.”