One current public debate centers on whether it is enough to expect people to simply evolve to avoid unhealthy tech habits or whether the only effective solution is for the tech business to evolve different approaches. Nir Eyal argues in his new book “Indistractable” that people can apply the concepts behind tech addiction – motivation, trigger and ability – to break free of unhealthy tech habits. Venture capitalist Roger McNamee spoke for those who believe that isn’t enough when he said, “The best way would be for founders of these companies to change their business model away from advertising. We have to eliminate the economic incentive to create addiction in the first place.” Canadian journalist Eric Andrew-Gee summed up many concerns in an article titled “Your smartphone is making you stupid, antisocial and unhealthy. So why can’t you put it down?” writing, “Billions of people continue to be distracted and turned away from loved ones thanks to their smartphones. And untold billions of dollars wielded by some of the world’s biggest companies are devoted to keeping it that way.”
Respondents to this canvassing were asked what might be done to diminish any threats to individuals’ well-being that are now emerging due to people’s choices in creating digital systems and living digital lives. Whether they answered that digital life will be mostly helpful or mostly harmful, a majority of respondents said there are existing and foreseeable downsides that deserve attention. They discussed ways in which adjustments might be made to build a better future.
One particularly comprehensive answer came from Aram Sinnreich, an associate professor at American University’s School of Communication, who listed several ideas: “The most important thing we can do to mitigate the negative social effects of the internet is to draw on social scientific and communication research to understand the multifaceted roles it plays in public and private lives, and to use both state and market regulatory measures to address these different dimensions separately, while maintaining a holistic understanding of its transformative potential overall. In practice, this means measures including but not limited to: 1) Holding algorithms, and the companies responsible for them, accountable for their role in shifting and shaping social and political power dynamics. 2) Developing a ‘digital bill of rights’ that privileges human dignity over the profit motive. 3) Involving multiple stakeholders on a global scale in internet governance. 4) Integrating digital media literacy more deeply into our educational systems. 5) Regulating internet communications in [a] way that privileges diversity of participation at every level and requires accountability and transparency to consumers and citizens. 6) Investing heavily in post-fossil fuel energy sources.”
There are those who expect that interventions may have a bit of influence but not enough.
Eric Allman, research engineer at the University of California, Berkeley, commented, “I do think there exist actions that can (and will) be taken to mitigate problems, but I am not confident that those mitigations will be enough to solve the problems.”
Joseph Turow, professor of communication at the University of Pennsylvania’s Annenberg School of Communication, wrote, “Changes can be made to mitigate potential harms of digital life, but, depending on what those harms are, the responses will require a complex combination of public education, government activity and corporate agreement. Some of the harms – for example, those relating to issues of surveillance and privacy – unfortunately result from corporate and government activities in the political and business realms. Moreover, government and corporate actors often work together in these domains. Their vested interests will make it extremely difficult to address privacy and surveillance practices so that they match the public interest, but advocacy groups will keep trying and they may make some progress with increasing public awareness.”
In the next few sections we share respondents’ ideas about the potential interventions that might help bring a better future for people living digital lives. They are organized under these commonly occurring themes: reimagine systems; reinvent tech; regulate; redesign media literacy; recalibrate expectations; and fated to fail.
Reimagine systems: Societies can revise both tech arrangements and the structure of human institutions – including their composition, design, goals and processes
A large share of respondents said human systems tapping into human nature are to blame for many of the downsides of digital life. They argue that fixing those problems can make a difference for the better.
Alejandro Pisanty, a professor at Universidad Nacional Autónoma de México and longtime leading participant in the activities of the Internet Society, wrote, “An open, public, civil, rational discussion of principles guiding systems design and implementation will become critical. All stakeholders must be availed a chance to participate meaningfully, in a timely and relevant manner. The most important intervention is to help, nudge or even force people to THINK, think before we click, think before we propagate news, think before we act. Some regulatory actions inviting information disclosure by corporations and government may be helpful but will fall on fallow ground if people are not awake and aware. Second: transparency to a reasonable extent will continue to be necessary, so the basis of decisions made by systems can be understood by people, and people and organizations can in turn test the systems and adjust their responses.”
Giacomo Mazzone, head of institutional relations at the European Broadcasting Union, shared a number of specific targets for improving systems, writing, “1) New antitrust rules on a global scale need to be defined, and corporations that have reached far beyond their boundaries have to be broken up. The internet giants that immediately take over any innovation arriving into the market are becoming an obstacle to change and progress. 2) The open internet needs to be preserved at any price. If we have a separate internet for the rich and the poor, the reasons we have granted special status and exceptional treatment to the internet revolution are gone. 3) Disruptive social impacts need to be addressed quickly – as the disruption process is identified and not afterward. Educational processes need to be redesigned, taking into account the notion of digital citizenship and the need for lifelong learning processes. 4) A brand new ‘social contract’ should be defined and signed among ruling classes, the business community and citizens; the notions of salaries, jobs, pensions and social security need to be redesigned from scratch.”
Anita Salem, a human systems researcher based in North America, commented, “Potential risks can be mitigated by reframing the role of technology and reducing the power of corporations. Technology needs to focus on the whole system, minimize unintended consequences and support big lives rather than big corporations. In addition to marketability, technology should be valued by how well it strengthens human relationships, preserves our planet, bridges inequalities and provides a livable wage, gives voice to the marginalized, develops creativity, supports mental and physical health, and increases opportunities for leading a meaningful life. This, however, requires a cataclysmic shift in our economic system.”
Jillian C. York, director for international freedom of expression at the Electronic Frontier Foundation, said, “Interventions to mitigate the harms of digital life are possible, but they require a commitment to holistic solutions. We can’t simply rely on technology to mitigate the harms of technology; rather, we must look at our educational systems, our political and economic systems – therein lie the solutions.”
An anonymous retired consultant and writer said, “The digital environment enables platforms of near costless coordination – harnessing the benefits of which will require a ‘re-imagining’ of work and society. Thus, while every technology can be weaponized and incumbent rent-seekers will fight to remove protections and capture regulation for their own profiteering, the real power of the digital environment will require new forms of institutional innovation, new institutional frameworks and public infrastructures and more.”
Sy Taffel, senior lecturer in media studies at Massey University, wrote, “Moving away from the corporate model of platform capitalism towards commons and public alternatives that are driven by a desire to build a more equitable and fair society rather than profiteering from the commodification of communication and systematic dataveillance would be a good start at addressing the systemic issues that currently exist. There are a huge number of areas where legislative activity to curb the behaviour of tech corporations can help, and the European Union has recently taken a lead in doing this in numerous cases, ranging from prohibiting the use of toxic substances in digital devices to how personal data can be used. The social harm that results from tech corporations’ pervasive tax avoidance cannot be overstated either.”
David J. Krieger, director of the Institute for Communication & Leadership located in Lucerne, Switzerland, observed, “Generally society and its organizations should proactively move away from the established solutions to problems as they were defined in the industrial age and try innovative forms of networking, sharing and management of information.”
Darlene Erhardt, senior information analyst at the University of Rochester, commented, “We certainly can create awesome, cool tech toys but we also need to pay closer attention to the moral/ethical/societal implications, benefits and effects. If that’s not at the very core, the foundation, then the cool new stuff that gets created has a greater likelihood of being used for negative things.”
Jodi Dean, a professor of political science, said, “Internet giants (Google, Facebook, Apple, etc.) can be collectivized, turned into public utilities so that capitalist dynamics don’t guide the way they develop.”
An anonymous respondent said, “An increasing focus on the role of the Big-Five tech companies will shape how they behave in the years to come. With increased pressure, these companies will address their responsibility for the content on their platforms along with other critical issues such as privacy, access and the potentially addictive nature of product design.”
Mike Silber, general counsel at Liquid Telecom South Africa, wrote, “We need partnerships to deal with content issues. No one entity can accept responsibility; there needs to be a form of co-regulation between content creators, content users, platforms and governments to ensure that the freedom and openness allowed by digitalisation is preserved, while malicious actions can be mitigated. … We run the risk of perpetuating digital echo chambers where independent thought will gradually disappear.”
Some said that the teams of technologists who are creating the products of digital life lack the appropriate diversity, and that the people constructing the ways of knowing and accessing knowledge and human connection should represent all of humanity.
Brenda M. Michelson, an executive-level technology architect based in North America, commented, “We need to improve how we build and introduce digital products, services, information and overall pervasiveness. On building, we need to diversify the teams creating our digital future. 1) These future builders must reflect society in terms of race, gender, age, education, economic status and so on. 2) As digital is integrative – technology, data, arts, humanities, society, ethics, economics, science, communication – the teams must be composed of individuals from across professions and backgrounds, including artists, scientists, systems thinkers and social advocates. On introduction, we need – desperately – to build information literacy and critical-thinking skills across the population and improve curation tools without impinging on free speech.”
An anonymous futurist commented, “Awareness is changing and non-tech expertise is being integrated into the planning of technology being developed. There will still be unintended side effects, but with diverse perspectives from the start we have a better chance of minimizing – and even foreseeing – the potential ill effects and working toward better solutions.”
Digital life is built from code-based technologies that are protected as intellectual property and thus their structures are generally not made public. This is seen as a danger by some who say there should be algorithmic transparency and openness to how and why tech tools are built as they are.
An anonymous distinguished technologist at a major tech company in the U.S. wrote, “As AIs [artificial intelligence systems] become more common and important, we need to have visibility to how algorithms are making decisions and what happens to our data.”
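One concrete form such visibility could take is a machine-readable record emitted for every automated decision. The sketch below is a hypothetical illustration of that idea only, not an existing standard or any company’s practice; the field names, the class and the loan-scoring example are invented.

```python
# A minimal, hypothetical sketch of an auditable "decision record":
# one structured log entry per automated decision, so that outside
# parties can later ask what an algorithm decided, from which inputs
# and why. Field names are invented; no standard is implied.
import json
import sys
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str            # which model and version decided
    subject_id: str          # whose data was used (pseudonymous)
    inputs_used: list[str]   # data fields that fed the decision
    decision: str            # the outcome
    top_factors: list[str]   # main features driving the outcome
    timestamp: str           # when the decision was made (UTC)

def log_decision(record: DecisionRecord, sink) -> None:
    """Append one decision as a JSON line to an audit sink."""
    sink.write(json.dumps(asdict(record)) + "\n")

# Usage: a hypothetical loan-scoring service records what it did,
# on whose data, and which factors drove the outcome.
log_decision(DecisionRecord(
    model_id="credit-scorer-v4",
    subject_id="user-8c1e",
    inputs_used=["income", "payment_history"],
    decision="approved",
    top_factors=["payment_history"],
    timestamp=datetime.now(timezone.utc).isoformat(),
), sys.stdout)
```

Such records do nothing by themselves; the point is that regulators, auditors or users could demand and inspect them, which is the kind of accountability several respondents call for.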
Peter and Trudy Johnson-Lenz, principals of Pathfinding Smarter Futures, wrote, “Scientists need to find ways of listening to and valuing more diverse forms of public knowledge and social intelligence. Only by opening up innovation processes at an early stage can we ensure that science contributes to the common good. Debates about risk are important. But the public also wants answers to the more fundamental questions at stake in any new technology: Who owns it? Who benefits from it? To what purposes will it be directed? (See ‘See-through science: Why public engagement needs to move upstream’ by James Wilsdon and Rebecca Willis.) Those advocating redesign and different ways of using these technologies must be given a platform to share their thinking so new products and services can be developed, tested and adopted. Ultimately, we need to have more ‘see-through science,’ to involve the public upstream in the development process to make sure science and technology contributes to the common good.”
Some suggested that technology can be mindfully designed to benefit individuals’ well-being just as easily as it can be designed to be addictive.
Brad Templeton, software architect, civil rights advocate, entrepreneur, internet pioneer and chair emeritus for the Electronic Frontier Foundation, wrote, “The key action is to identify when things are not working well, do research, and then work to fix it in the design of the next generation of products. First generations will continue to tend to have unintended consequences. You can’t have innovation without that.”
Jerry Michalski, founder of the Relationship Economy eXpedition, said, “User-experience (UX) design dictates most of what we do. Place a big source of addictive content in the focus of attention and most people will slip into that trap. If our UX designers wise up, they can just as easily design wellness, mindfulness, self-control and other features into the devices we use. It’s possible, but the business models that fuel these companies make such steps unlikely.”
Micah Altman, director of research and head scientist for the program on information science at MIT, said, “Information technology is often disruptive and far faster than the evolution of markets, norms and law. This increases the uncertainty of predicting the effects of technological choices but doesn’t render such predictions useless, nor prevent us from observing these effects and reacting to them. … We know enough to effectively design substantial elements of privacy, security, individual control, explainability and auditability into technical systems if we choose to do so. How will specific technology choices affect individuals and society? We do not always know the answers to technology questions in advance. But we can choose to design into our systems now the ability for society and individuals to ask these questions and receive meaningful answers.”
Salvatore Iaconesi, an entrepreneur and business leader based in Europe, said, “Bring in arts and design to work not only on providing information and skills, but also to work on the dynamics of desire, imagination and emotion, which are the real behavior-changers.”
Some respondents aren’t so sure that progress in the ethical design and use of technology can overcome the influence of base human nature. Frank Kaufmann, a scholar, educator, innovator and activist based in North America, commented, “People are constantly improving, so technology naturally supports that. Unfortunately our race is blocked from true progress until people embrace the secret to dissolving and removing dominating self-interest. Tragically technology exacerbates that.”
The overarching sentiment among these respondents is that people have to take action, not simply step back and let an avalanche of technology overwhelm human reason.
Marc Rotenberg, director of a major digital civil rights organization, wrote, “The initial hurdle in all such challenges will be to overcome technological determinism. This is the modern-day religion of acquiescence that stifles reason, choice and freedom.”
An anonymous respondent commented, “We are ruled by a dysfunctional worldview that values profit over people; it skews what the internet does and what it can do. The internet has the power to be much more positive in people’s lives but that requires a different political framework.”
A sampling of additional comments about the “reimagine systems” theme from anonymous respondents:
- “A new model of education for our technologists and engineers should incorporate ethics and public policy. Better investigative journalism should be directed at tech.”
- “Companies can’t be allowed to just shrug their shoulders and say that people’s safety on the internet is not their concern.”
- “We need empowered technology ethicists. Profit should not be the only driver for technology-driven change.”
- “Providers should be able to better control security and safety for users.”
- “We need to provide strategies for disconnecting, which is as important as connecting.”
- “A substantive rethinking of design principles and the true potential of these technologies, beyond the limiting visions of Internet of Things and social media, is necessary.”
- “Companies like Facebook, Google and even Twitter need to recognize that with their power comes great social responsibility. This will be even more true as companies like Uber merge digital and physical worlds so that the risks people face are not just nasty messages but immediate physical danger.”
- “We can apply experience and knowledge to keep us grounded in the physical world and continue the advancement of technology. An essential component of this is how we maintain the inherent democratic nature of a non-hierarchical internet.”
- “Stopping gamification of everything is an obvious first step.”
- “The fact that there are possible interventions for good does not guarantee that they will be effected or that they will not be countered by forces against good.”
Reinvent tech: Things can change by reconfiguring hardware and software to improve their human-centered performance and by exploiting tools like artificial intelligence (AI), virtual reality (VR), augmented reality (AR) and mixed reality (MR)
A number of respondents said technology fixes and emerging tech tools can be called upon to mitigate many current challenges to individuals’ well-being.
Daniel Weitzner, principal research scientist and founding director of MIT’s Internet Policy Research Initiative, commented, “When interacting online, we need to know whether we are dealing with real people, and those people need to be held accountable (sometimes socially, sometimes legally) for the truth and integrity of their words and actions. As an alternative to censoring speech or controlling individual associations, we should look to increasing accountability while recognizing that sometimes anonymity is necessary, too. And, when platform providers (i.e., advertisers and others) operate platforms for profit, we should consider what mix of social and legal controls can provide the right measure of accountability.”
Dan Ryan, professor of arts, technology and the business of innovation at the University of Southern California, wrote, “I would like to see a low-transaction-cost method for tagging ownership of personal information that would allow individuals to up-license use of their data (including the ability to withdraw the license) and potentially collect royalties on it. A blockchain-like technology that leaned in the direction of low transaction cost by design rather than trying to be a currency might allow this to work. Alternatively, third-party clearing houses operating as consortia to police the good/bad behavior of information users (e.g., if you continue to use personal info after its license has been revoked, you will be denied access to further information) could make something like this possible. An extension of this to permanent transportable identity and credit ratings could make a big difference in parts of the world where those things are a challenge.”
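Ryan’s idea of revocable, low-transaction-cost data licenses lends itself to a small illustration. The sketch below is a hypothetical, in-memory version of the clearing-house variant he describes, not his design or any existing system; the class names and API are invented, and royalties, identity and any blockchain layer are omitted.

```python
# A minimal, hypothetical sketch of a revocable personal-data license
# registry -- the "clearing house" variant. Names and API are invented.
from dataclasses import dataclass
from datetime import datetime, timezone
import uuid

@dataclass
class DataLicense:
    owner: str                     # person the data describes
    licensee: str                  # party granted use of the data
    scope: str                     # e.g., "ad-targeting", "research"
    granted_at: datetime
    revoked_at: datetime | None = None

class LicenseRegistry:
    """Clearing house that licensees must consult before each use."""

    def __init__(self) -> None:
        self._licenses: dict[str, DataLicense] = {}

    def grant(self, owner: str, licensee: str, scope: str) -> str:
        token = uuid.uuid4().hex   # cheap, shareable license reference
        self._licenses[token] = DataLicense(
            owner, licensee, scope, datetime.now(timezone.utc))
        return token

    def revoke(self, token: str) -> None:
        self._licenses[token].revoked_at = datetime.now(timezone.utc)

    def may_use(self, token: str, scope: str) -> bool:
        lic = self._licenses.get(token)
        return (lic is not None
                and lic.revoked_at is None
                and lic.scope == scope)

# Usage: every use of the data passes through a revocation check.
registry = LicenseRegistry()
token = registry.grant("alice", "ad-network.example", "ad-targeting")
assert registry.may_use(token, "ad-targeting")
registry.revoke(token)                    # the owner withdraws consent
assert not registry.may_use(token, "ad-targeting")
```

The design point is simply that each use of the data requires a fresh check against the registry; that check is what gives revocation, and a consortium’s sanctions, any teeth.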
Bart Knijnenburg, assistant professor at Clemson University, said, “An important side effect of our digital life is that it is observable and amenable to research. This aspect is slowly but steadily revolutionizing the fields of psychology, sociology and anthropology. The available data is so vast that we can now study subtle phenomena and small sub-populations (e.g., underserved minorities) in increasing detail. If insights from the ‘digital humanities’ can be fed back into the development of online technologies, this can help mitigate the potential harms of digital life.”
Sam Lehman-Wilzig, retired chair at Bar-Ilan University’s school of communication and the department of political studies, wrote, “Social media will be forced by regulation, legislation and/or public pressure to limit some of the more deleterious elements within their platforms – this will involve artificial intelligence to aid in ‘surveying’ the constant, vast flow of communication, a small part of which is harmful and even illegal.”
An anonymous distinguished advocate for the World Wide Web and policy director based in Europe said, “Technologies such as artificial intelligence and blockchain have the possibility to greatly improve how we navigate through the world and how the world is structured. If these technologies are developed in a way that aims at increasing the greatest social good, then they have the potential to have an extremely positive impact on our economies, societies and politics. This would mean placing the individual at the center of concern and the problems that technologies are being developed to solve.”
Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, wrote, “As always, information and education are key. … Rather than building in limitations such as ‘maximum allowed screen time,’ digital tools should inform their users of good usage practices, allowing for considered choices.”
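Rehn’s “inform rather than limit” principle can be made concrete in a few lines. The sketch below is a hypothetical illustration, not any product’s implementation; the 90-minute default and the wording of the prompt are invented assumptions.

```python
# A minimal, hypothetical sketch of "inform, don't limit": track
# session time and surface information at a threshold instead of
# enforcing a hard cap. Threshold and wording are illustrative.
import time

class UsageNudger:
    def __init__(self, nudge_after_minutes: int = 90):
        self.nudge_after_seconds = nudge_after_minutes * 60
        self.session_start = time.monotonic()
        self.nudged = False

    def check(self) -> str | None:
        """Return an informational prompt once; never block the user."""
        elapsed = time.monotonic() - self.session_start
        if elapsed >= self.nudge_after_seconds and not self.nudged:
            self.nudged = True
            return (f"You have been online for {int(elapsed // 60)} "
                    "minutes. Many people find a short break helps. "
                    "Keep going, or step away?")
        return None   # no nudge due; the user is never locked out

# An app would poll check() periodically, e.g., once a minute.
nudger = UsageNudger(nudge_after_minutes=90)
print(nudger.check())   # None until 90 minutes have elapsed
```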
Morihiro Ogasahara, associate professor of sociology at Kansai University, said, “Because users of platforms (e.g., Google, Facebook) hope for these actions, platforms will have to respond to the huge demand. Of course, the definition of benefits/harms sometimes depends on people’s habits or cultural context, and these have been shifting; therefore, the actions will necessarily be temporary, symptomatic treatments.”
George Strawn, director of the U.S. National Academies of Science, Engineering and Medicine Board on Research Data and Information, said, “‘Interventions’ will be among the new tools and services that will continue the evolution of the internet.”
A sampling of additional comments about the “reinvent technology” theme from anonymous respondents:
- “As AI makes digital applications easier to learn, fix and adapt to us, it will greatly reduce the time spent learning how to use new applications.”
- “Future technologies (e.g., AI, semantic technologies) have the potential to assure greater information/data provenance.”
- “New technologies can mitigate harmful effects of digital technology. For example, dual authentication can enhance security. That said, good and evil will always be in a race.” (See the sketch after this list.)
- “A technology self-limiter needs to be pervasive, not app by app, or site by site, but rather something that’s embedded in our culture.”
- “The Web can generally move toward more human-centric designs that celebrate individuality rather than attempt to put people in pre-defined categories for ad targeting purposes. … Advertisers themselves can demand it, as it would reduce the propensity toward trolling and extremism that we see today.”
- “Moving away from incentive-based features that require constant check-ins is a good start.”
- “Security could be fundamentally improved, sparing everyone a ton of annoyance. But it won’t be, because that would require a fundamental change in the architecture of the internet.”
- “Our digital ‘diet’ will become more apparent with new guidelines for healthy patterns of use. New apps will become more analytic, alerting us to the health of our financial affairs, personal health and well-being and in so doing liberate more time for personal enrichment, exercise, time with family and friends.”
- “Tech is both our best and worst friend. Ways to make it our best friend: Make it stop if over-used. Initiate self-governing rules and self-learning AI rules to avoid things like bullying, etc. Deep-learning fact-checking to avoid fake news. Create social citizenship as part of any action relevance.”
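One bullet above mentions dual (two-factor) authentication, which is concrete enough for a brief illustration. The sketch below implements the time-based one-time password scheme (TOTP, RFC 6238) behind most authenticator apps, using its common defaults (SHA-1, six digits, 30-second steps); it is a teaching sketch only, and real deployments should rely on vetted libraries.

```python
# A minimal sketch of TOTP (RFC 6238), the usual "second factor":
# the server and the user's device share a secret, and a code derived
# from that secret and the current time must accompany the password.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at: float | None = None,
         digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Usage: the code changes every 30 seconds, so a password stolen in a
# breach is no longer sufficient on its own.
secret = base64.b32encode(b"supersecretkey12").decode()
print(totp(secret))   # six digits, e.g., "492039"
```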
Regulate: Governments and/or industries should create reforms through agreement on standards, guidelines, codes of conduct, and passage of laws and rules
A number of people said they do not expect change without some sort of industry, government and public interventions – requirements, professional codes, rules, laws or other guiding structure that works to elevate the public good and individuals’ well-being over profits without stifling helpful innovation.
Seth Finkelstein, consulting programmer at Finkelstein Consulting, observed, “It’s too common to have any harms excused as an inevitable consequence of technology, when it’s really a matter of policy. That is, a net benefit can be composed of many large positives and negatives. … ‘Digital life’ can mean easily connecting with someone sharing your particular problem. But it also means an easy connection for anyone who has a problem with *you*. The flip side of ‘supportive community forum’ is ‘social-media hate mob.’ Having a world of knowledge at your fingertips also means having the world’s distractions a click away. Doing business all over the globe brings being able to be scammed from foreign lands. Consulting with experts in another country means offshoring labor is practical. All of these effects, and more, do not take place in isolation, but are profoundly affected by governmental actions.”
Rob Frieden, a professor of telecommunications and law at The Pennsylvania State University, commented, “Leaving technology introduction and integration to an unregulated marketplace diminishes the benefits, because most stakeholders do not operate as charities. If governments conscientiously embrace their consumer-protection and public-interest advocacy roles – a big if – society can integrate new technologies accruing measurable benefits.”
Tom Wolzien, chairman at The Video Call Center LLC, was among those who proposed specific steps: “1) Provide plain and simple notice to the consumer of the [owner responsible] for each site, app, stream or other material reaching that consumer on that web/app page or event. 2) This is a legal editorial responsibility for the content presented (consistent with current libel, slander, defamation and rights laws covering legacy print and mass media). 3) Application of anti-trust law to vertical and horizontal integration across all media, including all online media.”
Narelle Clark, deputy CEO of the Australian Communications Consumer Action Network, said, “Increasingly regulators are finding ways to enforce previously accepted norms of requisite content quality – in areas such as unrealistic health claims on health apps, for example. Data-governance regimes are also becoming more widely accepted and enforced. While we will continue to see poor (and even appalling) examples of data mismanagement and misuse, new products and product-development approaches are starting to take privacy and good data management principles into account. With the regulators discovering better ways to enforce these matters we should start to see improvements in product quality, and, as a result, better outcomes for consumers of digital products. The booming industry of mental health apps illustrates the desperate need for broader availability of mental health care. Many of the current apps fail to contain appropriate attributions to their creators or to the evidence (if any) of their effectiveness, yet many make extraordinary claims. These apps also have the ability to prey upon vulnerable people through in-app purchases, inappropriate treatment and so forth. I welcome advances in apps that work, and in the efforts of health practitioners and regulators to act against the predatory ones. If we can promote the effective ones, these apps and related services have the potential to deliver real benefits to society.”
Justin Reich, assistant professor of comparative media studies at MIT and the executive director of the MIT Teaching Systems Lab, said, “As the largest communication platforms begin to function as monopolies, we may need to depend more on regulation than competition to curtail the most anti-consumer behaviors.”
Oscar Gandy, professor emeritus of communication at the University of Pennsylvania, wrote about requiring companies to take user well-being into account: “I have suggested that the market needs an aide to self-management in the area of news and information, where ‘balanced diets’ can be evaluated and improved by a trusted agent. In my view, Facebook is not a trusted agent, and its influence over our information diets is not healthy, in part because of its conflict over whose interests are supposed to be served. In the absence of the emergence of a successful information platform, regulatory oversight that includes assessments of individual and collective harms will have to evaluate the performance of market leaders and exact compensatory payments to support the development of such agents/services. I am hopeful that really smart people are raising questions and seeking policy responses to limit the harms that come from captured transaction-generated information. Time will tell, of course, whether the regulatory developments in the European Union will influence, let us say, counter-balance those in the U.S. and China.”
An anonymous respondent said, “More regulation of online companies is needed to provide transparency into the algorithms that shape the information that we are fed.”
Anne Collier, consultant and executive at The Net Safety Collaborative, said, “Regulators and governments need to show greater responsibility in three ways: 1) Grow their understanding of how digital media work, of algorithms, machine learning and other tools of ‘big data,’ including the pace of change and innovation. 2) Begin to acknowledge that, given the pace of innovation, regulation can’t continue to be once and for all, but rather needs a ‘use by’ date. 3) Develop more of a multi-stakeholder rather than a top-down, hierarchical model for regulation. In fact, we all need to think about how regulation needs to be multi-dimensional (including self- and peer-to-peer) and how all the stakeholders need to collaborate rather than work from an adversarial approach.”
Dozens of comments mentioned the net neutrality rules established by the U.S. Federal Communications Commission during the Obama administration that have since been slated for repeal by the FCC of the Trump administration. All who commented on net neutrality said such rules are necessary for a positive future. Ian Peter, an internet advocate and co-founder of the Association for Progressive Communications, commented, “There are regulatory measures that can assist with many other problems, such as fake news, algorithmic injustices, privacy breaches and market domination via breakdowns in Net neutrality or unregulated market dominance. All these things can be improved by regulatory measures; whether they will be is another matter.” Michael Everson, publisher at Evertype, commented, “The one intervention which is important is the guarantee of Net neutrality worldwide.”
Organizations are beginning to work together in efforts to effect positive change. New alliances are forming between non-governmental organizations and government entities to address challenges raised by rapidly advancing digital technologies.
Sonia Jorge, executive director of the Alliance for Affordable Internet and head of digital inclusion programs at the Web Foundation, said, “There are many actions that can be taken to mitigate potential harms of digital life/interactions, and many organizations are working towards ensuring that those are designed thoughtfully and implemented correctly, including the Alliance for Affordable Internet, the Web Foundation, the Internet Society, the Association for Progressive Communications, some corporations and governments (with a number of Scandinavian countries and the European Union being good examples). Such actions include, for example, comprehensive data protection laws (the EU General Data Protection Regulation being a good example), or corporate transparency and accountability standards to increase consumer trust. Some examples include: 1) A4AI has published suggested policy guidelines to make public WiFi work for users. 2) The Web Foundation has published a whitepaper series titled ‘Opportunities and risks in emerging technologies’ which addresses some of these issues and suggests some actions. Other areas of concern are around legal frameworks to ensure that internet-based violence against women is addressed by law enforcement and other agencies. Without such frameworks in place to increase privacy and protection, women will increasingly question the benefit to participate in digital life, as the costs of access may be far too high for many. This is unacceptable, therefore, leaders MUST develop policy solutions to address such situations.”
Like the technologies they may be created to rein in, legal actions can lead to some unintended negative consequences.
Shel Israel, CEO of the Transformation Group, said, “The issue becomes one of public policy and government regulation. My concern is the quality of such policies is dependent upon the quality of government, which at this moment in time is pretty discouraging.”
Daphne Keller, a lawyer who once worked on liability and free-speech issues for a major global technology company, pointed out some potential negatives of regulation, commenting, “If European Union law compels platforms to build online content filters, for example, that will: 1) Foreseeably lead to lots of erroneous suppression of lawful information. 2) Speed the day when filtering technologies are easily available to oppressive regimes around the world. 3) Entrench incumbent platforms at the expense of new market entrants.” She added, “Interventions to shape the law can mitigate harms to digital life. So can pressures on private companies and other powerful actors in the space.”
Several respondents said codes of ethics and professional guidelines should be written and reforms should be suggested by industry and health associations.
Alan Tabor, an internet advocate based in North America, said, “We need something like credit reports for digital advertising … so we can see what our profiles are on the various media and who is using them and why.”
Antoinette Pole, an associate professor at Montclair State University, commented, “[There should be a set of guidelines for] recommended usage by the American Medical Association for adults.”
Some suggested that finding a way to eliminate complete anonymity online might reduce many types of damage to well-being.
Bill Lehr, a research scientist and economist at MIT, wrote, “Anonymous commentary has done great damage, on balance, to the quality of public discourse. Things like cyberbullying and fake news would be less of a problem if those who offer opinions were more often held accountable for their thoughts. I am a fan of First Amendment protections and recognize the importance of anonymity in protecting privacy, but I think we will have to give up on some of this. This is just one example of something immediate that could be done to improve digital life.”
Some say regulation (and regulation in combination with other approaches) may come too slowly to match accelerating technological change. And some say regulators cannot be trusted to help society moderate connectivity to its benefit. An anonymous longtime leader with the Internet Society and the Internet Engineering Task Force commented, “While there are interventions that can be made, most of them are likely to be worse than the disease, particularly putting more power into the hands of demagogues, those with no interest in listening to others, etc.”
Garland McCoy, president of the Technology Education Institute, said, “As with everything, moderation is key; you want to avoid total immersion in what will clearly be an always-on environment linking your brain directly to the internet. So you will need to enable some ‘off’ switches – which may or may not be legal to obtain in the future. Obviously from the government and private-sector perspective they would like to keep you connected at all times to monitor your every thought and move or to sell you something you just thought about.”
A sampling of quotes tied to this theme from anonymous respondents:
- “As experimental technologies continue to break our ‘body barriers’ and become more biologically invasive, tech will need to be held up to rigorous standards and testing for health implications.”
- “Governments need to take seriously the risks of cyberwar by governments and terrorism by non-governmental agents. Invest. Research. Prosecute.”
- “Reinstitute something like the Fairness Doctrine. Or require labeling/standards for actual news.”
- “Legislation should apply a minimum journalistic standard to social media companies to force them to track and rein in the worst abuses, or social media as we know it has to collapse and be re-invented.”
- “Eliminate anonymity and the use of aliases on the internet. Make sure that everybody is as visible and known as in real life. Uphold libel laws and hate-speech laws in every country similar to those of France and Germany.”
- “An international online code of conduct with some enforcement or rating scale would be useful, but that can of worms is so big, it almost breaks my brain.”
- “Regulatory actions will be essential to continue to protect human rights online … this includes regulation of monopolies and of anti-competitive and anti-consumer behaviour.”
- “Society needs to adjust to technological changes; this will come with time and experience, and hopefully not through regulation or over-reaction.”
- “Like all market systems, the negative externalities require either social or regulatory action to prevent unaccounted costs to society.”
- “Government intervention should place countervailing pressure on platform monopolists.”
Redesign media literacy: Formally educate people of all ages about the impacts of digital life on well-being and the way tech systems function, as well as encourage appropriate, healthy uses
A large share of respondents said people have to take direct action to cope with the impact of technology. They noted, however, that many users need help and that doing this well is vital to individual and societal well-being. They say current education efforts are neither fostering the appropriate depth of knowledge about the systems behind digital life nor teaching the methods people need to mitigate its problems.
Jon Lebkowsky, CEO of Polycot Associates, said, “It’s a ‘training issue’ – our dependence on various technologies is way ahead of our comprehension. It’ll probably take a generation or two to catch up with accelerating change.”
Charles Ess, professor in the department of media and communication at the University of Oslo, said, “As a humanist and as an educator I think the central question is … us. That is, it seems very clear that as these technologies become more comprehensive and complex, they require ever greater conscious attention and reflection on our part in order to ascertain what uses and balances in fact best contribute to individual and social well-being and flourishing. In some ways, this is ancient wisdom – and specifically at the core of the Enlightenment: if we are to escape bondage, we must have the courage to critically think (and feel) and act out of our own (shared) agency. This is the virtue ethics approach taken up by Norbert Wiener at the beginning of computing and cybernetics. … Fairly simply put: The more these technologies both enhance my capabilities and threaten my freedom (e.g., the infinite surveillance possible through the Internet of Things), the more I am required to be aware of their advantages and threats, and to adjust my usage of them accordingly, whether in terms of close attention to, e.g., privacy settings on social media platforms, software and software enhancements (such as browsers and browser extensions, PGP apps, etc.), and/or simple decisions as to whether or not some technological conveniences may simply not be worth the cost in terms of loss of privacy or ‘deskilling’, as in the case of offloading care to carebots. But as these examples suggest, such awareness and attention also require enormous resources of time, attention and some level of technical expertise. How to help ‘the many’ acquire these levels of awareness, insight, technical expertise? The Enlightenment answer is, of course, education. A version of this might be ‘media literacy’ – but what is needed is something far more robust than ‘how to use a spreadsheet’ (as important and useful as spreadsheets are). Rather, such a robust media literacy would include explicit attention to the ethical, social, and political dimensions that interweave through all of this – and highlight how such critical attention and conscious responsibility for our technological usages and choices is not just about being more savvy consumers, but, ultimately, engaged citizens in democratic polities and, most grandiosely, human beings pursuing good lives of flourishing in informed and conscious ways. All of that is obviously a lot to demand – both of educational systems and of human beings in general.”
Annette Markham, professor of information studies and digital design at Aarhus University in Denmark, said, “We can help mitigate some of this stress and anxiety by engaging people to be more conscious of what’s happening as well as – and this latter part is critical – more deliberate in establishing and maintaining better habits of digital media consumption. This means more work to develop effective media literacy (media, digital and data literacy), through strategic educational efforts or more informal consciousness raising, using feminist models of the women’s liberation movements in the ’60s and ’70s. I’ve been wanting to figure out a way to have an international holiday called ‘memory day,’ where we spend time sorting through our own personal ‘big data’ to see what we’ve collected and generated throughout the year, to clean up our files and throw away junk, but to also more carefully curate what matters to us. This sort of regular reflection helps people recognize how much they click, store, and share, which can in turn help people reflect on what those activities mean to them. Sorting through one’s data to commemorate what matters is something that social media platforms like Facebook are happy to do, but are they the best curators for our memories? Tracing, remembering, and commemorating can help us slow down, be more deliberative about our digital lives, and be more reflexive about the impact of the internet overall.”
Justin Reich, assistant professor of comparative media studies at MIT and the executive director of the MIT Teaching Systems Lab, wrote, “Just as earlier generations of media-literacy practices explained to students how advertising strategies work, we’ll need similar education to folks about how consumer technologies are designed to capture and maintain attention, to surveil consumers and other network actors to harvest vast amounts of data, and … to organize that data for targeted advertising.”
Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University’s Software Engineering Institute, commented, “Here are some education interventions that already show promise: digital literacy; critical thinking in the digital age; trust in a digital world. Society needs to demand a digital world that is more secure, private, resilient and accountable.”
Lisa Nielsen, director of digital learning at the New York City Department of Education, said, “People are becoming more and more aware of how to successfully manage their digital lives. In particular this is also being addressed more frequently in schools with curriculum from Common Sense Education, EVERFI’s Ignition, and Google’s Be Internet Awesome. Additionally, the International Society for Tech in Education has standards aligned to this goal and supports students in becoming ‘empowered digital learners.’ There is also a parenting component that accompanies many of these programs. There is more awareness and mindfulness of what it takes to have a successful digital life. … There are plenty of programs now to address the potential harms of digital life. These are being implemented in schools with programs that address cyberbullying and mindfulness. They are also being addressed more and more in the mental health world. People are learning techniques for being upstanders when they see others not treating someone right. Online spaces are getting much better at setting ground rules.”
Frank Feather, a business futurist and strategist with a focus on digital transformation, commented, “Digital technology itself helps us to be more educated about its safe and productive use and application.”
A sampling of additional comments about “redesigning media literacy” from anonymous respondents:
- “We need better education and people (mentally) healthy enough to withstand the seductions of immediate gratification.”
- “We all need to be taught to be better consumers.”
- “Digital literacies should be taught as a part of children’s educational development, with a passing grade required.”
- “A comprehensive understanding of how it all ‘works’ is essential. VR/MR/AR can be adapted as both teaching and wellness tools.”
- “12-step programs and services to help people cut the cord, so to speak, may help.”
- “Employers should institute electronic communication vacations for the health of their employees.”
- “Early education regarding the effects of physical inactivity is required. A reward system that encourages more activity even while using the internet would be great.”
Recalibrate expectations: Human-technology coevolution comes at a price; digital life in the 2000s is no different. People must gradually evolve and adjust to these changes
While all respondents agreed there are some concerns and most suggested that attention must be paid and solutions pursued when it comes to individuals’ well-being and the future of digital life, many have confidence that humans can and should also take the initiative to evolve and adapt.
Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures, said, “One of my abiding beliefs is that we are better off when we take an active and intentional approach to living digitally. Rather than being just a passive ‘consumer’ of digital streams, I feel people are better off through activity. To comment, argue, share and curate. Then, instead of being buffeted by the storms raging online, you can use the blowing winds to fill your sails and set a course.”
Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, commented, “We need to help people think more critically about what they encounter in information space (film, radio, TV, newspaper, magazines, online sources, personal interactions …). This needs to be a normal response to information: Where did it come from? Who is providing it? Is there a motivation for the particular position taken? Is there corroborating evidence? We can’t automatically filter or qualify all the data coming our way, but we can use our wetware (brains) to do part of that job.”
Stuart Elliott, a visiting scholar at the National Academies of Sciences, Engineering and Medicine, said, “As with any powerful new technology, the internet brings important new benefits but also various risks and side effects. As a society, we’re still in the process of understanding and reacting to the risks and negative side effects. We would expect this to take time – on the order of a decade or more. As we understand the risks and negative side effects, we’ll develop ways of addressing them, ranging from individual behaviors to group norms to government regulations. In general, it’s reasonable to expect these various reactions will allow the technology to have a net positive effect.”
Yasmin Ibrahim, an associate professor of international business and communications at Queen Mary University of London, said, “The problem is that as digital technologies become seamlessly part of our everyday engagement and mode of living we may not question actions or decisions we make online. Making the internet a healthy space means analysing our modes of being and everyday engagements in the digital realm, and this itself can be stressful. But keeping the internet a space of ideals requires us to do precisely that; to question every action and think about the internet architecture and how our activities are connected to a wider digital ecology of producing and consuming.”
Mark Patenaude, vice president and general manager of cloud technologies at ePRINTit, said, “Digital transference over the last decade had little guidance or mentors to help modulate the overabundance of useless, immoral and fake information. Laws, governments and society in general are starting to understand the past effects of social media and mass media marketing techniques. Society will advance to a stage that new technologies will provide us with significant advances in security, privacy and content that becomes believable. … The perceived dangers of advancing digitization are very real and people should be wary and cautious. Being afraid and skeptical will push our technologists to come up with ways that protect what we need protecting.”
Hal Varian, chief economist at Google, commented, “Every new technology goes through a phase of euphoria, followed by a phase of retrenchment. Automobiles were a fantastic replacement for horses, but as their numbers increased it became clear that they had their own health and cleanliness issues. The same is true of the internet. A few years ago, freedom of the press went to those who owned one. Now everybody has a platform, no matter how crazy they are. But we will learn to live with this by developing better technology, better media and better critical awareness.”
Dana Klisanin, futurist and psychologist at Evolutionary Guidance Media R&D Inc., wrote, “We are now entering a phase when a larger number of people are beginning to take seriously the various impacts of digital technologies for good and ill. This ‘being conscious’ is the first step to taking control over our digital lives. The coming decade will see the advent of more ‘digital detoxing’ and ‘mindful unplugging’ but people will also be learning how to use digital technologies to benefit their lives. By the end of the next decade we will see a more balanced approach in our digital lives – that, all on its own will be an improvement.”
Pamela Rutledge, director of the Media Psychology Research Center, said, “With every new technology, we have to learn the new rules of engagement. This only comes from understanding what the technology can and can’t do and how that impacts our goals, behaviors and choices. To benefit from cars, we had to learn to drive, establish rules for the road and understand the benefits and dangers of such technology-enabled power. Today’s technologies are no different. There are inherent and undeniable benefits, such as increased productivity, wider access to information, healthcare and education, greater and more resilient social connections independent of time and distance, the inability to hide bad behavior for those who abuse power, and the psychological sense of empowerment that derives from increased agency. This does not mean that there aren’t challenges to be managed, like equal access, privacy, misinformation and new avenues for criminal behaviors. Technology isn’t going anywhere and it is without agenda. The choice of what and how to use technology is our own. As with cars, we need to learn to be good drivers. We need to develop new social literacies and behavioral rules that are adaptive to a digital world. However, these are recurring problems with every type of social change. Well-being is a psychological state that comes from feeling like you have the ability to take action, have impact, that you are capable of navigating your environment to meet your basic needs, and that you have meaningful social connection. Technology enhances all of these.”
Laura M. Haas, dean of the College of Information and Computer Sciences at the University of Massachusetts, Amherst, wrote, “People will adapt, learning to avoid negative use of technology. I see, for example, many younger people choosing to shut off their phones in social settings, or dramatically reducing their use of Facebook, etc. While not everyone will change, today’s issues will be addressed in a variety of ways. I am also a realist, though: I believe as technology advances, new harms will develop. Any tool can be used for good or for ill, and today’s technology is so complex that we cannot anticipate all uses or side effects. … I expect the positives and negatives in 10 years may be quite different than they are today.”
Gina Neff, an associate professor and senior research fellow at the Oxford Internet Institute, said, “Technology did not create the vast economic inequality that is shredding the social fabric of American life, but it can amplify it. If we don’t address inequality then the potential harms of digital life will only worsen.”
Claudia L’Amoreaux, a digital consultant, commented, “We’ve passed through the naive phase of internet optimism and utopian thinking. Issues are on the table. That’s a good thing. I am encouraged by the work of people like Tristan Harris, Eli Pariser, Ethan Zuckerman, Sherry Turkle, Yalda Uhls, [and] Zeynep Tufekci to identify and present solutions to the potential harms of digital life facing us – harms to children and families, and harms to civil society and democracy. I do think more individuals are becoming aware of the challenges of 24/7 digital life. More people are calling for transparency – in particular, with algorithms. Some solid investigative reporting is happening (e.g., ProPublica’s recent piece on discriminatory housing ads on Facebook). The fake-news crisis has sounded an alarm in education that young people today need critical digital literacy, not just digital literacy. And the post-election hearings in Washington with leaders of the digital industry have exposed deep problems in the way business has been conducted.”
Jim Hendler, an artificial intelligence researcher and professor at Rensselaer Polytechnic Institute, wrote, “Much discussion is starting around the ethical issues in new technologies, especially artificial intelligence, and around ‘algorithm accountability.’ I believe that as more algorithms gain some measure of transparency and people’s awareness grows, there will be a growing recognition that new technologies depend on the people who deploy them and on the public response, not just on the technologies themselves.”
Daniel Berleant, author of “The Human Race to the Future,” commented, “When human groups encounter new environments they must adapt. … The process of adaptation will bring problems, including maladjustments and other challenges that people must learn to overcome. Some people will be harmed, but few will return to their old environment. As societies learn to exist in this new environment, humans will become better able to live in it. We will learn to cope with the new aspects while using the new opportunities it presents to enjoy improved quality of life. Thus there will be pluses and minuses, but over time the minuses will diminish while the pluses will increase.”
Michael Rogers, a futurist based in North America, said, “We will certainly develop new ways to adapt to the digital environment. The key question: What is the balance of the real and the virtual that will keep us healthy in every sense? Example: I know one large company that now has a ‘remedial social skills course’ for certain new hires. Growing up with asynchronous communication methods like IM and texting means that some adolescents don’t have as much practice with real-time face-to-face communication as their parents did. Thus, for some, tips on how to start a conversation, how to know a conversation is over, and a bit of practice are helpful. It’s not the fault of the technology; it’s rather that we didn’t realize this might now be a skill that needs to be taught and encouraged. I think we’ll ultimately develop and teach other ways to overcome negative personal and social impacts. The challenge for older people in this process will be to ask ourselves whether, in these interventions, we are protecting important human skills and values or simply being old fogies.”
Valerie Bock, principal consultant at VCB Consulting, wrote, “I see social norms developing to help us use technology in a way that serves our human connections rather than detracting from them. … Just as families of a generation ago learned to employ the home answering machine to preserve the dinner hour, families of today are creating digital-free zones of time and place to manage their strong attraction to digital devices and social media and to build their connections to one another. This is not to say that there are not real threats to well-being posed by the erosion of privacy, which is a central feature of current digital developments. The total-surveillance society described in Orwell’s ‘1984’ has been packaged by corporate digital interests as a consumer convenience and is being welcomed into our homes rather than imposed on them by a hostile and oppressive government. The more-pinpoint targeting of consumer desires enabled by these technologies threatens to overwhelm the defenses against over-consumption that we developed in the TV age.”
Marshall Kirkpatrick, product director at Influencer Marketing, said, “We can all help create a culture that celebrates thoughtfulness, appreciation of self and others, and use of networked technologies for the benefit of ourselves and the network. We can create a culture that points away from the exploitative, mercenary cynicism of ‘Hooked’ growth-hacking.”
An anonymous respondent wrote, “The adult work environment should be refocused to reduce the speed at which life is expected to travel. When everyone is meant to be ‘on’ and in frantic motion 24 hours a day, there is little time to rest, recover and/or allow valuable free-form thought and brainstorming. Stress has myriad negative effects on human health, and when stress lives in your pocket with the expectation that you will respond to it 24 hours a day and within minutes, health and well-being will not benefit.”
Nathaniel Borenstein, chief scientist at Mimecast, said, “Most obviously, rigorously enforced Net neutrality would prevent many of the worst outcomes. More positively, I think we can develop spiritual and philosophical disciplines that will help people get the most out of these technologies, and will help people develop in ways that minimize the chances that they become cyberbullies or other cybermisfits.”
Matthew Tsilimigras, a research scientist at the University of North Carolina, Charlotte, said, “There is a huge personal and career-related cost to you if you are unable or unwilling to participate in digital life. … Workplace protections need to be enforced so that employers do not feel they have 24-hour access to employees – an expectation many use as a crutch for their own poor management skills. It is also the responsibility of online forums themselves to moderate content produced and exchanged on their platforms so as to police bullying and other threatening behavior.”
A sampling of additional comments related to “recalibrating expectations” from anonymous respondents:
- “A deeper understanding through additional research and scholarship of the socio-cultural and psychological effects of digital technology will inform our use of these technologies in the years to come.”
- “Put the phone down.”
- “You could unplug, but at a cost.”
- “I hope places that jam cell phones become popular, that unplugging gets to be a draw due to popular pressure. Not counting on it!”
- “We need to propagate the idea that disconnecting, being more aware of one’s uses and balancing activities is of social value.”
- “The solution is not more technology, but the responsibility of the individual to navigate and decipher information and use it as a powerful tool to benefit themselves.”
- “Social norms will push back trash talk, fake news and other click-bait into their own ghettos.”
- “There will be a resurgence of people rejecting the overwhelming pervasiveness of digital in our day-to-day lives.”
- “There are things that can be done but it won’t be easy and it will require deliberate effort. I don’t think our society will take the tough route. The lull of the easy road will lead them to harm.”
Fated to fail: A share of respondents say interventions may help somewhat, but – mostly due to human nature – it is unlikely that these responses will be effective enough
When asked the yes-or-no question “Are there any possible interventions that can help overcome the negatives of digital life’s impacts on well-being?” a small share of respondents said “no.” Some expressed a lack of faith in the capacity of humans and human systems to effect the changes or fixes that might make individuals’ well-being paramount. Another fear expressed by those who answered “no” to this question is that attempts to effect improvements may create unintended negative effects or be appropriated to further certain agendas that are not in the public’s best interest.
We all have free will, and if someone wants to do something we cannot stop them, not digitally.
Alice Tong
Cliff Zukin, a professor and survey researcher at Rutgers University, commented, “Simply put, I believe the technology governs. It is a variant of McLuhan’s ‘the medium is the message.’ It continues Neil Postman’s argument in ‘Amusing Ourselves to Death.’ People send pictures and go on Facebook because they can, not because there is any real content involved. Over time, that becomes the communication, and a new normal evolves.”
Mark Richmond, an internet pioneer and systems engineer for the U.S. government, wrote, “I’m concerned that the more people try to fix things, the more problems are caused. Regulation, deregulation, censorship, openness, filtering, verifying – no matter what you call it. With the best of intentions, people have proposed requiring real identification for online posters, for example. The downside is the risk of repression, censorship, discrimination and marginalization. To make it worse, overcoming such a requirement is a trivial matter for anyone determined. It just makes it harder on the honest. Protections against the misuse of the technology must continue to be developed. Financial transactions, privacy concerns, all of those of course Revival (sic). But that’s a transactional change, not a foundational change. The foundation of the internet really must remain one of providing a billion soapboxes for a billion points of view.”
Heywood Sloane, partner and co-founder of HealthStyles.net, said, “The risk of unintended consequences is higher than we can possibly understand or appreciate. Learning to use the best of it and avoid the worst of it – with experience over time – is quite possible.”
Some replied that people-plus-technology is a threat that can’t be completely conquered. Colin Tredoux, a professor of psychology at the University of Cape Town, commented, “Digital technology is just about uncontrollable. There are myriad examples. The internet was designed to be robust to local disruption (or control), and the many, many examples of hacked banking, government, health, [and] education sites show it is not possible to provide meaningful control except at the cost of draconian measures, as in Iran or China, and even those will likely fail. Some military protocols now require computers to be offline. We will have to live with the bad while enjoying the good. It is not clear we can do anything meaningful to ensure that the good outweighs the bad.”
Thad Hall, research scientist and co-author of the forthcoming book “Politics for a Connected American Public,” commented, “My concern is that the battle over digital life is a competition in which one side is using addiction-psychology models to get people addicted to their devices and the apps on them, while people’s ability to resist these temptations is questionable. In addition, the ability of people to use the technology for nefarious purposes – creating fake information, especially high-level information like video and audio – and to use the internet to spread this information is going to create ongoing problems that will be very difficult to address.”
There were those who said most individuals will not make the adjustments necessary in their personal lives to rein in the habits that are causing them to suffer from nomophobia (the fear of being without one’s phone), fear of missing out (FOMO), eyestrain, sleeplessness, isolation, a deepening lack of social skills, Instagram-inspired envy, stress, anxiety and other effects.
Tom Massingham, a business owner based in North America, wrote, “I just can’t think of a possible intervention. It seems like a creature, growing and out of control.”
Alice Tong, a writer based in North America, said, “We all have free will, and if someone wants to do something we cannot stop them, not digitally. What will be important is to promote the idea of non-digital life to people starting at a young age. Make it known that living a non-digital lifestyle, too, is a must for balance.”
An anonymous professor at a major university in Australia said, “I do not think we have the capacity to act as we need to. Ultimately this is not about what harm technology might represent to us but it is about what our capacity is for self-harm.”
And some took issue with the idea of “intervention.” Chris Morrow, a network security engineer, said, “I don’t think that trying to ‘intervene’ is the right view. People need to realize that balance in their lives is important. Access and information at a wide scale enables people to see, hear, [and] change many things, but at the end of the day they still need to interact with actual people and perform basic tasks in their lives. Trying to force this behavior will not work in the long term; people must realize that they need to balance their use of anything (digital access, food, exercise, etc.).”
An anonymous professor based in North America said, “The techno-libertarian philosophy is the lens through which people make sense of issues, so that collective goods like a balanced democracy or a vibrant community simply don’t make sense. When coupled with a political system in which tribal political loyalties and campaign contributions erode even policies that have vast political approval (like Net neutrality), there aren’t many effective institutions that can counterbalance problems created by policies that generate profits. Google would like to believe it does no evil, but when tens of billions of dollars of revenue are at stake, the social and political problems resulting from reinforcing polarizing social divisions will be ignored by the company, government and media.”
An anonymous information science professional wrote, “We are, in the United States, a people who believe in our free will to live as we choose. There would be incredible resistance to any large-scale attempt to help people moderate their use of technology. Technology is so linked to commerce that suggesting people use it less would be decried as harmful to the economy. We are in a cycle where the ends justify the means that justify the end. We want what we want, and, from most appearances, personal risk or harm is not an acceptable reason to limit our access to what we want. Those who make money from our behavior are certainly not going to help us change it.”
A sampling of comments supporting the “fated to fail” theme from anonymous respondents:
- “The ship has left the harbor. Digital providers have too much power and control information. Technologists also naturally push capabilities without worries about negative impacts.”
- “The corporations who stand to make money off these devices and services will not be working to lose eyeballs in the name of what may be better for us.”
- “Perhaps the demise of Net neutrality and the onset of associated volume-based pricing for use will provide a positive unintended consequence.”
- “All you could do is make access more difficult, slower or unpleasant.”
- “There is no political motivation to make changes that would help the majority of people. The recent decision against Net neutrality is just one example. Short-term profit and stockholders’ interests are driving policy-making, innovation and regulation.”
- “There is a huge push from the economic side to use ever-more-digital tools in your life, and the means of regulators are really limited because of the global nature of such companies and activities. That is the biggest threat because needed regulation is extremely hard to enforce.”
- “The responsibility for using a digital service in the right manner, with the right intent and in a reasonable way lies with the individual.”