Paulo Pacheco

AI Anxiety

June 2023

Artificial Intelligence (AI) anxiety refers to the prevailing unease and apprehension surrounding the ongoing development and expanding influence of AI in daily life. The concern has gained prominence in recent years as AI technologies continue to advance, fueled by worries about their potential repercussions, especially those related to employment.[1, 2]

The history of AI anxiety dates back to the development of modern computers, a period marked by widespread trepidation about the new technology, which many viewed as a threat to the essence of being human.[2] However, it is essential to differentiate AI anxiety from computer anxiety. According to Li & Huang (2020), while a computer operates with mechanical precision, AI has the capacity for autonomous decision-making and functions independently of human control. Moreover, AI raises ethical issues in human-machine interactions, something not typically associated with computer anxiety.[3]

Researchers contextualize AI anxiety by situating it within the broader concept of technophobia, defined as an “irrational fear of technology characterized by negative attitudes toward technology, anxiety about the future impacts of advancing technology, and self-admonishing beliefs about their ability.”[4] Technophobia has traditionally been divided into two aspects: computer anxiety and robot anxiety. AI anxiety, however, is recognized as a distinct and independent variable, defined as an overall affective response of anxiety or fear that inhibits an individual's ability to interact with AI.[4, 8]

The fear of the unknown has long accompanied technological advancement. Within the context of AI, anxiety arises not only from the prospect of mass unemployment but also from concerns about machine intelligence, superintelligence, and the responsible use of AI’s power.[6] For instance, a survey conducted by Harris Poll and MITRE in February 2023 found that 78 percent of Americans were concerned about the potential malicious use of AI.[7]

As individuals grapple with a rapidly changing present and an uncertain future, AI anxiety is emerging as a recognized phenomenon.[4] The increased accessibility of generative AI tools such as ChatGPT and the proliferation of headlines speculating about robots taking over jobs contribute to workers' concerns about their future and the relevance of their skills in an evolving labor market.[5] Psychologically, these concerns often stem from the realization that, for many individuals, a job is more than a means of earning a livelihood; it is an integral part of their identity and a source of purpose.[1]

Factors Contributing to AI Anxiety

Several studies have investigated the underlying factors contributing to AI anxiety. Lemay et al. (2020) explored the relationship between an individual’s beliefs about AI and the extent of anxiety related to their technology-based predispositions. Their findings indicate that “AI anxiety runs through a spectrum and is influenced by real, practical consequences of immediate effects of increased automation but also influenced by popular representations and discussions of the negative consequences of artificial general intelligence and killer robots.”[4]

The following sections explore the specific elements that contribute to this phenomenon, drawing on insights from the investigations conducted by Johnson & Verdicchio (2017) and Li & Huang (2020).

Sociotechnical Blindness

Johnson & Verdicchio (2017) observed that AI anxiety generally involves an abstraction of AI technology out of the context of its real-world use and existence, ignoring the human beings, social institutions, and arrangements that give AI its functional capacity and meaning. The behavior of AI programs gains significance only in relation to the technological systems that serve specific human purposes.[2]

The authors draw a distinction between AI programs and AI sociotechnical systems. While programs are just lines of code, AI systems consist of code and the context in which it is used. Therefore, AI anxiety generally emerges from a focus on AI programs, “leaving out of the picture the human beings and human behavior that create, deploy, maintain, and assign meaning to AI program operations.” Sociotechnical blindness results in an incomplete understanding of AI as a system that operates in conjunction with people and social institutions. Consequently, it can lead to the development of unrealistic scenarios by omitting the essential human dimension within the broader system.[2]

Confusion About Autonomy

Considering AI as autonomous without defining the parameters of autonomy is another factor contributing to AI anxiety. The concept of autonomy differs significantly when applied to humans as compared to computational entities. In the human context, autonomy is associated with the capacity to make decisions, choices, and take actions, all of which are linked to the concept of human freedom. Traditionally, these attributes have been used to distinguish humans from both living and nonliving entities and have served as a basis for morality. As highlighted by Johnson and Verdicchio (2017), “only beings with autonomy can be expected to conform their behavior to rules and laws.”[2]

Another noteworthy point made by the researchers is that when non-experts hear about machine autonomy, they tend to ascribe to it the same characteristics as human autonomy, including the freedom to choose one's behavior. This misperception becomes relevant in discussions about “autonomous” AI. Computational autonomy, in which the programmer cannot know in advance the precise outcomes of the programs, is rooted in software and hardware processes and does not correspond to human autonomy unless one accepts computationalism.[2]

Technological Development

Additionally, the authors point out that an inaccurate conception of technological development contributes to AI anxiety. Specifically, they note that “futuristic AI scenarios jump to the endpoint of a path of technological development without thinking carefully about the steps necessary in order to get to that endpoint.”[2]

The path of AI development in the future is uncertain, with each step along the way requiring human decisions. Although human decision making might decrease as AI advances, it will be part of the overall process in one way or another. Neglecting the role of human involvement in technological development can lead AI futurists to focus their narratives around superintelligent AI that evolves into a dangerous entity disconnected from human activities.[2]

Eight Anxieties

Finally, Li & Huang (2020) identified eight specific anxieties that may contribute to the overarching issue of AI anxiety.

Identifying and Managing AI Anxiety

Recognizing AI anxiety can be challenging since its symptoms can overlap with those of occupational stress and general anxiety. However, there are some indicators that signal this particular type of technological anxiety.[1]

Several strategies can effectively address AI anxiety. A comprehensive approach involves acknowledging that humans have consistently adapted to major changes and shifts brought about by technological development.[6] Other recommended measures include contextualizing the technology, understanding its actual capabilities, and keeping the focus on the human element.

As technology advances and acquires more human-like characteristics and abilities, a tension arises between creation and creator, manifesting as anxiety in the latter. This phenomenon echoes previous technological developments, with artificial intelligence standing as its latest expression. As in past instances (e.g., computers and robotization), the anxiety individuals experience stems from a perception of diminished value and a fear of job loss, while the potential for new opportunities that technology can bring is overlooked. To ameliorate this apprehension surrounding AI, it is essential to shift focus toward the human element, contextualize the technology, and understand its true capabilities, so that future developments can be navigated with a more informed and resilient perspective.

References

  1. Mind Help (2023). AI Anxiety: Why People Fear Losing Their Jobs to AI and ChatGPT? National Anxiety Month. Mind Help. https://mind.help/news/can-ai-anxiety-have-consequences-on-our-mental-health
  2. Johnson, D. G. and Verdicchio, M. (2017). AI Anxiety. Journal of the Association for Information Science and Technology, 68(9): 2267-2270. https://doi.org/10.1002/asi.23867
  3. Li, J. and Huang, J.-S. (2020). Dimensions of Artificial Intelligence Anxiety Based on the Integrated Fear Acquisition Theory. Technology in Society, 63. https://doi.org/10.1016/j.techsoc.2020.101410
  4. Lemay, D. J., Basnet, R. B. and Doleck, T. (2020). Fearing the Robot Apocalypse: Correlates of AI Anxiety. International Journal of Learning Analytics and Artificial Intelligence for Education, 2(2): 24-33. https://doi.org/10.3991/ijai.v2i2.16759
  5. Cox, J. (2023). AI Anxiety: The Workers Who Fear Losing Their Jobs to Artificial Intelligence. BBC. https://www.bbc.com/worklife/article/20230418-ai-anxiety-artificial-intelligence-replace-jobs
  6. Schmelzer, R. (2019). Should We Be Afraid of AI? Forbes. https://www.forbes.com/sites/cognitiveworld/2019/10/31/should-we-be-afraid-of-ai/
  7. Wijayaratne, S. (2023). Are the Headlines About How AI Is Changing the World Stressing You Out? Everyday Health. https://www.everydayhealth.com/columns/my-health-story/are-the-headlines-about-ai-changing-the-world-stressing-you-out/
  8. Wang, Y.-Y. and Wang, Y.-S. (2019). Development and Validation of an Artificial Intelligence Anxiety Scale: An Initial Application in Predicting Motivated Learning Behavior. Interactive Learning Environments, 30(4): 619-634. https://doi.org/10.1080/10494820.2019.1674887
  9. Brower, T. (2023). People Fear Being Replaced by AI and ChatGPT: 3 Ways to Lead Well Amidst Anxiety. Forbes. https://www.forbes.com/sites/tracybrower/2023/03/05/people-fear-being-replaced-by-ai-and-chatgpt-3-ways-to-lead-well-amidst-anxiety