Technological change is by no means a new phenomenon. What is new, however, is the accelerating pace of change. Between the early development of writing (c. 3200 BC) and the invention of the printing press (c. 1450), nearly 5,000 years elapsed. Between the discovery of gunpowder (c. 1000) and the harnessing of nuclear energy (c. 1945), nearly 1,000 years passed. Between the advent of photography (c. 1825) and television (c. 1925), 100 years elapsed. Between the discovery of the structure of DNA (1953) and the development of CRISPR gene-editing technology (2012), fewer than 60 years passed. Between the introduction of mobile telephony (c. 1977) and the public emergence of the internet (c. 1991), less than 15 years elapsed.
More recently, it seems that new high-impact technologies are being introduced every few months. In a matter of a few years, data analytics became one of the most in-demand skills in the global job market. Blockchain appeared poised to render traditional currencies and investment instruments obsolete. Non-fungible tokens (NFTs) created new markets for digital art. Messenger RNA (mRNA) techniques enabled the rapid development of COVID-19 vaccines.
Uncertainty and apprehension accompany any major technological change. Since its emergence in the mid-twentieth century, computing has been one of the most feared emerging technologies. One branch of computing, artificial intelligence, has even prompted existential fears about the future of humankind. These concerns have been amplified recently by the rapid development of generative AI, which has stunned the world with applications that seem to understand, write, and draw as if they were human.
Some people (including Alan Turing, the father of modern computing; Elon Musk, the head of Tesla and owner of X, formerly Twitter; and Sam Altman, the CEO of OpenAI, the company behind ChatGPT) have argued that we should be wary of machines that can think and act faster than humans, as they could lead to an “extinction event.” According to this view, humans dominate all other species due to their superior intelligence. Our position as the dominant species would be threatened if machines more intelligent than humans were to emerge.
While the risk of human extinction may seem exaggerated, there are far more concrete dangers. For instance, AI responses or actions may exhibit racial, religious, ideological, gender, or other biases that are not easily identifiable and could affect everything from staff selection to matchmaking to facial recognition. There is also the potential for synthesizing realistic video and cloned voices, which could fuel more widespread misinformation and manipulation. And there is the challenge posed to intellectual property as we know it.
For me, the most significant disruption we will face in the era of generative AI is an epistemological one. Generative AI, with its ability to write (as well as summarize, translate, and edit), could bring profound changes to how we relate to knowledge.
For example, there will be less need to read full texts, since readers will be able to automatically generate summaries and get answers to specific questions. If full-text reading falls into disuse, it is difficult to predict the repercussions for human comprehension. It could exacerbate the “knowledge illusion,” a dangerous cognitive bias widely studied in psychology, in which people who access superficial and fragmented information about a topic (such as social media posts) believe they know more than they actually do.
In a world where generative artificial intelligence is accessible to everyone, writing will become less necessary, as machines will automatically produce the text we need. This may seem useful, but it is not without its dangers.
Writing and thinking are two intertwined cognitive activities. By revising what we write, we also revise and refine what we think. It is not possible to write without thinking. Writing allows us to shape, organize, and confirm our thoughts, which may initially seem superficial and unconnected. Writing gives structure to thoughts by creating sentences, paragraphs, pages, and stories.
Writing skills are an expression of our ability to organize thoughts, evaluate ideas, reason, and reflect. By delegating the task of writing to machines, we are also handing over the process by which we construct our thinking, generate our learning, and develop our critical faculties. If our ability to think is weakened, what will happen to our originality and imagination? It would not be the first time a technology devalued a human skill: muscle power was devalued by mechanical and electrical power, speed by motor vehicles, mental arithmetic by calculators, and map reading by GPS systems.
In a world where writing is no longer necessary (or is only a minority craft), the greatest risk is that the overall quality of thought will decline, with consequences that are hard to predict. We should use these tools to sharpen our skills, not replace them. Ultimately, the danger is not that machines will become more sophisticated, but that humans will become less intelligent. It is this epistemological “extinction event” that should concern us.