Civic Illumination Paper

Artificial Intelligence — Promises and Risks

How to think clearly about the most consequential technology of our time

Neither Utopia Nor Apocalypse

Artificial intelligence has generated a public discourse that is, by turns, breathlessly celebratory and catastrophically fearful — and not infrequently both, from the same source within the same week. This is not surprising. AI is a genuinely consequential technology, and the extremes of hope and fear are natural responses to genuine uncertainty about the shape of its effects. But they are not, in their most florid forms, epistemically useful. The serious seeker who wants to understand what AI is, what it can and cannot do, and what responses to it are proportionate must navigate between the promotional hyperbole of those who build and sell it and the existential warnings of those who fear it most.

The doctrine holds that the appropriate response to any powerful technology is the same as the appropriate response to any powerful fact: disciplined engagement, honest assessment of what is known and unknown, and the willingness to hold both the genuine promise and the genuine risk without collapsing the tension prematurely into either pure celebration or pure alarm. This is harder than either extreme, and it is more useful.

What Current AI Actually Is

Contemporary artificial intelligence, in its most visible and consequential forms, is dominated by a class of systems called large language models, together with related systems for image generation, speech recognition, and multimodal processing. These systems are trained on vast quantities of data — text, images, code — using a machine learning technique called deep learning, in which artificial neural networks with billions or trillions of parameters are optimised to predict the next token in a sequence or to minimise a loss function over a training distribution.
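The "predict the next token" objective can be made concrete with a deliberately tiny sketch. Here a bigram count table stands in for the neural network (real systems use networks with billions of parameters, not count tables), and the cross-entropy loss is the average negative log-probability the model assigns to each token that actually came next. The corpus and function names are illustrative, not any real library's API.

```python
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_prob(prev, nxt):
    """P(next token | previous token) under the bigram count model."""
    counts = follows[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

# Cross-entropy loss: average negative log-probability assigned to
# the token that actually followed, over every adjacent pair.
loss = -sum(
    math.log(next_token_prob(prev, nxt))
    for prev, nxt in zip(corpus, corpus[1:])
) / (len(corpus) - 1)

print(f"average next-token loss: {loss:.3f}")
print("P(sat | cat) =", next_token_prob("cat", "sat"))
```

Training a large language model amounts to adjusting parameters so that this kind of loss falls across a vast training distribution; the fluency described above is what emerges when the "count table" is replaced by a deep network and the corpus by a large fraction of the written record.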

The resulting systems are genuinely remarkable. Large language models can produce fluent, contextually appropriate text on almost any topic; can engage in multi-step reasoning about complex problems; can write, debug, and explain code; can translate languages; can summarise documents; can answer questions about specialist domains with a competence that frequently exceeds that of non-experts. These capabilities have emerged not from explicit programming but from the statistical structure of the training data — from the patterns in vast amounts of human-generated text.

What current AI systems are not is equally important to understand. They are not conscious. They do not have desires, intentions, or beliefs in any meaningful sense. They do not understand language as humans do — they model its statistical structure. They hallucinate: they produce plausible-sounding false claims with the same fluency as true ones, without reliable signals that indicate when they are doing so. Their capabilities are narrow in ways that are not always obvious: systems that perform impressively on some tasks fail in surprising ways on others. They are, in the current state of development, better characterised as powerful and versatile tools than as autonomous agents.

The Genuine Promises

The genuine promises of AI are substantial and deserve honest acknowledgement. In medicine, AI systems have already demonstrated performance in diagnostic imaging — detecting cancers, retinal diseases, and neurological conditions — that equals or exceeds that of specialist clinicians in specific narrow tasks. Drug discovery applications are accelerating the identification of candidate therapeutic compounds. AI-assisted protein structure prediction, exemplified by DeepMind's AlphaFold, has solved a problem in structural biology that had resisted decades of effort and will accelerate the development of new medicines.

In education, AI tutoring systems offer the prospect of genuinely personalised learning at scale — responsive to the individual learner's current understanding, patient, always available, and capable of generating practice problems and explanations adapted to the specific difficulties each learner is encountering. The potential to provide high-quality educational support to learners in contexts where human expert teachers are scarce is significant and could contribute to reducing global educational inequality.

In science and research, AI systems are accelerating the processing and analysis of data at scales and speeds that human researchers cannot match, identifying patterns in complex datasets from genomics to climate science, and assisting in the synthesis of vast scientific literature. The acceleration of discovery that AI may enable across multiple scientific domains could be among the most consequential contributions of the technology to human welfare.

The Genuine Risks

The genuine risks are also substantial and deserve the same honest engagement. The displacement of workers by AI automation is not a future scenario. It is an ongoing process, and the historical analogy to previous waves of automation — which ultimately created more jobs than they destroyed — may not hold if AI automation extends deeply and rapidly into cognitive work in ways that previous automation did not. The distributional consequences — which workers and communities bear the costs of transition — are a serious concern that requires serious policy attention.

The use of AI in surveillance, manipulation, and the generation of disinformation poses risks to epistemic freedom and democratic functioning that are already visible and are likely to intensify. The capacity to generate convincing synthetic media — deepfake video and audio, synthetic text indistinguishable from human writing — at scale creates a threat to the shared information environment on which democratic deliberation depends. The doctrine regards the corruption of public speech as among the most serious civic dangers, and AI-enabled disinformation is a powerful instrument of such corruption.

The concentration of AI capability in a small number of large corporations and nations raises questions about the distribution of economic benefit and the concentration of social power that require governance responses that are not yet adequate. The doctrine holds that knowledge gained for purposes of domination rather than service represents a moral failure — and AI capability concentrated without accountability creates exactly that risk.

The engagement this technology deserves is not awe and not panic. It is what the doctrine calls the Measures of Clarity: careful, precise, honest assessment of what is the case, what the evidence supports, what the genuine stakes are, and what disciplined response looks like in practice.

Knowledge, power, tools, and technology shall be handled with restraint, foresight, and moral gravity.