The Health of Self-Correction
Among the most important properties of any institution dedicated to the pursuit of truth is what the doctrine calls corrigibility: the capacity to receive criticism, examine itself, revise course, and acknowledge error without requiring total collapse as the price of honesty. Corrigibility is rare. It is uncomfortable. It requires the willingness to sacrifice the appearance of reliability for the reality of it — to acknowledge, publicly, that the institution has been wrong, in the conviction that honesty serves its purposes better than the performance of infallibility.
The history of science and medicine offers instructive examples of both institutional failure and institutional self-correction — of errors maintained past the point where the evidence required revision, and of the processes by which, eventually, revision was achieved. These examples are not merely historical. They are models of what the serious institution looks like when it is working well, and cautionary tales for what it looks like when it is not.
The Long Resistance to Germ Theory
The germ theory of disease — the proposal that specific infectious diseases are caused by specific microorganisms — is now so fundamental to medicine that it is impossible to imagine the field without it. But its acceptance was neither rapid nor smooth. When Ignaz Semmelweis demonstrated in the 1840s that childbed fever could be dramatically reduced if physicians washed their hands before delivering babies, the medical establishment resisted. When Louis Pasteur and Robert Koch established the germ theory in the 1860s and 1870s, significant resistance continued.
The resistance was not entirely irrational. The dominant theory — miasma theory, which attributed disease to 'bad air' and corrupt environmental conditions — had explanatory reach, and the evidence for germ theory, compelling as it was, required the abandonment of a framework that had organised medical understanding for centuries. The difficulty was not simply intellectual; it was also professional and social. Physicians who had built careers and reputations on miasmatic explanations were being asked to concede that their framework was wrong and that its practical recommendations had been, in some cases, harmful.
The self-correction came, but it came slowly and unevenly. Pasteur and Koch's work was eventually accepted. Lister's application of antiseptic principles to surgery, grounded in germ theory, produced reductions in post-surgical mortality so dramatic that the profession could not ignore them. The institutions of medicine — the journals, the professional associations, the medical schools — eventually revised their teaching and practice. The process took decades and cost lives in the interval.
The lesson the doctrine draws from this history is not contempt for the physicians who resisted. They were working within the best framework available to them, with the cognitive tools that the sociology and psychology of their profession provided. The lesson is about institutional structure: the importance of building into institutions the mechanisms — transparent review, adversarial critique, accountability for outcomes — that accelerate the process of self-correction when it is needed.
The Replication Crisis and Contemporary Science
Contemporary science is in the midst of its own reckoning with institutional failure, in the form of what has been called the replication crisis. Beginning around 2011, researchers in psychology and medicine began systematically attempting to replicate high-profile published findings, and found that a disturbingly high proportion could not be replicated: the effects were smaller than originally reported, or absent entirely, or dependent on conditions not clearly specified in the original papers.
The crisis was not the result of widespread fraud, though fraud played a role in some cases. It was the product of institutional incentive structures that systematically rewarded the publication of positive, novel, surprising findings at the expense of accuracy and replicability. Journals strongly preferred statistically significant results, creating pressure to produce them. 'P-hacking' — the manipulation of analysis choices until a nominally significant result emerges — was widespread, often without any conscious intent to deceive. And in the absence of pre-registration, hypotheses could be formulated after data collection, in the light of what the data showed, without this being disclosed.
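The statistical mechanism behind p-hacking can be made concrete with a small simulation. The sketch below (an illustration, not drawn from any study described here) assumes a researcher who measures several outcomes in each study and reports a finding if any of them clears the conventional significance threshold, even though every outcome is pure noise. The study sizes, number of outcomes, and test are illustrative choices.

```python
import math
import random

random.seed(0)

def z_test_significant(sample, critical=1.96):
    """Two-sided z-test of the sample mean against a true mean of 0,
    with the standard deviation known to be 1 (threshold ~ p < .05)."""
    n = len(sample)
    z = (sum(sample) / n) / (1 / math.sqrt(n))
    return abs(z) > critical

def run_studies(n_studies=10_000, n=30, n_outcomes=5):
    honest = hacked = 0
    for _ in range(n_studies):
        # Every outcome is pure noise: the null hypothesis is true.
        outcomes = [[random.gauss(0, 1) for _ in range(n)]
                    for _ in range(n_outcomes)]
        # Honest analysis: one pre-registered outcome, tested once.
        if z_test_significant(outcomes[0]):
            honest += 1
        # P-hacked analysis: test every outcome, report if ANY is significant.
        if any(z_test_significant(o) for o in outcomes):
            hacked += 1
    return honest / n_studies, hacked / n_studies

honest_rate, hacked_rate = run_studies()
print(f"false-positive rate, single pre-registered test: {honest_rate:.3f}")
print(f"false-positive rate, best of 5 outcomes: {hacked_rate:.3f}")
```

The pre-registered analysis produces false positives at roughly the nominal 5 per cent rate; the flexible analysis, with only five outcomes to choose among, produces them at roughly four times that rate. This is why pre-registration — fixing the hypothesis and the analysis before the data are seen — is so central to the reforms described below.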
The response of the scientific community to the replication crisis is an example of institutional corrigibility in action. Pre-registration of hypotheses and analysis plans before data collection is now standard practice in many fields. Open data and open materials requirements are being instituted by many journals. Registered Reports — a publication format in which peer review occurs before data collection, ensuring that results will be published regardless of their direction — are becoming more common. The culture of the field is, slowly, shifting toward the practices that produce more reliable knowledge.
The process is incomplete and contested. But the willingness of science — as an institution, if not always as a collection of individuals — to respond to evidence of its own failure with genuine structural reform is precisely the corrigibility the doctrine commends. Institutions that cannot do this do not deserve the trust placed in them. Those that can, earn it.
Any institution claiming to honour truth must remain corrigible.