The Uncertain State of Mortality: Reconsidering the Intricacies of Death in the Context of the First Human-to-Human Heart Transplant Surgery

Ashley Clark

This article examines the legacies of the first human-to-human heart transplant, performed by Christiaan Barnard and his team on 3 December 1967 at Groote Schuur Hospital in Cape Town, South Africa. It emphasises the procedure's significance from a socio-medical standpoint: its success not only showed that medical practitioners could perform a previously unthinkable surgery such as a heart transplant, but also encouraged an innovative discipline which interprets the process of death in line with advances in medical technology and understanding. By examining a series of ideological standpoints which theorise the multi-faceted concepts of brain death, this article highlights how the brain has replaced the heart as the centre-point for ascertaining death. In summary, it gives agency to a topic which is largely unexplored but remains imperative to understanding how the concept of brain death has become a medically and socially accepted ideology; one that is fundamental to concluding whether an individual is dead or alive.

Keywords: Electroencephalogram, Human heart transplants, Brain death, Cape Town Symposium, Donor, Christiaan Barnard, The Harvard Committee, The President's Commission

Author Biography

Ashley Clark is a graduate of the University of Derby. He received First Class Honours in History, which sparked a passion for medical history. He is currently completing a Postgraduate Diploma in Secondary Education (History) at the University of Birmingham before beginning his teaching career as a secondary school History teacher at Birmingham Ormiston Academy. His interest in this field stems from the opportunity to produce valuable work in a significant yet unexplored area of history.


The concept of death has various interpretations that are shaped by cultural and spiritual connotations. The success of the first human heart transplant, received by Louis Washkansky on 3 December 1967, remains one of the most ground-breaking medical procedures of the twentieth century; it led to, as Martin S. Pernick suggests, fraught considerations about the meaning of death because of the possibilities now posed by successful heart transplant surgery. Pernick highlights how this surgical breakthrough was the latest stage in a long history of controversies about death and its interpretations.[1] The technological innovations in medicine that appeared in the post-Second World War period, particularly in the 1960s, partnered with the tremendous success of the first heart transplant in 1967, created a new discourse in the field of death; one that shifted the focus from the heart as the central system of the body to the brain as having the utmost importance when determining an individual's state. Whilst there are many different types of 'brain death', such as those explored in Ben Sarbey's article on death and its definitions, this latest development allowed contemporaries and historians to approach the matter of death from a completely different angle.[2] This new perspective is displayed by historian Robert Veatch in his compelling summary of death: 'death is the irreversible loss of that which is essentially significant to the nature of man.'[3] Despite the multiple ambivalent stances on mortality, an undeniable factor is the emergence of medical technologies and their ability to reconstruct the traditional paradigm. Advances in medical science and equipment are responsible for brain death becoming a legally acknowledged alternative to the traditional cardiopulmonary definition,[4] which remained unchallenged until the success of the heart transplant in 1967.
This article will analyse the effects of the first human-to-human heart transplant and how this procedure acted as a driving force in the process of redefining death. It will use scholarly observations from contemporary surgeons and specialists in the cardiac field such as Dr Denton Cooley, Professor Christiaan Barnard, Dr Adrian Kantrowitz, and Dr Pierre Grondin, all of whom were present at the Cape Town Symposium in 1968 to discuss the legacies of, and their experiences with, human heart transplantation. Observations made by these professionals, amongst others, will support the argument that compromising on a valid and accepted meaning of death was both necessary and effective.[5] Addressing these features will convey the overall theme of the article, which highlights the significance of the heart transplant and its ability to encourage evolved socio-medical doctrines about death and its occurrence.

The Heart Transplant and the Foundations of Death in a Modern Context

The success of Professor Christiaan Barnard and his surgical team on 3 December 1967 remains one of the most ground-breaking innovations in medicine.[6] A striking observation about Barnard's success is not the technical skill required to perform the operation but rather the location where it was conducted. Barnard had spent years observing pioneers in the surgical world. Witnessing the legal setbacks that were apparent in the US, Barnard remarked to a pump technician working there that 'you have too many prohibitions to negotiate before you can find a donor. We have no such obstacles in South Africa.'[7] Barnard knew that the agreement of only two doctors was required to declare death through irreversible brain damage, and he sensed an opportunity in South Africa; one that could exploit the less restrictive legislation around death, as the regulations in the US were firmer and more established.[8]

Unorthodox requirements necessary to complete a heart transplant, such as finding an appropriate number of willing donors and removing a beating heart from a patient in a persistent vegetative state who would still traditionally be labelled 'alive', created an ethical grey area. The realisation that donors were needed to make the procedure practicable generated uncertainty in both the public and professional spheres, mainly because giving an individual donor status is essentially synonymous with declaring death. Despite the proliferation of successful heart transplants since Christiaan Barnard's success in 1967, this field remained largely unexplored and left contemporary professionals and members of the public anxious about the long-term effects of heart replacements. To tackle these social and ethical issues around death, transplant surgeons advocated a revised definition of death which was compatible with the revolutionary surgical innovations of the period. Medical transformations such as the ventilator ensured that key organs such as the heart and lungs could now be maintained artificially. Transplant surgeons believed that the traditional criteria of death were now invalid and that a definition of death centred on the brain was more appropriate for the evolving medical climate. Thus, the growth of human heart transplants became pivotal to debates on death and how it is defined.

The development of hospital equipment throughout the twentieth century was critical to the success of heart transplants. The ventilator gives patients with serious brain injuries a chance of recovery by providing the body with enough oxygen so that the heart continues beating and circulating oxygenated blood.[9] In the context of heart transplants, this machine sparked mass debate, as death by cardiopulmonary means was now reversible and, as David Rodriguez Arias has suggested, the presence of the ventilation machine contributed to a readjusted approach to determining death.[10] As heart transplants were now in the public domain, Martin Pernick highlights the emergence of two key concerns for both the public and professional spheres: the fear of being pronounced dead prematurely and therefore overhastily designated an organ donor, and the fear of being kept alive too long as a 'vegetable' with severe, irreversible brain damage.[11] Concerns like these were used by transplanters to highlight the ambiguities of death, and because the heart transplant was used to treat heart failure, the biggest killer of the period, Barnard justified the extraction of organs from brain-dead patients on life support by suggesting that surgeons were taking organs from people who existed in a no-man's-land between life and death.[12] The feasibility and availability of donors were now becoming a priority, as this would help protect the longevity of heart transplants, and the ventilator served as an inadvertent obstacle that was as disadvantageous to heart transplantation as it was beneficial, because it could now sustain a heart and allow it to function normally. This created a difficult conundrum: whether to hold out for the unlikely but possible recovery of a patient, or whether to extract a perfectly healthy heart from someone in an irreversible state of brain death to help others with heart failure.

The heart transplant was an interesting development in surgical advances which provided the groundwork for future medical innovations. However, reflecting on unconventional yet innovatory procedures such as heart transplants, modern observers suggest that for an organ transplant to be considered ethical, policies on organ procurement should ignore neither the vital needs of recipients nor the dignity and interests of donors. This acknowledgement was also recognised before and immediately after the heart transplant's debut in 1967.[13] Concerned with the ethics of donor/recipient safeguarding and the intricacies of the procedure itself, contemporary physicians and transplant surgeons realised that they also had to dispel any perception the public might hold of surgeons as self-indulgent figures ready to ravenously snatch organs from helpless victims.[14] This perceived selfishness is highlighted in contemporary newspapers such as The Times, which implied that heart transplants had become a point of national rivalry, one that defied the Hippocratic approach[15] upon which Western medicine is built.[16] Furthermore, the heart transplant was regarded as an unworkable procedure because of its uncertain prospects. The operation still carried the label 'palliative'[17] and, because of the huge risks that shadowed it, controversies lingered over whether it should be proliferated and normalised.[18] A strong realisation that supported the anti-heart transplant notion is that, despite the procedure's initial success, criticism began to surface when the public realised that transplant surgeons could not always control organ rejection.[19] This was a valid argument, as rejection was the biggest killer of heart transplant recipients. However, Lord Morris, writing in The Times in 1969, believed that withdrawing from performing heart transplants because of the fear of rejection and other issues reflected a defeatist attitude, one that did not complement the revolutionary medical understandings stemming from Barnard and his team.[20] Morris believed that the success of Barnard's second heart transplant patient, Dr Blaiberg, who lived on for eighteen months post-operation, served as a triumphant leap into a complex field, and that it would be disreputable to halt this approach indefinitely due to fears of the unknown and other prejudices. This contemporary proposal is supported by experts David Cooper and Denton Cooley,[21] who affirm that, considering the unexplored area of immunosuppressive therapy and a surgical team's inexperience in treating and diagnosing tissue rejection, Barnard's second transplant served as a framework to support explorations into the field of heart transplants and preserve its legitimacy.[22]

Revolutionary advances in medicine in the late 1960s added pressure to create a new definition of death, one that reflected medical innovations and understandings. The traditional criteria of cardiopulmonary death were no longer compatible with situations created by medical equipment that could now reverse conventional means of death, and, as Hershenov has implied, individuals who would have been considered dead in another era [were] now sometimes 'returnable.'[23] The ground-breaking impact of Barnard's procedure, partnered with advances in medical technologies, created a desire to protect the procedure's long-term prospects. As heart transplants were now recognised as a practical option for treating patients with heart failure, the need to redefine death was imperative to organ transplant advocates, since the heart and lungs could now be regulated artificially. Not only did this make the traditional concept of death obsolete, it also created an urgency to reconsider existing medical terminology. Professor of Social Medicine David Rothman acknowledged that once surgeons had 'transplanted a beating heart and the feat was celebrated in the media – the need to redefine death was readily apparent.'[24] As the concept of death had now been thrown into question by transplant surgeons, waiting for heart stoppage was surgically unacceptable. Transplant surgeons believed that waiting for the cessation of the heart to pronounce death was now medically unethical, especially given the new technologies and understandings in this field.[25]

However, whilst heart transplant supporters made convincing suggestions about what constitutes death, obtaining the appropriate number of donors for recipients remained challenging. This is why the meaning of death had to be carefully reassessed to gain support for transplant surgery and increase the number of people willing to donate their organs if they entered a state from which they could not recover. Desmond Smith argued that 'as the need for donors grow larger, the definition of death must be carefully redefined. When are you dead enough to be deprived of your heart?'[26] These socio-medical and ethical questions about heart transplants were the primary motivation behind the Symposium in 1968. At this event, leading surgeons from across the globe came together to discuss how their interpretations in this medical discipline could enhance progress and provide a new method of treating those with serious heart issues, while also extinguishing any critical declarations which might halt the practice of heart transplantation altogether.

Whilst heart transplants proliferated dramatically in the late 1960s, it was the success derived from Barnard's patients that proved the most ground-breaking. Barnard's first patient died eighteen days post-surgery and his second patient lived for a staggering eighteen months post-operation; it is the latter case which is regarded as the cornerstone of securing the future of heart transplants.[27] It was this type of success which helped to re-conceptualise the idea of death, making the topic a sensitive delineation between defining death as the cessation of brain functions or of circulatory/respiratory functions.[28] However, whilst the surgery's success did encourage the idea that the practicality of heart transplants should be taken seriously, a popular reason to reject the operation was the lack of long-term solutions it offered. Opinions and feelings in the post-Barnard era can be witnessed in examples such as John Roper's article, where he argues that:

occasional dramatized success should not oblige the health service to try to meet the disproportionate demands of surgical enthusiasts for scarce medical, technical, and nursing resources implicit in a premature attempt to establish cardiac transplantation as a practicable form of treatment before the basic scientific problems concerned have been brought nearer to solution.[29]

The experimentation in this field was unorthodox, and although contemporary professionals and non-professionals were concerned about the feasibility of the transplant and the self-interest of transplant surgeons, these types of claims inspired the Symposium of 1968. Arguably, to think rationally during this period would be to discourage heart transplants temporarily due to the social and medical complexities they entailed. Contrastingly, transplant surgeons had already descended into the realms of cardiac transplantation and, as Christiaan Barnard suggested, the 'first step' into this field had already been taken and it would, in his view, have been wrong for the patient, humanity, and the common purpose to turn back.[30]

The definition of death remained ambivalent because medicine was developing at a rate that many non-professionals, and even professionals, could not fully comprehend. However, in the context of heart transplants, the 'dead-donor rule' played a central role in justifying the procedure in a legitimately ethical manner.[31] As certain biological functions could now be maintained in brain-dead patients, surgical advocates in this field thought it necessary to amend the meaning of death in light of the medical innovations now present.[32] This desire to reinvigorate the definition of death was acknowledged by professionals at the Symposium. Dr A. P. Rose-Innes declared that 'if we use the conventional point of death only, time is impossibly short for proper preparation for the surgery.' This acknowledgement is mirrored by Dr de Villiers,[33] who applies the revisited definition of death to the selection of donors by stating that 'it must be reasonable to state that only those patients with severe, irrecoverable brain damage who cannot maintain their respiratory function independently, should be regarded as suitable candidates for providing tissues for transplantation.'[34] These observations show the importance of a re-evaluated definition of death and its instrumentality in securing the future of heart transplantation. This change of criteria is fundamental to the procedure, as it ensures that a beating heart can be extracted from a brain-dead patient without any controversy surrounding the state of the donor. According to Professor Adrian Kantrowitz,[35] 'experimentally, it can be shown if one takes a beating canine heart, that the chances of immediate success are much higher. If one waits until the heart has stopped, one can resuscitate such a heart, but it does not perform as well as the heart removed still beating.' He concludes this observation by stating that 'the point at which the potential donor becomes a donor is essentially the point of irreversible brain death, as diagnosed by the experts.'[36] Whilst the steps taken to perform a heart transplant were widely considered unethical and unorthodox during this period, declarations like Kantrowitz's imply that, given the wide range of medical innovations and capabilities present in medicine at the end of the 1960s, it was now inhumane and unethical not to change the traditional criteria, as these conventional methods were outdated. A similar point was made by contemporary surgeon Professor Norman Shumway, who believed that the heart must be in perfect condition before it is extracted from the donor and placed into the recipient.[37] The use of artificial interventions, such as the ventilator, would help preserve the heart before it is extracted for transplantation, ensuring that the organ is not weakened by any deprivation of its functions. The ventilator salvaged life while also identifying those who had become unsalvageable by medicine.[38]

Cardiac transplantation was now gaining a profile that helped the procedure become 'normalised' within everyday medical practice and discipline. The focus on death as a process rather than an event was a popular initiative that helped professionals reconsider their ideological stance. As emphasised by Professor Kantrowitz, 'death is a process just as life is, and there are organs and tissues which die very quickly. If you deprive the brain of oxygenated blood for 3, 4 or 5 minutes, it is irreversibly dead.'[39] Presenting death as a phenomenon that develops over time reflects an idea stressed by philosopher Baruch A. Brody. Brody emphasises that the search for a single definition and criterion made sense when the cessations of the different vital functions were always close in time to one another, because medicine could not protect some functions when others had stopped. However, this was no longer the case, as medicine could now maintain key bodily functions for longer periods.[40] The socio-medical attitudes towards death in the 1960s and 1970s reflect a 'renaissance-like' development in the medical discipline. These ideologies can be witnessed as early as 1915 in the account of a surgeon from Chicago who justified his euthanasia practices through a brain-based concept of life. He asserted that 'we live through our brains… Those who have no brains, their blank and awful existence cannot be called life.'[41] This example portrays an early implementation of brain-based criteria for life and death, and suggests a framework upon which surgeons, physicians, and philosophers of the following decades built their ideology.

The evolving theories around what constitutes death were evident in the 'President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioural Research' report on a 'total brain' standardised definition of death, published in the early 1980s. The report noted that 'the traditional "vital signs" of heartbeat and respiration were merely surrogate signs with no significance in themselves. On this view, the heart and lungs are not important as basic prerequisites to continued life but rather because the irreversible cessation of their functions shows that the brain had ceased functioning.'[42] This acknowledgement shows how the traditional approaches to death were no longer applicable, and how innovative medical machinery encouraged the rapid adoption of a modernised standard that reflected significant breakthroughs in surgery.

The heart transplant, through its public impact on medicine, was single-handedly responsible for an immediate desire to reconsider ideas about death. A surgery that required unorthodox steps to treat terminally ill patients with heart disease laid the foundations for evaluating death in the context of contemporary medical advances.

The Concept of Brain Death and its Fundamental Ambiguity

The concept of death is a field of study with many grey areas, meaning that a concrete definition has not yet been achieved. Death’s ambiguity creates subjective interpretations amongst physicians, transplant surgeons, scholars, and the public. However, scholars tend to analyse the concept of death by attempting to agree on its definition, and then formulate a measurable, appropriate criterion to show that the definition has been fulfilled, resulting in a series of tests to prove that the criterion for death has been reached.[43]

Before attempting to define the topic of death, it must first be acknowledged that a definition and a criterion are two separate entities. A definition sets out the conditions that must be met for a thing or event to answer a certain description, and it is these conditions which must be precise before one can decide which criterion best reflects that they are being achieved.[44] In pursuit of applying brain death as a suitable way of defining death, Robert Veatch implies that death should be understood in the relevant context when defining brain death. He suggests that the question of using the brain death criterion as a valid method for determining death seeks to specify the signs that indicate the relevant definition of death has been met.[45] Adopting this acknowledgement is important, as it is a useful method to apply when attempting to resolve a difficult subject like defining death. It does not offer an impervious definition, but it does offer a method that will provide a meaning of death in response to evolving medical developments. Whilst a definition is not concrete, it is widely acknowledged that when we use a definition of death referenced by a criterion, this is an a priori matter for which empirical evidence is not directly needed or exactly relevant when determining one's demise.[46]

In August 1968, the 'Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death', led by Henry Beecher, produced a report in the Journal of the American Medical Association under the title 'A Definition of Irreversible Coma.' This publication put forth a criterion involving a permanently non-functioning brain for the purpose of achieving a precise definition of death called 'irreversible coma'.[47] It was generally agreed that, as heart transplants increased their social presence, it was essential to the longevity of the operation that the medical community be given some agency when revisiting death's definition.[48] Whilst the concept of brain death was being integrated into conscious thought amongst professionals and non-professionals, and eventually into legislation, this only created more ambiguity around brain-orientated definitions. Although the original focus was on 'total' brain death, which refers to the death of the entire brain, as neurology now served as the centre-point of labelling death, questions were raised about which areas of the brain were considered important and whether an individual is only dead when the last remaining function in the brain ceases. This opened a different avenue of brain-orientated death, one relating to higher brain functions. As the cerebrum is the area of the brain that performs higher-order functions such as interpreting touch, vision, hearing, speech, reasoning, emotions, learning, and control of movement, the cessation of this area was referred to as the determinant of death under a higher standard of brain death.[49] As Robert Veatch explained, 'higher brain death' holds that the key functions of the brain such as memory, consciousness, and personality are what make us a person, and since those functions stem from the cerebral areas of the brain, it is the cessation of these portions of the brain that should be considered equal to the death of a person.[50] These important features of human function were echoed in 1971 when Scottish neurologist J. B. Brierley urged that brain death should be associated solely with the permanent cessation of higher functions rather than the complete loss of all brain functions. He stated that death should relate to the cessation of 'those functions of the nervous system that demarcate man from the lower primates.'[51] Innovations were now being made within the brain-oriented environment which did not make the matter of defining death any clearer than it was under the cardiopulmonary standard. Nevertheless, these were important developments in the field of death, as new theories about human cessation were evolving simultaneously with the expansion of medical capabilities and understanding.

The Committee was very influential during this period and reiterated many contemporary beliefs of those who supported brain death as a discipline. It was an important presence at this time, as it gave legitimacy to the abstract concept of brain death, which before Barnard's success would have been unimaginable. Henry Beecher favoured brain-based criteria, but his priorities were not set on resolving the meaning of death. Rather, he advocated a neurological stance to solve practical problems that he attributed to new technologies, particularly organ transplantation, heart-lung machines, and ventilators.[52] He hoped that the criteria set out by the Harvard Committee would not only increase the number of donors but also defend the entire medical profession against prosecution and accusations that would label medical professionals as 'organ thieves' or 'killers.' Although at this time many of the ideas around securing the future of heart transplants and obtaining large numbers of donors were considered unethical, Beecher was trying to evolve the 'normal' boundaries of death in parallel with accelerating medical change. It can be suggested that, as time and investment were placed into the heart transplant and the concept of brain death, the principles and procedures that encapsulate these disciplines could eventually come to be considered ethical and morally acceptable.[53] William Curran, a member of the Harvard Committee, echoed some of Beecher's ideas by explaining that if the whole-brain criteria became the standard of medical practice, then the law would protect physicians who followed them from malpractice charges, particularly in the context of organ donation.[54] The Committee's commitment to total brain death criteria is explored by historian Karen Gervais, who suggests that Beecher [and his team] may have held this belief because no higher-brain criteria had been suggested at this point, and not just because he held a whole-brain definition of life.[55] Whilst brain death provided a popular alternative to the traditional definition of death as the stoppage of the cardiopulmonary system, the concept of brain death that filtered into contemporary professional discussions would evolve as further attention was focused on understanding the cessation of particular brain functions and their overall importance in death's process.

Before theories of brain death began to dominate both public and professional discourse, it was socially and legally accepted that death consisted of a 'total stoppage of the circulation of the blood, and a cessation of the animal and vital functions consequent thereupon, such as respiration, pulsation, etc.'[56] The various brain-based theories all sought to establish which areas of the brain are decisive in determining irreversibility. However, the term 'irreversibility', which was used by the Harvard Committee in 1968 to identify the death of the brain, is perceived as problematic by neurologist James L. Bernat. To accommodate medical theories and practices with the law, he believed that irreversibility, meaning that a function has stopped and cannot be restarted, should be replaced with permanence. This is because permanence, in the relevant context of brain death, specifies that a function has stopped, will not restart on its own, and that no intervention will be undertaken to restart it.[57] However, this replacement has been criticised by theorists such as R. D. Truog and F. G. Miller, who have suggested that replacing the term irreversibility with the permanence standard is just 'gerrymandering the definition of death.'[58] Even if these criticisms are true to a degree, this example shows just how intricate the study of neurology is in the context of death, and how unexplored and subjective the field of death remains, despite multiple attempts by different theorists to provide a definitive measure of mortality.

Although the Harvard Committee had put forth the idea of whole-brain criteria for defining death consisting of the permanent loss of all brain functions from consciousness to primitive brainstem reflexes[59], the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioural Research presented the ‘total brain’ standard to incorporate this theory into state legislation. This is because it addressed the advances in medicine and technology which could now perform necessary bodily functions. Although the President’s Commission did not have the legal authority to change legislation and purely acted as an advisory group to the President on bioethical issues, the presence of this organisation in 1981 shows how the policy of brain death had infiltrated its way into legal matters; originating from the period which witnessed Barnard’s first heart transplant in 1967, and the Harvard Committee’s report in 1968.[60] However, another theory of brain death that became popular during this attempt to inaugurate the total-brain standard by the President’s Commission is the ‘higher-brain standard’, and while competing with the total standard, the death of the entire brain is sufficient enough for labelling brain death but not entirely necessary under the policies of this new discipline.[61] While total brain death may not be entirely necessary for death, it is applicable in the sense that all cases of total brain death are by default cases of death and in practical purposes, cases of total brain death can be referred to as cases of death by anyone holding a brain-based standard of mortality. 
This is because every case of total brain death would automatically be a case of higher-brain death, meaning that both standards have been met.[62] This could be one of the many reasons why the total-brain standard, rather than one based on the cessation of higher functions, was legally accepted, making it a more politically pragmatic way of separating the states of being alive and dead. The case for total brain death was strengthened by the President’s Commission, which attempted to debunk declarations of higher-brain-oriented death. It argued that definitions focused on the higher functions were too radical, too subjective, and impossible to carry out under the current climate of medical development, as no operational criteria existed to support them.[63]

In response to these proposed definitions of brain death, arguments have arisen that use scientific findings and logic either to support or to denounce brain death theories. Two popular theories that attempt to explain brain death are the Thermodynamics Theory and the Holistic Integration Argument. The first is employed by Jules Korein, who emphasises that when the brain is destroyed, the organism’s critical system has been destroyed; with the destruction of this system, spontaneous fluctuations will irreversibly cause the organism to deteriorate and increase systemic entropy until all functions cease to operate. He further declares that the brain functions lost at the beginning of this process are those responsible for the critical functions of the organism.[64] A similar theory supporting brain death is the Holistic Integration Argument, which declares that an individual who has suffered the complete, irreversible, and irrecoverable loss of brain functions has lost the organism’s central control organ, the centre from which the different parts and functions of the body are monitored and controlled. However, this theory has been analysed in Dieter Birnbacher’s article, where he suggests that if the argument is understood to mean that the brain is strictly responsible for the control of every bodily function, then it is inaccurate, given the capacity of bodily functions to remain intact with artificial support, such as a ventilator, for up to several months.
However, if this theory is interpreted in the spiritual sense that someone who is deprived of vital brain functions is no longer a ‘complete’ individual, then it adds substance to medical debates, as a human without key functions of the brain can hardly be considered complete in the context of the ordinary and necessary elements of life.[65] This interpretation is compatible with both the total and the higher standards of brain death, as both imply that the brain is what matters in the determination of death, while attempting to delineate the areas of the brain responsible for functions considered both natural and essential to one’s existence. The acknowledgement is consistent with ideas announced by specialists in this field such as Charles Culver, Bernard Gert, and James Bernat, who all conclude that whilst removing life support from someone in a persistent vegetative state (PVS) is controversial, the answer should not be reached by deliberately ignoring the important distinction between death and loss of personhood.[66] Whilst this position insists on that distinction, it does not dismiss the loss of personhood as irrelevant: although personhood is a psychological and spiritual concept that can only die in a metaphorical sense, it is this realisation that lends weight to the idea of a person irreversibly losing their individual identity. This observation serves as another paradigm reflecting the ambiguity of death: despite scholarly attempts to reach some form of clarity and coherent conclusion, the debate ultimately complicates matters further, emphasising that the realities of death rely on subjectivity and interpretation rather than objective information.

Deciding on an appropriate definition of death is difficult, and the available testing to support these brain-based theories shares the same problems. As the field of neurology is so vast and largely unexplored, choosing an adequate test that best reflects the proposed theories of brain-oriented death remains controversial, with professionals all struggling to devise a perfect solution. The Quality Standards Subcommittee of the American Academy of Neurology suggested criteria compatible with the evolution of medical capabilities for identifying the occurrence of brain death. It stressed that if a patient was not being maintained by a ventilator, then the individual’s death should be identified by the prolonged loss of circulatory and respiratory functions; if the patient was maintained on a ventilator, however, then the appropriate criterion was the prolonged cessation of measurable clinical functions of the whole brain.[67] These innovative adaptations echo the theories Barnard expressed at the Symposium around the extraction of beating hearts. He implied that if a respirator is removed from a patient to see whether they can breathe independently, as part of the tests to determine death based on the inability to breathe autonomously, is it then necessary to wait for the beating heart to stop before it can be extracted?[68] Drawing parallels between Barnard’s observation and the methods outlined by the Subcommittee, we can see a continuation of the medical discipline first proposed by Barnard in 1968 and formalised by the Subcommittee in 1995.
This is apparent in the realisation that the use of artificial machinery has become obligatory for someone in a critical state, and because of this, the criteria for brain death have become the closest means of measuring a person’s mortal state in a scenario where irreversible cessation has occurred, the damage to the brain is irreparable, and artificial support cannot reverse the occurrence.

Although specialists debate which definition best reflects the state of death, most medical experts suggest that the diagnosis of brain death should be confirmed by tests showing that none of the vital centres in the brainstem is still functioning, repeated frequently to enhance their validity. In addition, most imply that an Electroencephalogram (EEG) is not required: although the EEG is used to monitor brain activity, it can track even the minutest trace of activity, which could interfere with the determination of death under brain death ideology. Brain-oriented advocates, whether of the higher, total, or brainstem standards, would therefore not want an EEG present, as it could confound the results.[69] Despite these complications in testing for brain death, Christopher Pallis has observed that the bedside tests for brain death performed by physicians rarely assess the functions of large portions of the brain, such as the occipital lobes, basal ganglia, and thalamus. He notes that the determination of whole-brain death focuses on assessing brainstem functions, presumably because they are much easier to test for than other ‘higher brain’ functions, whose exact location within the brain and complete cessation are harder to establish.[70] Veatch, meanwhile, implies that while a criterion resting solely on higher-brain notions of death is unlikely to be adopted in the immediate future, the ‘old-fashioned’ view of the total-brain standard is becoming less and less popular as more people realise that not all brain functions must be permanently lost to classify an individual as dead.[71] The desire to reach a completely coherent conclusion about brain death is a natural by-product of ever-evolving ideas and innovations. Its primary aim is a good reception from the public, which plays a key part in the continuation and preservation of both brain death legislation and heart transplants.
These issues were reported on contemporaneously at the Symposium by professionals such as Dr Pierre Grondin,[72] who stated that ‘even though it is very difficult to define the point of no return, or to find some definition of brain death, I think it is important because the public is worried about one thing and that is that we are going to use donors who could be restored.’ The approach taken here shares many similarities with theorists such as Veatch, who seek to preserve their ideology in practice. Grondin echoes much of the contemporary understanding of the preservation of heart transplants and the importance of brain death by concluding that experts in this field must attempt to define death to the best of their abilities, as this is essential to the preservation of human heart transplants; otherwise, the public would always remain critical of the procedure and those performing it.[73] Tests must be carried out with clarity and authenticity, because if a test fails to present any of the key features consistent with brain death, then this standard cannot be diagnosed and certified, as the correct criteria have not been met.[74]

For as long as humans have questioned the concept of death, the heart has acted as the spiritual and biological headquarters of the individual. However, with a proliferation of surgical developments and advanced medical knowledge, the focus on the heart has been replaced by a growing concern with the brain and its functions. Innovations in surgical equipment and the use of artificial machinery presented the idea that the heart, once thought to possess almost mystical qualities, could now be maintained through artificial intervention. Leading professionals such as the Boston neuroscientist Robert Schwab challenged the traditional narrative of the heart, suggesting that ‘the human spirit is the product of a man’s brain, not his heart.’[75] Whilst many professionals and scholars in the field regard this transition as dramatic, Alex Capron, the executive director of the President’s Commission, suggested that a move from a definition based on cardiopulmonary cessation to a brain-based standard was not radical at all; rather, it reflected the recognition and acceptance of new diagnostic measures and equipment that had not been available before and that were no longer compatible with traditional disciplines.[76] According to Capron and the Commission’s ‘Defining Death’ report, the evolving disciplines and attitudes toward brain death merely reflect the advanced capabilities of surgeons and medical equipment; they do not mark a drastic break from the traditional narrative of death but instead offer an innovative alternative approach.[77]

However, whilst these modern ideas and developments have led many to question the occurrence of death in a medical context, they have also created an inquisitive field in which professionals and non-professionals alike have asked what makes them individuals and which components are imperative to natural and ordinary life. These acknowledgements favour the importance of spiritual components, although they do not disregard the biological structure and significance of the individual.[78] The two main theories reflecting these ideas about the spiritual significance of the individual are the Identity Theory and the Functionalist Theory. Firstly, the Identity Theory suggests that ‘states and processes of the mind are identical to states and processes of the brain’.[79] If we take this claim and apply Jeff McMahan’s idea that ‘each of us is essentially a mind’,[80] and our minds are indistinguishable from our brains, then the death of the brain will result in the death of the mind, and therefore the death of the whole person. This theory has been used to support some of the brain-oriented standards, such as the total and higher disciplines of brain death, because the cessation of the cerebral hemispheres, the parts believed to constitute the mind and its functions, will be sufficient for the death of the individual.[81]

Secondly, the Functionalist Theory shifts away from the strict identity of mind and brain and holds that the mind has functions, and that it is these functions which matter, rather than the particular means by which they are carried out. On the Functionalist theory of mind, ‘what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function,’ and it is these functions that control an individual’s key features.[82] This theory similarly supports the higher-oriented definitions of brain death, as it suggests that only the parts of the brain which sustain the mind are important when determining an individual’s mortal state, so that an occurrence of total brain death is not necessarily required. These two theories prove appealing because they reflect development within a modernised concept such as brain death. They mirror the description of the higher-brain standard, which highlights the components of an individual that most would be concerned about, such as memory, consciousness, and personality, seemingly underlining the key elements of humanity and of living a fulfilling life.[83] However, the claim that personhood and personal identity play a key role in the determination of death has been critiqued by scholars such as Veatch, who suggests it is wrong to claim that the higher-brain criteria are based solely on theories of either personal identity or personhood. He acknowledged the possibility that there are living human beings who do not satisfy the various features of personhood and who lack the ability to perform the ‘vital’ functions considered ‘essential’ to one’s existence, in order to highlight the invalidity of prioritising these theories of identity when determining death.
He concluded that if the law is concerned only with whether an individual is alive or dead, then personhood is irrelevant.[84]

Contrastingly, contemporary giants in the field of brain death such as Henry Beecher stated, on the topic of personal identity, that the key functions of an individual are ‘the individual’s personality, his conscious life, his uniqueness, his capacity for remembering, judging, reasoning, acting, enjoying, worrying, and so on.’[85] Following Veatch’s argument, we cannot be certain that these functions all originate in either the cerebrum or the cerebral cortex; however, if we apply ideas similar to Beecher’s in the context of heart transplants, the use of a higher-brain criterion for death would result in a proliferation of organs available for donation because of the less confining standards that would then legally define death.[86] The key emphasis of the identity theories is that the mind is what gives the bodily functions regulated by the brain a purpose, meaning that spontaneous outbreaks of physical movement in a brain-dead person are irrelevant. This idea could reflect a developing attitude to higher brain death: while contemporary medical equipment cannot entirely support these standards, it does help to debunk any idea of ‘hope’ for an individual who is irrecoverable, leaving little room for controversy to linger.

As medical capacity grows, so does the number of theories claiming something ‘new’ or ‘innovative’. Here we can see the rising popularity of the higher-brain standards, whose supporters debate internally which definition best suits the discipline. Some claim that the essence of an individual is the integration of both body and mind, making these two factors the determiners of life and death. Other advocates hold a stricter version, believing that only the mind is important.[87]

Countries across the globe have introduced amended legislation on brain death that reflects their cultural and legal traditions. These differing notions all try to balance continuously developing medical understanding, the consideration of a patient’s loved ones, and the patient themselves. As defining death is a grey area, Veatch has implied that ‘when there is a doubt about which of the definitions to adopt, we should take the safer policy course, especially in matters that are literally life and death.’[88] The safer option is the one we can see being implemented by state officials ever since the emergence of human-to-human heart transplants. Also, while the concept of brain death was initially considered unorthodox in theory and practice, this soon changed once the efficiency and survival rate of heart transplants improved and rejection was no longer an issue. The heart transplant came to be regarded as a suitable and ethical procedure, one that could effectively fight heart disease. National legislation on the definition of death therefore required adjustment to match the ongoing enhancement of medical proficiency. In the early 1980s, drawing on research conducted by the President’s Commission, the United States adopted the Uniform Determination of Death Act (UDDA), under which an individual was dead if they fulfilled one of two listed criteria: either irreversible cessation of circulatory and respiratory functions, or irreversible cessation of all functions of the entire brain, including the brainstem.[89]

However, such legislation does not apply uniformly across all states of the USA. For example, New Jersey has a Declaration of Death Act (1991) that states:

the death of an individual shall not be declared upon the basis of neurological criteria … when the licensed physician authorised to declare death, has reason to believe, on the basis of information in the individual’s available medical records, or information provided by a member of the individual’s family or any other person knowledgeable about the individual’s personal religious beliefs that such a declaration would violate the personal religious beliefs of the individual. In these cases, death shall be declared, and the time of death fixed, solely upon the basis of cardio-respiratory criteria.[90]

These religious and cultural exemptions reflect the idea, stated in Bagheri’s article, that nobody should be declared dead on any form of the brain death standard (often total brain death) if that patient, while competent, has asked to be pronounced dead on the conventional cardiopulmonary standard.[91] The New Jersey Declaration therefore serves as a prime example of the issues that arise when we define death: although countries have attempted to introduce legal guidelines that firmly state the difference between life and death, subjective recognition remains vital when applying these legal requirements. New Jersey, in the context of the law, has now created a situation where ‘there can be living people with dead brains.’[92] Similarly, the Japanese government passed the Organ Transplant Law of 1997, under which individuals can choose, for the pronouncement of their own death, either the definition based on cardio-respiratory cessation or that based on the loss of entire brain function; an enactment that has been labelled ‘pluralism on brain death definition.’[93] As Kimura suggests that in Japanese culture death is not an individual event but a family event, the law empowers the family to confirm or reject the choice made by the patient under certain circumstances, particularly in the context of organ donation.[94]

The act passed in Japan suggests that any transplant-related legislation which ultimately results in the death of the donor should include the opinion of the family in decision-making. These considerations mirror the legislation passed in New Jersey, but in a nation-specific manner that attends to Japanese culture and tradition. Other countries have likewise used their own ideologies and perceptions to formulate their governing laws. In countries such as Canada, the UK, and Switzerland, death is determined under a single brain-based criterion. In Switzerland, the Swiss Federal Act of 2004 states that ‘a person is dead when all cerebral functions, including the brain stem, have irreversibly ceased.’[95]

These legislative procedures reflect differing perspectives around the globe on the controversial matter of establishing death. Bagheri has suggested that everyone should be permitted a conscience clause, stating clearly which definition they would like applied in the event that they suffer irreversible, ongoing brain damage, though this offers only a temporary solution given the current climate and understanding of death.[96] It must be acknowledged that, with ever-evolving medical understanding, surgical disciplines and ideological doctrines must remain open to change in light of curative breakthroughs. Thus, whilst Barnard’s surgical masterpiece influenced the direction of global attitudes toward death, it does not follow that the brain-oriented definitions of death will serve as permanent solutions. Legislation must continue to find the middle ground between applying innovative ideas about death and acknowledging the considerations and emotions of a permanently comatose patient’s loved ones, as prioritising one over the other can lead to great social and ethical issues.


Barnard’s first successful heart transplant in December 1967 prompted a shift in medical discourse, one that stressed the importance of re-defining death. As the conventional definitions of death no longer reflected advances in surgical equipment and understanding, a revised definition was required that would prove compatible with medical developments. Because heart transplants served as a resolution for those with heart disease, the preservation of the procedure demanded a re-evaluated definition of death: extracting a heart under conventional understandings of death would mean that the surgeons performing the procedure were essentially ending one life to preserve another. So, to protect surgeons from legal prosecution and to gain acceptance of heart transplantation as a normal ethical event, a reassessed definition of death focused on the cessation of brain functions, rather than on the traditional cardiopulmonary system, would ensure the longevity of the procedure and of the transplant surgeons’ careers. However, whilst this new way of determining death better reflected medical advances and capabilities, it did not mean that an uncontroversial, objective answer had been reached. As the brain has become the central focus in the determination of death, different theories have surfaced about which areas of the brain are vital to human existence, making the matter of death just as complex as it was before brain death notions were accepted as legitimate disciplines.

Nevertheless, because of heart transplant surgeries, brain death has become the most widely and legally accepted definition of death. Whilst this notion has been recognised as the most suitable way of ensuring the preservation of this procedure, the concept of brain death continues to be shaped by different perspectives and interpretations. Ultimately, these neurological viewpoints served as the perfect replacement for the conventional cardiopulmonary definition; but as theories and advances in surgical equipment evolve, this innovative idea of brain death may continue to be contested by other measures of determining death. In the meantime, however, the first heart transplant procedure and the Symposium in 1968 served as pivotal moments in the seemingly never-ending objective of defining death. Together, they established the need for a revised definition to ensure that the interests of the donor, the donor’s family, and the recipient were given due regard and protection; simultaneously, they also provided the necessary legal protection for transplant surgeons whose reputations depended on the procedure’s success.



Primary Sources

Edited Transcript

Shapiro, H. A. (Ed.), Experience with Human Heart Transplantation: Proceedings of the Cape Town Symposium 13-16 July 1968 (Durban: Butterworth & Co. 1969)

Newspapers and Journals

Atkins, H., ‘Problems in Transplant Surgery’, The Times, Issue: 57281, 19th June 1968

Barnard, C., ‘Patients Moral Courage’ in ‘Surgeon explains first heart transplant’, The Daily Telegraph, Issue: 35031, 11th December 1967

Barnard, C. N., ‘Surgeon Explains First Heart Transplant’, The Daily Telegraph, Issue: 35031, 11th December 1967

By Our Health Services Correspondent, ‘Doctors back brain death concept’, The Daily Telegraph, Issue: 37240, 15th February 1975

Daily Telegraph Reporter, ‘Surgeon attacks “prestige heart transplants”’, The Daily Telegraph, Issue: 35746, 3rd April 1970

Faulkner, A., ‘Brain Death Ruling in U.S.’, The Daily Telegraph, Issue: 35233, 6th August 1968

Hughes, B., ‘Defining death’, The Times, Issue: 60251, 8th March 1978

Kennedy, I., ‘A legal definition of death’, The Times, Issue: 60137, 18th October 1977

Loshak, D., ‘Brain Death is key’, The Daily Telegraph, Issue: 37228, 1st February 1975

Morris, L., ‘Heart Transplants’, The Times, Issue: 57643, 20th August 1969

Mahoney, John, ‘Transplants: the definition of death’, The Times, Issue: 59310, 3rd February 1975

Our Cape Town Correspondent, ‘Barnard’s plans’, The Daily Telegraph, Issue: 35054, 9th January 1968

Our Medical Correspondent, ‘Blaiberg’s death a signal for surgeons to take stock’, The Times, Issue: 57641, 18th August 1969

Our Medical Correspondent, ‘British heart transplant may be too early’, The Times, Issue: 57243, 4th May 1968

Our Medical Correspondent, ‘Cardiology: Transplants assessed’, The Times, Issue: 59861, 15th November 1976

Our Medical Correspondent, ‘Doctors differ on the point of death’, The Times, Issue: 57922, 20th July 1970

Our Medical Correspondent, ‘Heart transplants back in favour, surgeon says’, The Times, Issue: 60722, 12th September 1980

Our Own Correspondent, ‘Doctors’ new definition of death’, The Times, Issue: 57322, 6th August 1968

Our Own Correspondent, ‘Heart Surgeon Hits Back’, The Times, Issue: 57149, 15th January 1968

Our Own Correspondent, ‘Moral difficulties of defining death’, The Times, Issue: 57122, 12th December 1967

Our Own Correspondent, ‘Patient Will Die’, The Daily Telegraph, Issue: 35055, 10th January 1968

Pallis, C., ‘The Criteria for diagnosing death’, The Times, Issue: 59568, 3rd December 1975

Playfair, G., ‘Transplant Rights and Wrongs’, The Sunday Telegraph, Issue: 396, 15th September 1968

Prince, J., ‘Call for brake on heart transplants’, The Daily Telegraph, Issue: 35199, 27th June 1968

Roper, J., ‘Heart surgeons will take stock’, The Times, Issue: 57292, 2nd July 1968

Roper, J., ‘Heart transplants “need more research”’, The Times, Issue: 57455, 10th January 1969

Roper, J., ‘I give extra life Barnard says’, The Times, Issue: 57687, 10th October 1969

Roper, J., ‘Individuals’ rights in transplant surgery’, The Times, Issue: 57352, 10th September 1968

Roper, J., ‘Transplants “should be allowed while heart still beating”’, The Times, Issue: 59308, 31st January 1975

Roy, A., ‘Death of the brain marks end of life, doctors agree’, The Daily Telegraph, Issue: 38457, 26th January 1979

Calne, R. Y. & Skegg, P. D. G., ‘Transplants: a register of objectors’, The Times, Issue: 59313, 6th February 1975

Science Report, ‘Cardiology: Survival after transplants’, The Times, Issue: 59057, 5th April 1974

Smith, T., ‘Whose hand on the life switch?’, The Times, Issue: 59860, 13th November 1976

Taylor, F., ‘Concern for “New Heart” patient’, The Daily Telegraph, Issue: 35038, 19th December 1967

Unknown, ‘Doctor criticises heart transplant “vultures”’, The Times, Issue: 57353, 11th September 1968

Unknown, ‘Indian Heart transplant patient dies’, Aberdeen Evening Express, 20th February 1968, p.3

Warman, C., ‘Heart transplants must go on surgeon says’, The Times, Issue: 57969, 12th September 1970

Welbourn, R. B. ‘Heart Transplants’, The Times, Issue: 57170, 8th February 1968

Wolff, M., ‘Did the Surgeon go too far?’, The Sunday Telegraph, Issue: 358, 24th December 1967

Official Government Papers

‘Defining Death: A Report on the Medical, Legal and Ethical Issues in the Determination of Death’, Published by President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioural Research (1981)

‘Federal Act on the Transplantation of Organs, Tissues and Cells’, The Federal Assembly of the Swiss Confederation (2004)

Secondary Sources


Belkin, G., Death Before Dying: History, Medicine, and Brain Death (Oxford University Press 2014)

Cooper, D. K. C., Christiaan Barnard: The Surgeon Who Dared (UK & US: Fonthill Media 2017)

Jonsen, A. R., A Short History of Medical Ethics (Oxford: Oxford University Press 2000)

Kerrigan, M., The History of Death (London: Amber Books Ltd 2017)

Lock, M., Twice Dead: Organ Transplants and the Reinvention of Death (London: University of California Press 2002)

Edited Books

Lena Hansen, S., Schicktanz, S. (Eds.), Ethical Challenges of Organ Transplantation: Current Debates and International Perspectives (Bielefeld: Transcript, Bioethics/Medical Ethics 2021)

Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.), The Definition of Death: Contemporary Controversies (Baltimore and London: Johns Hopkins University Press, 1999)

Online Resources

Mayo Clinic Staff, ‘Heart Transplant’, Available online: Accessed on: 14/07/22

Practo, ‘The Hippocratic Oath: The Original and Revised Version’ Available online: Accessed on: 08/11/22.


Alivizatos, P. A., ‘Fiftieth anniversary of the heart transplant: The progress of American medical research, the ethical dilemmas, and Christiaan Barnard’, Baylor University Medical Centre, (4) October 2017.

Ave, A. D., Shaw, D., Bernat, J., ‘Defining Death in Donation after Circulatory Determination of Death’, in Lena Hansen, S., Schicktanz, S. (Eds.), Ethical Challenges of Organ Transplantation: Current Debates and International Perspectives (Bielefeld: Transcript, Bioethics/Medical Ethics 2021)

Bagheri, A., ‘Individual choice in the definition of death’, J Med Ethics, 33(3) (2007)

Bernat, J. L., ‘Point: Are Donors After Circulatory Death Really Dead, and Does It Matter? Yes and Yes’, Chest Journal, vol.138, Issue 1. (2010) p.13-16

Bernat, J. L., ‘Refinements in the Definition of and Criterion of Death’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.), The Definition of Death: Contemporary Controversies (Baltimore and London: Johns Hopkins University Press, 1999) p.83-93

Birnbacher, D., ‘Determining Brain Death: Controversies and Pragmatic Solutions’, in Lena Hansen, S., Schicktanz, S., (Eds.)  Ethical Challenges of Organ Transplantation: Current Debates and International Perspectives (Bielefeld: Transcript, Bioethics/Medical Ethics 2021)

Brody, B. A., ‘How much of the Brain must be dead?’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.), The Definition of Death: Contemporary Controversies (Baltimore and London: Johns Hopkins University Press, 1999)

Cooley, D. A., Frazier, O. H., ‘The Past 50 years of cardiovascular surgery’, Circulation, Vol 102, No.4 (2000) p.87-93

Cooper, D. K. C., Cooley, D. A., ‘Christiaan Neethling Barnard, 1922-2001’, Circulation, vol.104, No.23 (2001) p.2756-2757

Goila, A. K., Pawar, M. P., ‘The Diagnosis of brain death’, Indian J Crit Care Med, 13(1) (2009)

Hoffenberg, R., ‘Christiaan Barnard: His First Transplants and Their Impact On The Concepts Of Death’, British Medical Journal, vol.323, No.7327 (2001) p.1478-1480

J. J. C. Smart, ‘The Mind/Brain Identity Theory’, Stanford Encyclopedia of Philosophy (2022)

Jensen, A. M. B., Hoeyer, K., ‘Making Sense of Donation: Altruism, Duty, and Incentives’, in Lena Hansen, S., Schicktanz, S. (Eds.) Ethical Challenges of Organ Transplantation: Current Debates and International Perspectives (Bielefeld: Transcript, Bioethics/Medical Ethics 2021)

Lynn, J., Cranford, R., ‘The Persisting Perplexities in the Determination of Death’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies (Baltimore and London: Johns Hopkins University Press, 1999) p.101-115

Macdonald, H., ‘Considering Death: The Third British Heart Transplant, 1969’, Bulletin of the History of Medicine, Vol. 88, No.3 (2014) p.493-525

Stolf, N. A. G., ‘History of Heart Transplantation: A Hard and Glorious Journey’, Brazilian Journal of Cardiovascular Surgery, 32(5) (2017)

Pallis, C. D. M., ‘On the Brainstem Criterion of Death’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies (Baltimore and London: Johns Hopkins University Press, 1999) p.83-93

Parent, B. & Turi, A., ‘Death’s troubled relationship with the law’, Illuminating the Art of Medicine (December 2020)

Pernick, M. S., ‘Brain Death in a Cultural Context: The Reconstruction of Death, 1967-1981’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies (Baltimore and London: Johns Hopkins University Press, 1999)

Sarbey, B., ‘Definitions of death: brain death and what matters in a person’, Journal of Law and the Biosciences, vol.3, issue 3 (2016) p.743-752

Veatch, R. M., ‘The Impending Collapse of the Whole-Brain Definition of Death’, The Hastings Center Report, Vol.23, No.4 (1993) p.18-24


Holt, N. (dir.) Between Life and Death, BBC Production, 13th July 2010



[1] Pernick, M. S., ‘Brain Death in a Cultural Context: The Reconstruction of Death, 1967-1981’ in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies, (Baltimore and London: Johns Hopkins University Press, 1999) See also: Macdonald, H., ‘Considering Death: The Third British Heart Transplant, 1969’, Bulletin of the History of Medicine, Vol. 88, No.3 (2014) p.495.

[2] For the different types of brain death see: Sarbey, B., ‘Definitions of death: brain death and what matters in a person’, Journal of Law and the Biosciences, vol.3, issue 3 (2016): p.743-752.

[3] Veatch, R., The Whole-Brain-Oriented Concept of Death: An Outmoded Philosophical Formulation, 3 J. THANATOL. 13, 23 (1975). Cited in: Sarbey, B., ‘Definitions of death: brain death and what matters in a person’ p.747

[4] The cessation of adequate heart and respiratory functions which results in death without reversal. This definition became permeable due to the invention of medical instruments which could keep humans alive through artificial means.

[5] Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies, (Baltimore and London: Johns Hopkins University Press, 1999) p.37-67.

[6] For context see: Cooper, D. K. C., Christiaan Barnard: The Surgeon Who Dared (UK & US: Fonthill Media 2017) p.187.

[7] Quote from: McRae, D., Every Second Counts: The Race to Transplant the First Human Heart (New York: G. P. Putnam’s Sons, 2006). Cited in: Alivizatos, P. A., ‘Fiftieth anniversary of the heart transplant: The progress of American medical research, the ethical dilemmas, and Christiaan Barnard’, Baylor University Medical Center Proceedings, 30(4) (October 2017).

[8] Cooper, D. K. C., Cooley, D. A., ‘Christiaan Neethling Barnard, 1922-2001’, Circulation, vol.104, No.23, p.2756-2757.

[9] Macdonald, H., ‘Considering Death: The Third British Heart Transplant, 1969’, Bulletin of the History of Medicine, Vol. 88, No.3 (2014) p.500.

[10] Rodriguez-Arias, D., ‘Together and Scrambled. Brain Death was conceived in Order to Facilitate Organ Donation’, Dilemata, 23 (2017) p.57-87. Cited in: Ave, A. D., Shaw, D., Bernat, J., ‘Defining Death in Donation after Circulatory Determination of Death’, in Lena Hansen, S., Schicktanz, S. (Eds.) Ethical Challenges of Organ Transplantation: Current Debates and International Perspectives (Bielefeld: Transcript, Bioethics/Medical Ethics 2021) p.117.

[11] Pernick, M. S., ‘Brain Death in a Cultural Context: The Reconstruction of Death, 1967-1981’ in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies, (Baltimore and London: Johns Hopkins University Press, 1999) See also: Lock, M., Twice Dead: Organ Transplants and the Reinvention of Death (London: University of California Press, 2002) p.78.

[12] Macdonald, H., ‘Considering Death’, p.501.

[13] Bagheri, A., ‘Individual choice in the definition of death’, J Med Ethics, 33(3) (2007).

[14] Lock, M., Twice Dead, p.79.

[15] The Hippocratic Oath is an oath taken by physicians. It ensures that their actions meet ethical standards and that doctors conduct their work properly so that patients receive the correct treatment.

[16] Our Medical Correspondent, ‘British heart transplant may be too early’, The Times, Issue: 57243, May 4th, 1968.

[17] In this context, a palliative relieves suffering without treating the cause of the suffering. The heart transplant was initially regarded as a palliative because its success was not ensured.

[18] Prince, J., ‘Call for brake on heart transplants’, The Daily Telegraph, Issue: 35199, 27th June 1968.

[19] Daily Telegraph Reporter, ‘Surgeon attacks “prestige heart transplants”’, The Daily Telegraph, Issue: 35746, 3rd April 1970.

[20] Morris, L., ‘Heart Transplants’, The Times, Issue: 57643, 20th August 1969.

[21] David Cooper is an expert in surgical transplantation who has served as a surgeon-scientist and remains predominantly involved in research on xenotransplantation (cross-species transplantation). Denton Cooley was a contemporary American surgeon who, along with Barnard, led some of the ground-breaking medical innovations in transplantation. He was also present at the Cape Town Symposium of 1968, along with other contemporary experts in the medical field.

[22] Morris, L., ‘Heart Transplants’, The Times, Issue: 57643, 20th August 1969. See also: Cooper, D. K. C., Cooley, D. A., ‘Christiaan Neethling Barnard, 1922-2001’, Circulation, vol.104, No.23.

[23] Hershenov D. The problematic role of “irreversibility” in the definition of death. Bioethics. 2003;17(1):89-100. Cited in: Parent, B. & Turi, A., ‘Death’s troubled relationship with the law’, Illuminating the Art of Medicine (December 2020) p.1055. See also: Kerrigan, M., The History of Death (London: Amber Books Ltd 2017) p.12.

[24] Rothman, Strangers at the Bedside (n.3) p.160. Cited in, Macdonald, H., ‘Considering Death’, p.503.

[25] Idea from: Roper, J., ‘Transplants “should be allowed while heart still beating”’, The Times, Issue: 59308, 31st January 1975. See similar ideas in: Our Medical Correspondent, ‘Cardiology: Transplants assessed’, The Times, Issue: 59861, 15th November 1976.

[26] Smith, D., ‘The Heart Market: Someone Playing God’, Nation, 1968, p.721. Cited in: Lock, M., Twice Dead, p.85.

[27] Hoffenberg, R., ‘Christiaan Barnard’, p.1478.

[28] Ave, A. D., Shaw, D., Bernat, J., ‘Defining Death in Donation’, p.117.

[29] Roper, J., ‘Heart transplants “need more research”’, The Times, Issue: 57455, 10th January 1969.

[30] Roper, J., ‘I give extra life Barnard says’, The Times, Issue: 57687, 10th October 1969.

[31] See: Sarbey, B., ‘Definitions of death: brain death and what matters in a person’, p.751.

[32] Birnbacher, D., ‘Determining Brain Death: Controversies and Pragmatic Solutions’, in Lena Hansen, S., Schicktanz, S. (Eds.) Ethical Challenges of Organ Transplantation, p.103-105.

[33] Jacques Charl (Kay) de Villiers was a leading South African neurosurgeon and emeritus professor of neurosurgery at the University of Cape Town. He was also present at the Cape Town Symposium in 1968.

[34] Shapiro, H. A. (Ed.) Experience with Human Heart Transplantation: Proceedings of the Cape Town Symposium 13-16 July 1968 (Durban: Butterworth & Co. 1969) p.38-39.

[35] Adrian Kantrowitz was an American cardiac surgeon who performed the first human heart transplant in the United States, three days after Christiaan Barnard performed the world’s first such operation in December 1967.

[36] Shapiro, H. A. (Ed.) Experience with Human Heart, p.41. See also: Loshak, D., ‘Brain Death is key’, The Daily Telegraph, Issue: 37228, February 1, 1975. See also: Lock, M., Twice Dead, p.89.

[37] Science Report, ‘Cardiology: Survival after transplants’, The Times, Issue: 59057, 5th April 1974.

[38] Belkin, G., Death Before Dying: History, Medicine, and Brain Death, (Oxford University Press 2014) p.53.

[39] Shapiro, H. A. (Ed.) Experience with Human Heart, p.49.

[40] Brody, B. A., ‘How much of the Brain must be dead?’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies, (Baltimore and London: Johns Hopkins University Press, 1999) p.79.

[41] Pernick, M., The Black Stork (New York: Oxford University Press, 1996) Cited in: Pernick, M. S., ‘Brain Death in a Cultural Context’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death, p.7.

[42] Sarbey, B., ‘Definitions of death’, p.746.

[43] Bernat, J. L., Culver, C. M., Gert, B., ‘On the definition and criterion of death’, Ann Intern Med 1981; 94: 389-394. Cited in: Bernat, J. L., ‘Refinements in the Definition and Criterion of Death’, in Youngner, S. J., Arnold, R. M., Schapiro, R. (Eds.) The Definition of Death: Contemporary Controversies, (Baltimore and London: Johns Hopkins University Press, 1999) p.83.

[44] Idea from: Birnbacher, D., ‘Determining Brain Death’, p.104.

[45] Veatch, R., Death, Dying and the Biological Revolution (New Haven/London: Yale University Press 1989) Cited in Birnbacher, D., ‘Determining Brain Death’, p.104.

[46] Birnbacher, D., ‘Determining Brain Death’, p.104-105.

[47] Sarbey, B., ‘Definitions of death: brain death and what matters in a person’, p.744. See also: Hoffenberg, R., ‘Christiaan Barnard’, p.1480. See also: Faulkner, A., ‘Brain Death Ruling in U.S.’, The Daily Telegraph, Issue: 35233, August 6, 1968. See also: Lock, M., Twice Dead, p.90.

[48] Idea from Faulkner, A., ‘Brain Death Ruling in U.S.’, The Daily Telegraph, Issue: 35233, August 6, 1968.

[49] Sarbey, B., ‘Definitions of death’, p.745.

[50] Veatch, R., The Whole-Brain-Oriented Concept of Death: An Outmoded Philosophical Formulation. Cited in: Sarbey, B., ‘Definitions of death’, p. 747.

[51] Pernick, M. S., ‘Brain Death in a Cultural Context’, p.12.

[52] Pernick, M. S., ‘Brain Death in a Cultural Context’, p.9.

[53] Idea from: Beecher, H. K., ‘Ethical Problems Created by the Hopeless Unconscious Patient’, New England Journal of Medicine, 278 (June 27th, 1968) p.1427, cited in: Pernick, M. S., ‘Brain Death in a Cultural Context’, p.9.

[54] New England Journal of Medicine, June 27th 1968, p.1426-29. Cited in: Pernick, M. S., ‘Brain Death in a Cultural Context’, p.13.

[55] Grandstrand Gervais, K., Redefining Death (New Haven: Yale University Press, 1986) p.13 cited in: Pernick, M. S., ‘Brain Death in a Cultural Context’, p.12.

[56] Macdonald, H., ‘Considering Death’, p.501.

[57] Idea from: Bernat, J. L., ‘Point: Are Donors After Circulatory Death Really Dead, and Does It Matter? Yes and Yes’, Chest Journal, vol.138, Issue 1 (2010) p.13-16. See also: Parent, B., Turi, A., ‘Death’s troubled relationship with the law’, Illuminating the Art of Medicine (December 2020) Available Online: Accessed on: 03/10/22. Similar ideas mentioned in: Ave, A. D., Shaw, D., Bernat, J., ‘Defining Death in Donation after Circulatory Determination of Death’, in Lena Hansen, S., Schicktanz, S. (Eds.) Ethical Challenges of Organ Transplantation, p.119.

[58] Truog, R. D., Miller, F. G., ‘The dead donor rule and organ transplantation’, N Engl J Med. 2008;359(7):674-675. Cited in: Parent, B., Turi, A., ‘Death’s troubled relationship with the law’.

[59] ‘A Definition of Irreversible Coma’, Journal of the American Medical Association 205 (1968) p.337-340. Cited in: Pernick, M. S., ‘Brain Death in a Cultural Context’, p.8.

[60] Idea from: The President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Defining Death: Medical, Legal, and Ethical Issues in the Determination of Death (1981). Cited in: Sarbey, B., ‘Definitions of death’. See also: Hoffenberg, R., ‘Christiaan Barnard’, p.1480.

[61] Sarbey, B., ‘Definitions of death’, p.748.

[62] Sarbey, B., ‘Definitions of death’, p.745.

[63] President’s Commission, Defining Death, 38-41, cited in: Pernick, M. S., ‘Brain Death in a Cultural Context’, p.19. See also: Sarbey, B., ‘Definitions of death’, p.745.

[64] Bernat, J. L., ‘Refinements in the Definition and Criterion of Death’, p.86-87.

[65] Birnbacher, D., ‘Determining Brain Death’, p.107.

[66] Bernat, J. L., Culver, C. M., Gert, B., ‘On the definition and criterion of death’, Ann Intern Med 1981; 94: 391. Cited in: Bernat, J. L., ‘Refinements in the Definition and Criterion of Death’, p.89.

[67] The Quality Standards Subcommittee of the American Academy of Neurology, ‘Practice Parameters for determining brain death in adults’, Neurology 1995; 45: 1012-1014. Cited in: Bernat, J. L., ‘Refinements in the Definition and Criterion of Death’, p.85-86.

[68] Shapiro, H. A. (Ed.) Experience with Human Heart Transplantation, p.51.

[69] Idea from: Smith, T., ‘Whose hand on the life switch?’, The Times, Issue: 59860, 13th November 1976. See also: By Our Health Services Correspondent, ‘Doctors back brain death concept’, The Daily Telegraph, Issue: 37240, 15th February 1975.

[70] Pallis C., ABC of Brainstem Death (London: British Medical Journal, 1983) Cited in: Bernat, J. L., ‘Refinements in the Definition’, p.87. See similar ideas in: Pallis, C., ‘The Criteria for diagnosing death’, The Times, Issue: 59568, 3rd December 1975.

[71] Veatch, R. M., ‘The Impending Collapse of the Whole-Brain Definition of Death’, The Hastings Center Report, Vol.23, No.4 (1993) p.24.

[72] Dr Pierre Rene Grondin was a Canadian cardiac surgeon remembered for his efforts in the cardiac field. He was also one of the first doctors to perform a successful human-to-human heart transplant.

[73] Shapiro, H. A. (Ed.) Experience with Human Heart Transplantation, p.48-49.

[74] Idea from: Goila, A. K., Pawar, M. P., ‘The Diagnosis of brain death’, Indian J Crit Care Med, 13(1) (2009) Available online: Accessed on: 06/10/22.

[75] Time, ‘Thanatology’, May 27, 1966. Cited in: Lock, M., Twice Dead, p.92.

[76] Veatch, R. M., ‘The Impending Collapse of the Whole-Brain’, p.19.

[77] President’s Commission (1981) p.59, cited in: Sarbey, B., ‘Definitions of Death’, p.750.

[78] Idea from: Sarbey, B., ‘Definitions of death’.

[79] J.J.C. Smart, ‘The Mind/Brain Identity Theory’, Stanford Encyclopedia of Philosophy Archive (2014) Available online: Accessed on: 10/01/23. See also, Sarbey, B., ‘Definitions of Death’, p.748.

[80] McMahan, J., The Metaphysics of Brain Death, 9 Bioethics 91, 102 (1995) Cited in: Sarbey, B., ‘Definitions of Death’, p.748.

[81] Sarbey, B., ‘Definitions of Death’, p.748.

[82] See: Putnam, H., ‘Psychological Predicates’, in Capitan, W., Merrill, D. (Eds.) Art, Mind, and Religion (1967) p.37-48. Cited in: Sarbey, B., ‘Definitions of Death’, p.749.

[83] Sarbey, B., ‘Definitions of Death’, p.750.

[84] Veatch, R. M., ‘The Impending Collapse’, p.20.

[85] Veatch, R. M., ‘The Impending Collapse’, p.19.

[86] Sarbey, B., ‘Definitions of Death’, p.752.

[87] Veatch, R. M., ‘The Impending Collapse’, p.21-22.

[88] Veatch, R. M., Transplantation Ethics (Washington, DC: Georgetown University Press, 2000) p.69-72. Cited in: Bagheri, A., ‘Individual choice’.

[89] President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Defining Death: A Report on the Medical, Legal and Ethical Issues in the Determination of Death (Washington, DC: US Government Printing Office, 1981) Available online: Accessed 12th December 2022. See also: Parent, B. & Turi, A., ‘Death’s troubled relationship with the law’, p.1056. See also: Ave, A. D., Shaw, D., Bernat, J., ‘Defining Death in Donation’, p.119. See also: Sarbey, B., ‘Definitions of Death’, p.745-746.

[90] New Jersey Declaration of Death Act, N.J. Sess. Law Serv. Ch. 90, 26:6A–5 (1991). Cited in: Sarbey, B., ‘Definitions of Death’, p.746-747. For further mention of New Jersey Act see: Parent, B. & Turi, A., ‘Death’s troubled relationship with the law’, p.1058.

[91] Bagheri, A., ‘Individual choice in the definition of death’, p.146-149.

[92] Complaint, McMath v. Rosen (Cal. Super Ct. Dec. 09, 2015) (No. RG15796121). Cited in: Sarbey, B., ‘Definitions of Death’, p.747.

[93] Morioka, M., ‘Reconsidering brain death: a lesson from Japan’s fifteen years experience’, Hastings Cent Rep 2001; 31: 41-46. Cited in: Bagheri, A., ‘Individual choice in the definition of death’.

[94] Idea from: Kimura, R., ‘Death, dying and advance directives in Japan: socio-cultural and legal points of view’, in Sass, H. M., Veatch, R. M., Rihito, K. (Eds.) Advance Directives and Surrogate Decision-Making in Health Care (Baltimore: Johns Hopkins University, 1998) p.187-208. Cited in: Bagheri, A., ‘Individual choice in the definition of death’.

[95] Swiss Federal Act on Transplantation of Organs, Tissues and Cells, 8th October 2004 (SR 810.21). Cited in: Ave, A. D., Shaw, D., Bernat, J., ‘Defining Death in Donation’, p.119.

[96] Bagheri, A., ‘Individual choice in the definition of death’, p.146-149.

Book Review: Jennifer Mara DeSilva (ed.), The Borgia Family, Rumour and Representation (2020)



Thomas Wood

The Borgias are amongst the most notorious families in history, and their sordid legacy has been a source of interest to historians for centuries. Indeed, a new history of the Borgia family appears in the Italian Renaissance section of bookshops every few years, and all of these additions to the vast corpus of Borgia historiography inevitably tackle the question of the family’s legendary reputation. Tales of corruption, poison, and incest have elicited morbid fascination and forensic historical investigation in equal measure, with the most recent scholarly examination of the Borgia myth being The Borgia Family, Rumour and Representation, edited by Jennifer Mara DeSilva.

This collection of essays forms a fairly comprehensive investigation of the many aspects of the Borgia legend. It draws upon the expertise of twelve researchers who have worked on different aspects of this dark legacy, from Cesare’s alleged assassination of his own brother and Lucrezia’s supposed incestuous relationships, to their father Pope Alexander VI purportedly buying the papacy and later accidentally poisoning himself in a plot to murder a cardinal. It is important to note, however, that this book is not an introductory work. Much of the book presupposes some familiarity with the history of the Borgias and the legends that surround them (two topics that invariably come hand in hand) in order to fully appreciate the scholarship present here. As well as possessing a general working knowledge of the history of the Borgias, readers will find much of Rumour and Representation enhanced by familiarity with certain seminal scholarly works, like that of Michael Mallett, which is clearly in the minds of many of the authors who contribute to this volume and remains to this day a good entry point to the history of the Borgias.[1]

Rumour and Representation consists of thirteen chapters, beginning with an overview of the legend of the Borgias by Jennifer Mara DeSilva which serves as an introductory chapter for the rest of the book. The next three chapters concern sexuality and honour; the first, by Loek Luiten, covers the relationship between the Borgia and Farnese families, while the next two chapters, by Diane Y.F. Ghirardo and Sergio Costola, examine Lucrezia Borgia’s honour and her time at the Este court respectively. Chapter Five by William Keene Thompson regards the development of and reaction to a roleplaying game based on the 1492 papal conclave, which is followed by a chapter by Roger Gill on the Appartamento Borgia in the Vatican, and then an examination of depictions of Pope Alexander VI as the Devil by Katherine Fellows. The collection then takes a literary turn as Stella Fletcher examines the Borgias in English literature in Chapter Eight, before moving on to Hispanic literature in a chapter on the Ballad of the Death of the Duke of Gandía by Clara Marías. Three chapters then follow examining the life, death, and afterlives of Cesare Borgia: in the first, Lucinda Byatt considers the reputation of the infamous Duke, followed by an examination of his death and burials by Alexander Mizumoto-Gitter, and Jennifer Mara DeSilva adds another chapter to the collection with a look at Cesare Borgia in film. The collection then closes with a chapter written by Amanda Madden on the afterlife of the Borgias in the Assassin’s Creed series of video games.

The chapters of this edited collection were developed from papers given at a conference that took place on 9-12 July 2018 at the University of Winchester on the theme ‘Sex, Sin and Madness: the Borgia Family in Early Modern and Modern Popular Culture’. This accounts for the wide range of topics encountered in this book, a skew towards discussions of popular culture, and a noticeable variance in the length of different chapters. The contributors range from doctoral students to well-established historians of the Italian Renaissance, indicating a continuing interest in the Borgias across generations of historians. These new studies give readers of this volume an insight into the most up-to-date scholarship on the family across a range of fields.

The strength of this approach is that the different chapters of this collection are of use to a range of different historians – there is valuable scholarship here for any researcher of the Borgias, regardless of their speciality, and there is also plenty of material for researchers of other topics who may wish to add discussions of Borgian history to their work. For example, Chapter Seven by Katherine Fellows, on Pope Alexander VI and the Devil, is useful to historians of the Reformation, while scholars working on games and history will find insightful material in Chapters Five and Thirteen, by William Keene Thompson and Amanda Madden respectively: the fifth chapter is of particular interest for teaching, and the thirteenth for the presentation of history in digital games. The chapters are uniformly well referenced, with a good number of images throughout which help to bring the discussion of the Borgia legacy to life. Chapter Nine, written by Clara Marías and concerning the Ballad of the Death of the Duke of Gandía, is especially replete with images of manuscripts and artwork, which add significant value to the presentation of her argument.

This edited collection represents an important chapter in the history of the Borgias. Indeed, it is difficult to imagine anyone writing on the family not citing several of the chapters from this book in their work going forward. The breadth of knowledge and depth of interrogation of sources is impressive, though it is worth noting that this book is perhaps best taken in conjunction with J.N. Hillgarth’s 1996 article on the image of Pope Alexander VI and Cesare Borgia in the sixteenth and seventeenth centuries.[2] This article remains one of the most comprehensive discussions of the Borgia legend within the centuries immediately following their deaths. Covering a large geographic field and a range of different sources, Hillgarth’s article is a treasure trove, and it is evident that several authors in this book used it as a starting point for their own research. While Rumour and Representation builds on many aspects of Hillgarth’s work, it does not eclipse it entirely, and there remains space for further investigations of the Borgia legend, especially in different geographic contexts.

The book treats the legend of the Borgias for what it is: sensationalist rumours born from the myriad motives of those who despised these individuals or sought to profit from their sordid representations. While it may be wide-ranging, well-researched, and representative of new scholarship in this field, it is hard to imagine that Rumour and Representation will be the last word on the Borgia legacy. The dark reputation of these infamous figures will continue to attract interest from both academic and popular sources for years to come, and one can only imagine that in ten years’ time popular culture will have spawned enough material to warrant further chapters in this book.

Download PDF


[1] M. Mallett, The Borgias, The Rise and Fall of a Renaissance Dynasty (London, 1971).

[2] J.N. Hillgarth, ‘The Image of Alexander VI and Cesare Borgia in the Sixteenth and Seventeenth Centuries’ in Journal of the Warburg and Courtauld Institutes, 1996, Vol. 59 (1996): pp. 119-129.

Cow 133 at the Central Veterinary Laboratory: Recognising a Novel Zoonosis


Isobel Newby

Download PDF


Known colloquially as Mad Cow Disease, Bovine Spongiform Encephalopathy (BSE) is one of a group of fatal, progressive neurodegenerative diseases called Transmissible Spongiform Encephalopathies (TSEs). Other TSEs include scrapie, a contagious disease affecting sheep and goats first recorded in Britain in the early eighteenth century, and Creutzfeldt-Jakob Disease (CJD), a rapidly progressing dementia in humans. Farmers were critical in BSE’s early detection: they noticed cattle suffered a loss of control over their limbs, trouble rising, anxiety and eventually paralysis before euthanasia. BSE was found to be a zoonosis: a disease that can be transmitted from animals to humans. The BSE crisis is so called because it led to the destruction of nearly five million cattle, international bans on the import of British beef and the death of 178 people from variant-CJD. Today, it is accepted that BSE emerged sporadically and spread through the reprocessing of infected material in cattle feed, enabled by intensive farming practices following the Second World War.[1]

Under the auspices of the Ministry of Agriculture, Fisheries and Food (MAFF), Central Veterinary Laboratory (CVL) scientists were responsible for infectious livestock disease diagnosis and research. Despite this, when the first case of BSE was identified at the CVL in 1985, government ministers in charge of animal disease control were not alerted to its existence until 1987. The Chief Veterinary Officer (CVO), ostensibly the country’s chief veterinarian, took a minimal role in advising the government on animal health and disease control policy. Focusing on diagnostic and epidemiological research, CVL pathologists did not have an advisory role, though their outputs certainly would have informed the CVO’s recommendations. Existing accounts of BSE have focused on the responsibility of MAFF officials in delaying making it a notifiable disease, out of fear of causing panic amongst the public and the meat trade industry.[2] They do not discuss the influence of CVL scientists on the political response to BSE before 1987.

This paper will demonstrate that, in receipt of the first case of BSE – Cow 133 – the CVL had a pivotal, if indirect, role in dictating how the novel disease was approached by policymakers for a large part of the epidemic. This enables historians of science to rethink their approaches to the BSE episode: the ways in which the CVL understood BSE show that MAFF did not suppress knowledge but followed the CVL’s relaxed lead. Evidence from materials of the Phillips Inquiry – a public inquiry into the government’s handling of BSE, published in 2000 – shows that the actions of senior CVL pathologists were largely motivated by a desire to avoid causing panic among the public and industry. Current literature is interdisciplinary, with few accounts from the perspective of historians of science. It often employs the narrative of a government ‘cover-up’, focusing on the period after 1987, once MAFF had already been alerted to the new disease. The role of the CVL is often neglected in explorations of the British government’s failure to respond to the crisis in a timely manner.[3] Other, more general accounts of the crisis deploy an entertaining telling of the BSE story.[4] This is useful for a sophisticated lay audience but not as an academic analysis of the origins of the disease’s emergence. As a case study for disciplines such as politics, sociology, animal diseases, risk management or science and technology studies, all reflections on the BSE episode require historical analysis.

In late 1984, a cow from Pitsham farm in Sussex presented with the aforementioned behaviours and passed away in February of 1985. This animal was labelled Cow 133 in the veterinarian’s records.[5] By September 1985, seven more cows from Pitsham farm died after suffering similar nervous symptoms. Their peculiarity led to referrals to a Veterinary Investigation Centre for post-mortem, but no definitive diagnosis could be reached. As a result, Cow 133 was passed to the CVL for brain and spinal cord examination.

Assigned to Cow 133 was pathologist Carol Richardson, who found small vacuoles in the brain, causing it to resemble a sponge. These holes were consistent with scrapie-infected sheep brains, but this was the first time she had discovered them in the brain of a cow.[6] Consequently, she sought a second opinion. Her conclusion was dismissed by her colleague Gerald Wells, who believed the holes were caused by a bacterial toxin, despite Richardson having never observed bacterial toxins causing this effect on the brain.[7] Wells’ modified version of the report was submitted back to the local veterinary surgery. Whilst this has been recognised by historians of science, the reasons why two experienced pathologists came to such conflicting interpretations of the same specimen have not been clarified. For example, Kiheung Kim has revealed scrapie’s enduring prevalence in both national and international contexts, but a discussion of BSE research falls outside the scope of his work.[8] Evidence gathered by the Phillips Inquiry indicates that there are two key and connected reasons: organisational changes within the CVL and the scrapie analogy.

Organisational changes

In 1983, Ray Bradley was appointed as the CVL’s new Head of Pathology. Prior to Bradley’s arrival, CVL scientists would specialise in one particular species, including all of its organ systems. He enforced a refocus towards expertise in one organ system, across all species. In her testimony to the Phillips Inquiry, Richardson attributed a general lowering of CVL expertise in the 1980s to this change.[9] She contended that emphasis on the overall knowledge of an individual species improved a pathologist’s understanding of an animal’s condition as described by farmers. Furthermore, Bradley implemented a change of CVL priorities, from research to diagnosis. As a result, six incomplete research projects were terminated and four dissatisfied staff members left their roles at the CVL. In summary, Bradley’s alterations were intended to prioritise diagnosis, but they were not conducive to field practice: they did not reflect the intuition, vital to diagnosis, that is present in the farmer-animal relationship.

The use of analogy

Epidemiological models of BSE were initially based on a piece of logical reasoning known as the scrapie analogy. This stipulated that if BSE was bovine scrapie, and scrapie posed no risk to human health, then BSE was unlikely to pose a risk to human health. Maya Ponte demonstrates how constant reference to scrapie in discussions of BSE legitimated the government’s response, allowed the prediction of the disease’s behaviour and enabled policymakers to presume that BSE would not be a risk to human health.[10] Ponte generalises that everyone with influence over BSE policy relied on established knowledge of scrapie in discussions about the novel BSE. However, I will demonstrate that the Chief Veterinary Officer (CVO) stood as a solitary, but influential, dissenting voice.

Carol Richardson was not inclined to analogise but was trained to make diagnoses based on the morphological structure of specimens without using clinical terminology. As a result, she could only diagnose Cow 133 based on what she observed post-mortem, without assumptions based on knowledge of similar conditions. Wells, on the other hand, saw it as within his professional remit to diagnose based on the history of an animal and its herd. Wells and other CVL pathologists only decided that a diagnosis of spongiform encephalopathy was appropriate once similar cases entered the CVL in 1986. Wells believed the brain pathologies of the affected cows closely resembled those of scrapie in sheep, leading the CVL to conclude that they had identified the first case of scrapie in a cow.[11] This assumption set a precedent for the government’s response.

Though they are both spongiform encephalopathies, BSE and scrapie are two distinct diseases. Scrapie had been endemic in British flocks for over two centuries and had never crossed the species barrier, whilst the novel cattle disease emerged only in 1984. Still, it was more palatable for CVL pathologists to assume, and to reassure the ministry, that they were dealing with a familiar disease than to consider the possibility that it was an emerging one. Their assumption that the disease was bovine scrapie enabled them to predict its epidemiology, transmissibility and public health implications. These presumptive and analogical strategies formed the basis for future animal and public health policy. As MAFF was led to believe BSE was scrapie, the resulting rationale was that there was only a remote chance of it posing a risk to human health via the consumption of infected beef. In reality, though this was not yet understood, BSE-infected meat products were putting the British public at risk of the fatal variant CJD.

Led by Wells, the CVL prepared to publish an authoritative account of what they called ‘bovine scrapie’ and to present the paper to a Joint Meeting of the Medical and Veterinary Research Clubs in May 1987.[12] Knowledge about the novel disease was poised to spread beyond the small circle of state veterinary scientists. In a turning point in this early stage of the BSE episode, CVO William Howard Rees supported publication on the condition that the name of the disease be corrected to BSE. This demonstrates that not all state veterinary scientists shared the belief that BSE was scrapie, with the country’s leading veterinary authority warning against the assumption that the two diseases would behave in the same way.[13] Rees promptly advised MAFF of BSE’s emergence but not on the correct course of action, out of fear that the CVL’s lack of knowledge about BSE would be exposed and cause public speculation and panic. The most accurate method of investigating risk to humans would have been transmission experiments with primates. However, primates were expensive to acquire, and studies would take several years to yield results.[14] By the late 1980s, funding and staffing shortages meant the CVL struggled with its existing caseload. Still, any potential risk to humans required immediate mitigation. Accordingly, the CVL could only rely on what was already known about scrapie. Despite their efforts to contain knowledge of BSE, two years after Cow 133 was examined by Carol Richardson, BSE began to receive attention from the wider professional veterinary community.[15] As the epidemic began, the CVL were forced to reverse Bradley’s prioritisation of diagnosis in order to address essential research into the novel disease.


Before the establishment of independent consultative committees in 1989, MAFF relied solely upon the expertise of CVL scientists. However, a lack of understanding of the disease meant the CVL were unable to carry out their advisory function. Delays and inaction ensued, setting a precedent for the next decade of animal and public health policymaking. The justification of the scrapie analogy was that if BSE was actually scrapie, and scrapie posed no risk to humans, then BSE would not be a risk to humans.[16] Unfortunately, the British government dismissed any risk to human health, with reiterated assurances that ‘beef is safe to eat’. The future course of BSE policy was based on this faulty logic, until the first case of vCJD was confirmed in 1996 after the death of a 19-year-old.

A deeper exploration of correspondence between CVL scientists and MAFF officials has shown that the Ministry were not lone decision-makers at the start of the BSE crisis. The actions of the CVL were critical determinants of the pace of government response and risk assessment. This demonstrates for the first time the agency and influence of state veterinary scientists in how the emergence of BSE was handled in its formative stages. The CVL had significant control over the initial response to BSE through its construction of the disease’s identity, its dissemination of research and its public relations. Structural organisation and individual personalities within the CVL had a lasting influence on animal and human health policy until the start of the twenty-first century.



Primary Sources

Lord Phillips of Worth Matravers, Bridgeman, J., and Ferguson-Smith, M., The BSE Inquiry: The Report: The Inquiry Into BSE and Variant CJD in the United Kingdom, published October 2000. Archived on 25 May 2006. Retrieved from the UK Government Web Archives:

Holt, T., and Phillips, J., ‘Bovine Spongiform Encephalopathy’, British Medical Journal (Clin Res Ed), 296 (1988), pp. 1581-1582.

Wells, G., Scott, A. C., and others, ‘A Novel Progressive Spongiform Encephalopathy in Cattle’, Veterinary Record, 121 (1987), pp. 419-420.


Secondary Sources

Bartlett, D., ‘Mad Cows and Democratic Governance: BSE and the Construction of a “Free Market” in the UK’, Crime, Law and Social Change, 30 (1999), pp. 237-257.

Beck, M., and others, BSE Crisis and Food Safety Regulation: A Comparison of the UK and Germany, Working Paper (University of York: Department of Management Studies, 2007).

Hardy, A., Salmonella Infections, Networks of Knowledge, and Public Health in Britain, 1880-1975 (Oxford, 2014).

Hueston, W., ‘BSE and Variant CJD: Emerging Science, Public Pressure and the Vagaries of Policymaking’, Preventive Veterinary Medicine, 109 (2013), pp. 179-184.

Jasanoff, S., ‘Civilization and Madness: The Great BSE Scare of 1996’, Public Understanding of Science, 6 (1997), pp. 221–32.

Kim, K., The Social Construction of Disease: From Scrapie to Prion (Oxon, 2006).

Ponte, M., Managing Risk and Uncertainty During a Novel Epidemic (San Francisco, 2005).

Schwartz, M., How the Cows Turned Mad (Berkeley, 2003).

Woods, A., ‘A Historical Synopsis of Farm Animal Disease and Public Policy in Twentieth Century Britain’, Philosophical Transactions: Biological Sciences, 366 (2011), pp. 1943-54.



[1] Abigail Woods explores the development of post-war agricultural policy in: A. Woods, ‘A Historical Synopsis of Farm Animal Disease and Public Policy in Twentieth Century Britain’, Philosophical Transactions: Biological Sciences, 366 (2011), pp. 1943-1954.

[2] A notifiable disease is an animal disease, either endemic or exotic, which must be reported to MAFF even if an animal is only suspected of being affected. Failure to report these was and remains a criminal offence.

[3] See: David M. C. Bartlett, ‘Mad Cows and Democratic Governance: BSE and the Construction of a “Free Market” in the UK’, Crime, Law and Social Change, 30 (1999), pp. 237-257; Matthias Beck, and others, BSE Crisis and Food Safety Regulation: A Comparison of the UK and Germany, Working Paper (University of York: Department of Management Studies, 2007); William D. Hueston, ‘BSE and Variant CJD: Emerging Science, Public Pressure and the Vagaries of Policymaking’, Preventive Veterinary Medicine, 109 (2013), pp. 179-184; Sheila Jasanoff, ‘Civilization and Madness: The Great BSE Scare of 1996’, Public Understanding of Science, 6 (1997), pp. 221–32.

[4] Maxime Schwartz, How the Cows Turned Mad (Berkeley, 2003).

[5] M. L. Teale & Partners, ‘Invoice’, 84\12.28\1.1, in Lord Phillips of Worth Matravers, and others, The BSE Inquiry Report: The Report: The Inquiry into BSE and Variant CJD in the United Kingdom (hereinafter The BSE Inquiry Report), published October 2000, retrieved from the UK Government Web Archives: <>, accessed 04.09.2020.

[6] C. Richardson, ‘Form’, 85\09.10\1.1, in Lord Phillips of Worth Matravers, The BSE Inquiry Report, <>, accessed 04.09.2020.

[7] C. Richardson, ‘Report’, 85\09.19\2.1, in Lord Phillips of Worth Matravers, The BSE Inquiry Report, <>, accessed 04.09.2020.

[8] K. Kim, The Social Construction of Disease: From Scrapie to Prion (Oxon, 2006).

[9] C. Richardson, ‘Evidence given by C. Richardson’, Hearing Day 28, in Lord Phillips of Worth Matravers, The BSE Inquiry Report, <>, accessed 04.09.2020.

[10] M. Ponte, Managing Risk and Uncertainty During a Novel Epidemic (San Francisco, 2005).

[11] C. Richardson, ‘Evidence given by C. Richardson’.

[12] G. Wells, A. C. Scott, and others, ‘A Novel Progressive Spongiform Encephalopathy in Cattle’, Veterinary Record, 121 (1987), pp. 419-420.

[13] W. Rees, ‘Witness Statement no. 126’, in Lord Phillips of Worth Matravers, The BSE Inquiry Report, <>, accessed 05.10.2020.

[14] W. Rees, ‘Minute’, 87\07.29\3.1-3.6, in Lord Phillips of Worth Matravers, The BSE Inquiry Report, <>, accessed 09.11.2020.

[15] T. Holt, and J. Phillips, ‘Bovine Spongiform Encephalopathy’, British Medical Journal (Clin Res Ed), 296 (1988), pp. 1581-1582.

[16] Ponte, Managing Risk and Uncertainty During a Novel Epidemic, p. 39.

Book Review: Laura Ugolini, Fathers and Sons in the English Middle Class, c. 1870–1920 (New York, 2021)


Lucy Morgan

Abstract: In this article, Lucy Morgan reviews Fathers and Sons in the English Middle Class, c. 1870–1920 by Laura Ugolini, which was published in hard copy and as an e-book in April 2021. Over the course of the book, Ugolini navigates how the relationships between fathers and sons in the English middle class were constructed in both childhood and adulthood. Ugolini employs late-Victorian and early-Edwardian fiction and non-fiction texts, as well as a sample of oral history interviews as her primary source material in order to create an “in their own words” historiographical study of ideal versus actual father-son relationships in this period.

Biography: Lucy Morgan is a second-year PhD student at the University of Sheffield. Her thesis deals with the social lives and cultural depictions of single men in early modern England. She is more widely interested in historical conceptions of gender and fatherly authority, and how notions of acceptable behaviour were enforced within different social groups. You can follow her on Twitter @Lucy_R_Morgan.

Keywords: family, gender, masculinity, Victorian, Edwardian

When considering the stereotypical Victorian father, one of two images emerges: either the stern paterfamilias or the doting papa. These stereotypes existed alongside one another in Victorian popular culture, perhaps most notably in the prolific work of Charles Dickens, where father-figure characters like Dombey of Dombey and Son and the Cheeryble brothers of Nicholas Nickleby represent both extremes of fatherly cruelty and affection. In Fathers and Sons in the English Middle Class, c. 1870–1920, Laura Ugolini goes a step further to argue that not only did these contrasting images of fathers exist alongside each other in wider society, but that individual men could also embody both of these apparently juxtaposed characteristics simultaneously. Consequently, a more nuanced interpretation of fatherhood is needed. Central to this re-evaluation of the Victorian father is the construction of the father-son relationship in this period, which Ugolini describes as ‘inextricably linked to wider household and family dynamics’ as well as to ‘gender specific norms and practices’.[1] To recapture a more historicised understanding of fatherhood, three distinct groups of sources are used: seventeen novels and plays written during the Victorian and Edwardian periods, sixty-seven autobiographies by middle-class men published between 1879 and 1994, and twenty-six oral history interviews recorded by Thea Vigne for a project at the University of Essex titled “Family Life and Work Experience before 1918: Middle and Upper Class Families in the Early 20th Century, 1870–1977”. These sources are supplemented by cases heard by the Middlesex Appeals Tribunal between 1916 and 1918, and by local and national newspaper reports of domestic violence published from 1870 to 1919.

The influence of John Tosh’s A Man’s Place is clear throughout the book in the way that Ugolini centres gender and class as reciprocal influences which moulded the behaviour of fathers and sons.[2] In A Man’s Place, first published in 1999, Tosh argued that middle-class male domesticity increased in importance throughout the Victorian period, but peaked and declined after 1870. By incorporating the perspective of both fathers and sons, Ugolini challenges and builds on Tosh’s ideas of the middle-class Victorian home. Viviana Zelizer’s concept of the ‘emotionally priceless’ child can also be seen in the book’s depiction of middle-class understandings of parental authority and childhood obedience.[3] Zelizer suggests that children became increasingly economically “worthless” throughout the Victorian period, with their potential value as wage-earners being replaced by the “priceless” value of the sentimental comfort they provided to their parents, a notion which Ugolini’s middle-class sons reaffirm. The crux of the book is Ugolini’s argument that the social and cultural construction of the “worst” type of father in the period was not violent or abusive, but rather ‘ineffectual and unproductive’ and unengaged with his children, expanding on the earlier work of Joanne Begiato.[4] Ugolini finds this notion reinforced in fiction and non-fiction texts, and further re-affirmed in the oral history interviews, where “good” fathers are seen as interested in their sons’ lives, even if they did not regularly interact with them in person. By drawing inspiration widely from historical and sociological studies of masculinity, family, and childhood, Ugolini provides the field with a new perspective by uniting parent-first and child-first approaches, allowing both fathers and sons to define what fatherhood meant to them.

Whilst the book is divided chronologically into two halves, dealing with childhood and adulthood relationships between fathers and sons respectively, the chapters are arranged thematically, with titles such as “Intimacy and Distance” and “Responsibility and Authority”. This thematic approach is useful in that it allows for the examination of particularly niche topics, meaning that gift-giving, emigration, and corporal punishment are all covered in the same book. The use of oral histories throughout the work, rather than in one distinct chapter, also serves to highlight the homogeneity of some middle-class experiences (such as attending public school), whilst also allowing for an examination of more unique or unusual father-son relationships (such as those within single-parent households). However, this rigid adherence to thematic chapters results in a lack of clarity at certain points, such as when Ugolini argues that even adult sons were bound to fatherly authority, which could be beneficial ‘when sons were unable, because of illness, absence or other causes, to speak or act on their own behalf’.[5] Yet the three examples Ugolini cites as proof of this statement all relate to attempts by fathers to prevent the conscription of their sons during the First World War. This may reflect Ugolini’s own specialism, as much of her other published work deftly examines how various aspects of masculinity were challenged or affirmed during the First World War. However, its invocation here makes it difficult to gauge whether this circumstance can be considered wholly representative of the 1870 to 1920 period, as Ugolini claims. The thematic approach is therefore useful in constructing the roots of the ideal-versus-actual middle-class father-son experience, but, as this example shows, it also reduces the importance of chronology and denies the possibility of ideas changing over time, either gradually or at certain historical “crisis points”.

At other times, Ugolini’s mention of certain themes brings attention to matters that are absent from the book. In the chapter “Conflict and Reconciliation”, Ugolini found that there were eighteen cases of middle-class patricide and four cases of filicide in England between 1870 and 1918.[6] In the chapter, however, she only discusses patricide and not filicide, despite the possibilities for comparison. Ugolini briefly mentions that sons who committed patricide were often described as ‘waifish and stray’ by newspapers, language which clearly emphasises the hierarchical nature of the parent-child bond.[7] No evidence is provided to illustrate what descriptive language was most often invoked in filicide cases, which raises questions about whether the betrayal of paternal authority or of filial obedience was seen as more serious in contemporary society. Certainly, middle-class filicide was less common, but corporal punishment of children was still socially acceptable throughout the period. A deeper examination of the topic could test the limits of acceptable violent conduct within the middle classes, and would provide an interesting counterpoint to discussions of the changing legality of domestic corporal punishment across the United Kingdom today.

Other sections could have benefited from Ugolini providing more in-depth explanations of the topics that she writes about. For example, in the section about first jobs and the expectation that sons would follow their fathers into business, Ugolini presents the testimony of a young man who was estranged from his father. He was given a job by an uncle whom he described as ‘well-meaning . . . [but] no substitute for an active father’.[8] This is presented as a straightforward statement with no deeper meaning, but it could have been an excellent opportunity for Ugolini to dig deeper into the cultural connotations of fatherhood in this period. What was the difference in the support offered by an “active father” and another paternal figure, such as an uncle, grandfather, or godfather? Were non-father figures actively construed both socially and culturally as “lesser” than fathers? This is a tricky debate, which broadly intersects with histories of emotions, and would benefit from being followed up on in further research.

Nevertheless, in Fathers and Sons Ugolini does provide a useful re-interpretation of many of the assumptions held by those who are new to or already familiar with the topic. Ugolini’s emphasis on physical versus emotional closeness challenges depictions of the physically and emotionally absent middle-class father by pointing out that practices like sending sons to boarding school were rooted in expectations of conformity and a desire for betterment, rather than a dislike for their children.[9] Moreover, the strengths of this book are fully demonstrated in the chapters “Consumption” and “Succession and Inheritance”, where the focus turns to material culture practices. Ugolini introduces the concept of the idealised ‘consumer world’, which was integral to the affirmation of Victorian middle-class identity, then moves more specifically into a discussion of how items like pocket watches, pipes, and alcohol came to be associated with adulthood by the sons whose admiration for their fathers led them to ultimately desire and emulate their consumer practices.[10] This, combined with emotional and economic approaches, also serves as a lens to examine inheritance in new ways, which Ugolini presents as an ‘uncomfortable ambiguity’ for many young men in the period.[11] They were reliant on the potential of their future inheritances to sustain their middle-class status, but had to reckon with the notion that their fathers must die for that to occur. The resultant ideological clash between the abstract notion of middle-class economic values and the highly personal relationships between individual fathers and sons is thoroughly dissected by Ugolini, providing a new perspective on the relationship between money, independence, and the transition from youth to adulthood in the Victorian period.

Ultimately, this is a work of breadth rather than depth. Many aspects of middle-class father-son life are explored, spanning the mundane and exceptional events of the whole life cycle, from family dinners to family holidays, covering instances of anger, violence and disinheritance, as well as instances of affection, closeness and economic provision for both young children and aged parents. This paints a more complete picture of the everyday experiences of the middle classes than has been depicted in other scholarly texts. The sources used are rich, by turns delightful and emotionally devastating, yet by opting for an “in their own words” approach, Ugolini’s own voice is unfortunately sometimes absent.

Download PDF


Begiato (Bailey), J., ‘“A Very Sensible Man”: Imagining Fatherhood in England c.1750–1830’, History, 95/319 (2010), pp. 267-292.

Tosh, J., A Man’s Place: Masculinity and the Middle-Class Home in Victorian England (New Haven, 1999).

Ugolini, L., Fathers and Sons in the English Middle Class, c. 1870-1920 (New York, 2021).

Zelizer, V., Pricing the Priceless Child: The Changing Social Value of Children (Princeton, 1985).


[1] L. Ugolini, Fathers and Sons in the English Middle Class, c. 1870–1920 (New York, 2021), p. 2.

[2] See J. Tosh, A Man’s Place: Masculinity and the Middle-Class Home in Victorian England (New Haven, 1999).

[3] See V. Zelizer, Pricing the Priceless Child: The Changing Social Value of Children (Princeton, 1985); Ugolini, Fathers and Sons, pp. 66, 78.

[4] J. Begiato, ‘“A Very Sensible Man”: Imagining Fatherhood in England c.1750–1830,’ History, 95/319 (2010), p. 278; Ugolini, Fathers and Sons, p. 67.

[5] Ugolini, Fathers and Sons, p. 194.

[6] Ugolini, Fathers and Sons, p. 171.

[7] Ugolini, Fathers and Sons, pp. 167, 168.

[8] Ugolini, Fathers and Sons, p. 70.

[9] Ugolini, Fathers and Sons, pp. 42, 50.

[10] Ugolini, Fathers and Sons, p. 92.

[11] Ugolini, Fathers and Sons, p. 137.

Extended Critical Book Review: E. H. Cline, Three Stones Make a Wall: The Story of Archaeology (New Jersey, 2017).


Charlotte Coull

Biography: Charlotte has recently completed her PhD at the University of Manchester. Her research interests include the history of archaeology, material history and phenomenology in the nineteenth century.

As an archaeologist with decades of experience in the field, much of it focused on Biblical archaeology, American author and archaeologist Eric H. Cline tells a very personal story of his discipline. Today in the United Kingdom, in the wake of what seems to be a constant barrage of bad news surrounding the British government’s attitude towards archaeology and heritage (most recently the closure of Sheffield University’s archaeology department), it seems we need the personal now more than ever. Voices that are at once accessible and knowing remind us of the warmth and excitement behind an academic discipline at a time when ‘expert’ opinions are often the subject of suspicion. Cline’s 2017 history of archaeology gives the reader this warmth and knowledge in a geographically wide-ranging text filled with vignettes and wonderful facts to pass on to anyone who will listen. Cline is no stranger to authoring popular works, having written accessible titles on Biblical history and ancient Egypt alongside pieces for a more academic audience. The book is divided into six themed parts, each in turn composed of accounts of archaeological sites, some famous to a lay reader, such as Pompeii and Troy, and some less so, such as Megiddo in Israel. Interspersed are four cameos on archaeological methods that balance technical details with accounts of this technology in action, on subjects ranging from Ötzi the Iceman to the Standing Stones at Durrington Walls: ‘How Do You Know Where To Dig’, ‘How Do You Know How to Dig’, ‘How Old is This and Why is it Preserved’, and ‘Do You Get to Keep What You Find’.

The preface to this book sets out its purpose admirably. The history of archaeology is not short of documentation, but Cline still sees space for what he calls ‘a new introductory volume, meant for people of all ages’ (p. xvii). Such a volume will be unavoidably “behind the times” should any ground-breaking discoveries be made after its publication, a fact that Cline alludes to at the end of several chapters when he notes the continuation of archaeological work at sites such as Nimrud and Ur in Iraq, but with this in mind such books act as waypoints in the broader story of archaeology. They allow us to take stock of where we are at the moment they were written and to note what issues were pressing at the time. These issues are not always transient either: Cline aims to address the continuing invocation of extra-terrestrial, supernatural or divine forces to account for seemingly unbelievable acts of human innovation (the pyramids and the Sphinx being examples), noting that this obscures ‘real scientific progress’ (p. xvi). This is still a salient point in today’s climate of misinformation. However, Cline’s main objective is to inspire his readers to remember their role in protecting our archaeological heritage from the ongoing, and increasing, looting and destruction seen across the world (pp. xvi-xvii). In a sense, then, this book is an exercise in community building, reminding readers of their shared history and uniting them around fantastic discoveries.

Cline begins this with Part 1, ‘Early Archaeology and Archaeologists’. This is an impressive geographical journey, starting in Pompeii, Italy, and moving through Troy, Egypt and Mesopotamia before crossing the Atlantic to the Maya in the Central American jungle. Pompeii (and Herculaneum) gives Cline a chance to discuss nearly three hundred years of excavation at a single site, seeing individuals from Emmanuel Maurice de Lorraine to Giuseppe Fiorelli move through various excavation techniques, including the ‘looting’ of de Lorraine and the far more sophisticated lost wax method of Fiorelli. Whilst this presentation could be considered overly linear, and attributes a great deal to individuals, it shows that present archaeological practices did not simply appear in a vacuum.[1] In ‘Digging up Troy’ Cline covers the work of Heinrich Schliemann and the continued controversy surrounding the site which may or may not be Homer’s famous city. ‘From Egypt to Eternity’ includes what will be the usual suspects for many (Lepsius, Mariette and Champollion), in addition to the pub-friendly fact that acid was sometimes used to dissolve the brain during mummification, resulting in a ‘gray gooey mass’ running out of the nasal cavity (p. 51). Egyptology is also brought right up to date with the mention of muon radiography and its potential in investigating the Great Pyramid. Chapters four and five, ‘Mysteries in Mesopotamia’ and ‘Exploring the Jungles of Central America’ respectively, play with chronology, starting with the more recent and working backwards. For Mesopotamia the reader is told first about Woolley and Mallowan of the early twentieth century, before being shepherded back to Austen Henry Layard, Henry Rawlinson, and Paul Botta. In the Central American jungle we first meet with LIDAR surveys before covering the work of John Lloyd Stephens and Frederick Catherwood in the nineteenth century. Unfortunately, there is a feeling that the discussion of the Maya civilisation is not as deep as it could, or indeed should, be. This is a trend that reappears in later chapters.

Part 2 covers the development of farming with two chapters titled ‘Discovering our Earliest Ancestors’ and ‘First Farmers in the Fertile Crescent’. Perhaps understandably, ‘Earliest Ancestors’ is not a comprehensive discussion of the Victorian intellectual chaos surrounding the origins of man, such as can be found in A. B. Van Riper’s Men Among the Mammoths.[2] Instead it covers the work of Lee Berger in South Africa, pointing out the importance of caves in prehistoric archaeology as well as using the Chauvet, Lascaux and Altamira caves to discuss the difficulties in keeping such fragile heritage sites open to the public (pp. 112-14). Cline possibly considered these subjects a more tangible entry point to the topic. Chapter 7 unpacks Göbekli Tepe as well as Jericho and Çatalhöyük. Cline’s take on Göbekli Tepe is especially no-nonsense, noting that the site has attracted outlandish interpretations ‘like flies to honey’ but firmly stating that it is not ‘the Garden of Eden . . . an ancient site related to Watchers or ancient Nephilim from the Bible’. Instead, it is ‘plain and simple, one of the most interesting Neolithic sites currently being investigated’ (p. 118). This is pleasingly to the point, given the sheer number of bizarre theories surrounding stone age material.[3]

Part 3, ‘Excavating the Bronze Age Aegean’, moves to Mycenae and includes Arthur Evans’s restoration of the Dolphin Fresco at Knossos, where he made the mistake of including five dolphins when he only found evidence of two. Cline highlights this as an interesting example of Occam’s razor, in which the simplest solution is often correct (p. 141). Chapter 9 boldly takes on Atlantis in ten pages. Of particular focus are Santorini and Akrotiri, and Cline voices his own opinion that there is ‘a kernel of truth lying at the bottom of many of the Greek myths and legends’, but ultimately this chapter does not discuss many of the more outlandish claims about the city, focusing mainly on the prevalence of earthquakes and their part in the myth (p. 154). In the context of the prologue’s reference to pseudo-archaeology this is disappointing, and an engaged reader might like to turn to Paul Jordan’s The Atlantis Syndrome for in-depth discussion.[4] ‘Enchantment Under the Sea’ discusses George Bass, Cemal Pulak, and their work on the Uluburun shipwreck in the later twentieth century (p. 158). This seems an overly brief cameo for underwater archaeology, itself a wonderfully rich and interesting area of study, as evidenced by works such as Robert Marx’s The History of Underwater Exploration.[5]

Part 4 ventures into what will perhaps be more familiar territory for many readers: the classical world. The first chapter, ‘From Discus Throwing to Democracy’, moves through three sites, Olympia, Delphi, and Athens, mixing history (over one hundred years of archaeological work at Olympia) with Cline’s personal experience (his own time excavating at the Agora in Athens). Cline’s enthusiasm shines through here as he explains the ‘amazing feeling’ of standing in Socrates’s jail cell and Euripides’s theatre (p. 187). Chapter 12, despite its name being directly lifted from Monty Python (‘What Have the Romans Ever Done For Us’), takes a stab at a more serious topic, briefly bringing up Mussolini’s Italian nationalism and the considerable amount of archaeological work undertaken by the fascist regime, overseen by Corrado Ricci (pp. 191-192). Cline’s final paragraph in this chapter notes that the combination of archaeology and nationalism has a ‘dark side’, such as when ‘the past has been invoked . . . to support the superiority of one modern group over others’. His assurance that there is a ‘concerted effort’ to avoid such bias in archaeology today is worth making, but one wonders if this oversimplifies our contemporary interactions with the archaeological past (p. 203).

The five chapters of Part 5, ‘Discoveries in the Holy Land and Beyond’, are some of the most personal for Cline. Chapter 13, ‘Excavating Armageddon’, discusses the site of Megiddo, where he spent ten seasons. It hints at transformations in archaeological method, bringing in stratigraphy and pottery seriation, and the evidence, or lack of evidence, that ‘Solomon’s Stables’ are a structure from a Biblical story (the spoiler that they are not will be of no surprise) (pp. 225-27). This touches on a far broader issue which is not at all unpacked: the politics and culture behind the beginnings of Biblical archaeology, its continuation, and its potential to define a large swathe of land and society using one book. Calling back again to the prologue, it seems a wasted opportunity not to discuss this phenomenon in the context of how archaeology can be both misused and misleading.[6] Chapters 14 and 15 cover the Dead Sea Scrolls and Masada respectively. The description of how the Copper Scroll could not be unrolled but instead had to be ‘cut up’ is especially evocative. Israeli archaeologist Yigael Yadin’s work at Masada is highlighted in the context of the controversy surrounding the combination of archaeological evidence and Israeli nationalism, but this is not explored in any depth and the reader is left to extrapolate what the causes and effects of this might be (pp. 249-51). Chapter 16 covers Palmyra, Petra, and Ebla and ends with the somewhat haunting reminder that these sites are not eternal and can quickly be lost to conflict or looting (p. 268). As always, it seems Cline stops short of fully engaging with controversial notions; a great many antiquities are lost to Western collectors, both individuals and groups, reinforcing a power imbalance that is particularly pertinent considering that many of the countries Cline refers to have a history of being under colonial control.[7]

The final part of this epic odyssey seems somewhat anticlimactic considering it covers many archaeological sites that are lesser known than their classical counterparts and thus also subject to a large amount of misinterpretation. Each chapter is more of a whirlwind tour than the last, and it feels incredibly rushed in contrast with Part 5. Sites and civilisations discussed in Chapter 17 include the Nazca lines, the Moche, and Machu Picchu. Chapter 18 moves on to Teotihuacán, the Olmec sites of San Lorenzo, Tres Zapotes, and La Venta, and the Aztec Templo Mayor with its famous rack of human skulls carved in stone. Chapter 19 is the most whirlwind-like of them all, looking at historical archaeology and veering from the discovery of a Confederate submarine called the Hunley in 1995 off the coast of South Carolina, to a rather gruesome discovery of seventeenth-century cannibalised remains in Jamestown, to Chaco Canyon in New Mexico and the Chacoan culture. Finally, this chapter moves to Cahokia Mounds, built by the Mississippian culture in what is now Illinois, for a mere three paragraphs on ‘the largest pre-Columbian archaeological site in the United States’ (p. 324). It is perhaps a little out of line with Cline’s discussion of other prehistoric remains throughout the text that in this instance, for this site, he states that with written records ‘we would undoubtably be even more impressed by the Native American inhabitants responsible for these remains’ (p. 325). When so much of this book has been occupied with discussing material evidence and how archaeological sites are interpreted without written sources, to state this in reference to an entire culture and area of archaeology Cline barely covers is incredibly dismissive. This dismissal is especially problematic considering the challenges indigenous archaeology and archaeologists face in legitimising their work.[8]

It is in many ways hard to fault a book for a popular audience that tells compelling stories with fascinating details and carefully sets out plenty of factual information behind a discipline whose material can lend itself to some truly bizarre interpretations. However, there are things that this book does not do, or does not do enough of, that may discourage someone looking for a more critical account of archaeology’s history. While many elements of Cline’s work are welcome in the present circumstances, other angles would have enriched it. There is material missing and its absence is problematic: certain archaeological areas are given a precedence that reinforces nineteenth-century narratives of Western and Mediterranean historical supremacy, whereas other areas, such as South and North America, labelled as ‘New World Archaeology’, are left largely subsumed in tales from the Holy Land, Greece, and Rome.

Additionally, the ongoing discussion and awareness of the place of archaeological objects in the problematic structures of imperial subjugation means it is a little uncomfortable that the chapter ‘Do You Get To Keep What You Find’ does not fully address the cultural and historical context of colonial looting. On the other hand, this chapter does draw attention to some aspects of the present illegal trade in antiquities, including the very real ethical dilemma faced by those who do not wish to encourage looting but who are often compelled to buy important or rare artefacts from dubious sources in order to avoid losing them entirely (pp. 328-29). This interlude also mentions the British Museum, the Louvre, and the Metropolitan Museum of Art (New York) as being embroiled in debates over returning ‘items that they obtained in the period of European colonialism’, specifically the Elgin Marbles, the Bust of Nefertiti, and the Rosetta Stone (p. 327). It is of course welcome that these big names are included in the discussion, but here the discussion is rather short. For a longer evaluation the reader will have to turn elsewhere, to books such as Dan Hicks’s The Brutish Museums (2020), but even with this option Cline could have woven the controversy a little deeper into his narrative. It would also have been a simple task to include at least some of the names of those who discovered the Terracotta Army in 1974: the archaeologist who named the site and realised its significance, Zhao Kangmin, and maybe the Chinese farmers, including Yang Zhifa and Wang Puzhi, who first found fragments of the terracotta warriors (pp. 275-77). As plenty of other archaeologists are named, this absence stands out, especially alongside the still larger absences of many important East Asian archaeological finds and historical moments.

Although the prologue on Tutankhamun is an obvious starting point to prompt engagement with the rest of the book, a slightly deeper examination of the cultural context behind the widespread interest in the tomb would have been welcome, rather than repeating the familiar narrative. Alternatively, the boost that Egyptian nationalism, and the country’s process of reclaiming its pharaonic past, received from the tomb could also have been an interesting story to give a public audience a more unusual take.[9] As it stands this prologue is representative of the book as a whole; it is the story of archaeology from a mostly Western perspective and misses many of the complicated power dynamics at play.

However, it is difficult to know whether every single book squarely aimed at a popular audience has a responsibility to delve comprehensively into matters of colonialism, nationalism, and power. Some readers may already be familiar with the issues; for others, a few brief mentions will inspire them to seek out further material. Although it is not a book that will be actively detrimental to the cause of archaeology, there are major pieces of the puzzle missing and this should be acknowledged.

Ultimately, I side with Brian Fagan when I say that this is a hard book to review, but not for his reasons. Fagan notes that he does not see whom Cline has aimed this book at, but I see its audience as broad: it covers material that a Western public is quite familiar with (such as the opening salvo on Tutankhamun), answers some of the basic questions on methods and theory (in the four interludes that for brevity I have not covered), and includes many headline archaeological discoveries.[10] That there are problematic elements is certain, and given that Cline sets himself up as something of a polymath it would have been nice to see him tackle these with more gusto. This text does not challenge the reader to think about inequalities within archaeological practice and theory, such as the difficulties of decolonising archaeological practice.[11] It is also extremely obvious where Cline’s own interests are centred; the final part of the book on the ‘New World’ suffers for this in a way that could sadly reinforce historical notions of the importance of classical and Mediterranean archaeological histories over and above alternative narratives.

Despite these not inconsiderable caveats, I will end on a positive note: this is a warm and welcome overview of a vast and often impenetrable disciplinary history that gives the lay reader ample opportunity to take their own next steps into deeper, richer literature.



Abadía, O. M., ‘The History of Archaeology as Seen Through the Externalism-Internalism Debate: Historical Development and Current Challenges’, Bulletin of the History of Archaeology, 19.2 (2009).

Barnard, H., ‘In Search of Biblical Lands: From Jerusalem to Jordan in Nineteenth-Century Photography’, Near Eastern Archaeology, 74.2 (2011), pp.120–23.

Brodie, N., ‘Restorative Justice? Questions Arising out of the Hobby Lobby Return of Cuneiform Tablets to Iraq’, Revista Memória Em Rede, 12 (2020), pp.87–109.

Bruchac, M., S. Hart, and H. M. Wobst, eds., Indigenous Archaeologies: A Reader on Decolonization (New York: Routledge, 2010).

Card, J. J., Spooky Archaeology: Myth and the Science of the Past (University of New Mexico Press, 2018).

Cline, E. H., Three Stones Make a Wall: The Story of Archaeology (Princeton University Press, 2017).

Davis, T. W., Shifting Sands: The Rise and Fall of Biblical Archaeology (Oxford University Press, USA, 2004).

Fagan, B., ‘Review: Three Stones Make a Wall. The Story of Archaeology, by Cline, E. H.’, Journal of Eastern Mediterranean Archaeology and Heritage Studies, 5 (2017), pp.454–56.

Franken, H. J., ‘The Problem of Identification in Biblical Archaeology’, Palestine Exploration Quarterly, 108.1 (1976), pp.3–11.

Gange, D., ‘Religion and Science in Late Nineteenth-Century British Egyptology’, The Historical Journal, 49 (2006), pp.1083–1103.

Hicks, D., The Brutish Museums: The Benin Bronzes, Colonial Violence and Cultural Restitution (Pluto Press, 2020).

Jordan, P., The Atlantis Syndrome (Sutton, 2001).

Marx, R. F., The History of Underwater Exploration (New York, 1990).

Reid, D. M., Whose Pharaohs?: Archaeology, Museums, and Egyptian National Identity from Napoleon to World War I (London: University of California Press, 2003).

Van Riper, A. B., Men among the Mammoths (University of Chicago Press, 1993).

Van Dyke, R. M., ‘Indigenous Archaeology in a Settler-Colonist State: A View from the North American Southwest’, Norwegian Archaeological Review, 53.1 (2020), pp.41–58.


[1] O. M. Abadía, ‘The History of Archaeology as Seen Through the Externalism-Internalism Debate: Historical Development and Current Challenges’, Bulletin of the History of Archaeology, 19.2 (2009), p.13.

[2] A. B. Van Riper, Men among the Mammoths (University of Chicago Press, 1993).

[3] J. J. Card, Spooky Archaeology: Myth and the Science of the Past (University of New Mexico Press, 2018).

[4] P. Jordan, The Atlantis Syndrome (Sutton, 2001).

[5] R. F. Marx, The History of Underwater Exploration (Courier Corporation, 1990).

[6] H. J. Franken, ‘The Problem of Identification in Biblical Archaeology’, Palestine Exploration Quarterly, 108.1 (1976), pp.3–11; D. Gange, ‘Religion and Science in Late Nineteenth-Century British Egyptology’, The Historical Journal, 49 (2006), pp.1083–1103; T. W. Davis, Shifting Sands: The Rise and Fall of Biblical Archaeology (Oxford University Press, 2004); H. Barnard, ‘In Search of Biblical Lands: From Jerusalem to Jordan in Nineteenth-Century Photography’, Near Eastern Archaeology, 74.2 (2011), pp.120–23.

[7] N. Brodie, ‘Restorative Justice? Questions Arising out of the Hobby Lobby Return of Cuneiform Tablets to Iraq’, Revista Memória Em Rede, 12 (2020), pp.87–109.

[8] R. M. Van Dyke, ‘Indigenous Archaeology in a Settler-Colonist State: A View from the North American Southwest’, Norwegian Archaeological Review, 53.1 (2020), pp.41–58.

[9] D. M. Reid, Whose Pharaohs?: Archaeology, Museums, and Egyptian National Identity from Napoleon to World War I (London: University of California Press, 2003), pp.16–18.

[10] B. Fagan, ‘Review: Three Stones Make a Wall. The Story of Archaeology by E. H. Cline’, Journal of Eastern Mediterranean Archaeology and Heritage Studies, 5 (2017), pp.454–56.

[11] Indigenous Archaeologies: A Reader on Decolonization, ed. by M. Bruchac, S. Hart, and H. M. Wobst (New York: Routledge, 2010).

‘The introduction into English public life of the educated workman’: The rise of Labour in the Edwardian Mass Press


This paper explores how the emergent Labour Party was represented by two of Britain’s leading popular daily newspapers: the Daily Mail and the Daily Express. Focusing on the coverage afforded the party during its first general elections — 1900 and 1906 — it will be argued that the response of the Conservative popular press to the rise of Labour was complex. While often hostile, these newspapers also showed considerable interest in the party’s rise and were broadly positive towards both individual Labour MPs and the movement’s desire to better represent working–class interests. Adding to past work on pre–Great War political culture, this paper interrogates the complexity of Labour’s emergent place within a mass political culture that, while broadly hostile to left–wing politics, primarily catered to an imagined ‘everyman’ who was very similar to Labour’s assumed electoral supporter.

Keywords: Labour Party, popular press, newspaper language, political identity, pre–1914 British culture

Author Biography

Dr Chris Shoop-Worrall is Lecturer in Media & Journalism at UCFB, having completed his PhD at the University of Sheffield’s Centre for the Study of Journalism and History in 2019. His work explores the intersections between politics, mass media, and consumer culture within nineteenth– and twentieth–century Britain. His first book, an adaptation of his doctoral work, is forthcoming with Routledge Focus.


‘The introduction into English public life of the educated workman’: The rise of Labour in the Edwardian Mass Press



The mass election–time political culture of Edwardian Britain, into which the Labour Party[1] first entered in 1900, was framed primarily around the perceived wants and interests of an imagined ‘man in the street’, whose significance had grown particularly after the various reform acts of the 1880s.[2] This ‘everyman’ was the person to whom the proposed policies of both the Liberals and the Conservatives were increasingly pitched, on issues including tariff reform, religious education, and alcohol consumption.[3] This increasingly mass and masculinised election sphere was part of a wider consumer culture within which the everyman also held significance.[4] A key component of these interconnected cultures of politics, urban consumerism, and entertainment was the daily mass press: the ‘new dailies’ Mail and Express, which laid the groundwork for the dominant tabloid culture of the twentieth century.[5] These newspapers, and newspapers in general, were key conduits of political communication in late–nineteenth and early–twentieth century Britain.[6] Their content sensationalised and personalised election news in ways that effectively spoke to their mass readerships, many of whom were the same ‘man in the street’ sought by politicians across the political spectrum.[7] Their communicative potential was noteworthy: Stephen Koss’s chapter on these newspapers shows Joseph Chamberlain’s intense interest in courting their support[8], while recent scholarship by David Vessey has noted how the Women’s Social and Political Union (WSPU) similarly saw the merits of their suffrage campaigns capturing the attention of these particular newspapers.[9]

Labour were perhaps uniquely invested in the political significance of the new dailies. Their appeal to the man in the street — an individual whose vote Labour particularly sought — made the daily mass press a hugely significant force. Indeed, such was the perceived political importance of having a Labour–friendly mass daily newspaper that Labour would eventually launch their own, the short–lived Daily Citizen.[10] This awareness of the mass press’s appeal to the man in the street sat alongside a parallel hostility from across the early Labour movement towards this ‘capitalist’ press. The fact that the Citizen’s birth was a decade in the making spoke volumes about the agonising across the pre–war British left over what constituted appropriate mass political communication: an issue with which the party would continue to struggle for decades to follow.[11]

While some scholarship has explored aspects of Labour’s relationship with both the popular press and popular culture pre–1914[12], little exists on the ways in which Labour manifested within the pages of the mass daily press. This paper interrogates the ways in which the two founding publications of Bingham and Conboy’s ‘tabloid century’, the Mail and Express, represented the emergence of Labour during its first two general election campaigns. Using these two periods of newspaper coverage, spanning the weeks of the 1900 and 1906 elections[13], this paper explores the complex place that Labour held within the pages of these mass–selling newspapers and, by extension, within a significant component of the political culture in which the party sought success.

On the one hand, it would seem that the hostility shown across the British left towards the new dailies, and the wider culture to which they contributed, was somewhat mutual. Both the Mail and Express featured articles critical of the party’s politics, especially after its true ‘arrival’ onto the national political scene in 1906. Much of this criticism deployed a language of chaos and destabilisation: the emergence of this new, left–wing political movement clashed considerably with the broadly conservative outlook of both the new dailies and the consumer political culture to which they sold so well. However, this criticism was not uniform. In fact, both newspapers dedicated coverage that was receptive to much of this emergent party. Central to this positivity was the idea that Parliament was becoming increasingly representative: the entry of ‘working men’ into the Commons, for example, was seen as a welcome and overdue development. This, and an appreciation of some of the societal inequalities that Labour were struggling to overcome, underlines the complicated place which Labour occupied within the massified, masculine election culture to which the new dailies contributed so significantly.


Early Indifference

The 1900 election was the Labour Party’s first, as well as the first time that Britain had a socialist party competing at a national election. Their initial success was modest: two MPs elected to the House of Commons and just under 63,000 votes amassed.[14] That said, it marked a significant change in the British political landscape; in their first election, Labour won a larger share of the popular vote than John Redmond’s Irish Parliamentary Party. Considering the significance that can be (and has been) so easily placed on a party’s first election, one would assume that there was a noticeable response at the time to Labour’s electoral debut, including from two of the country’s most popular newspapers.

The reality of the response, from both the Daily Mail and the Daily Express at least, was considerably underwhelming. Admittedly, the 1900 election was defined by the central issue of the Second Boer War; a pro–imperial national spirit borne out of the war was widely credited with helping the Conservatives sweep to victory, and both new dailies’ election coverage focused heavily on the electoral importance of the ongoing conflict in the Transvaal.[15] However, even considering the weight of coverage afforded the war, the Labour Party was given almost no coverage at all. Far from being a watershed moment which saw a conservative press react with intensity, the rise of Labour prompted Britain’s two leading right-of-centre dailies to do little more than shrug.

The sparse mentions that were given to Labour by the two newspapers during their first election represented the party as a curious, inoffensive new oddity. Most of the attention in these newspapers focused not on the party itself, but on some of its high–profile individual members. Of particular interest was Keir Hardie, the party’s founder, leader, and first elected MP. One report noted that he had earned the support of renowned businessman, philanthropist, and ‘Quaker cocoa manufacturer’ George Cadbury, who had sent Hardie £500 to help the party support ‘the expenses of Labour candidates’ in Blackburn, Manchester, and Glasgow.[16] Beyond Cadbury’s support, Hardie’s brief appearances portray him as a curious eccentric, assigning him the nickname ‘Queer Hardie’ and noting how his personality was not that of traditional members of Parliament; ‘(he is) the most erratic of Labour members… his outward oddities only faintly disguise a strong, simple, resolute character’.[17]

Similarly, the other mentions of Labour parliamentary candidates focus on curious aspects of their personalities, rather than on controversial or original aspects of their political leanings. For example, a candidate in Derby called ‘Mr. R. Bell’ was portrayed much like a Liberal or Conservative candidate, with one report stating that he ‘loves conciliation more than controversy’.[18] Another, Thomas Burt of Morpeth, was described as ‘no friend of socialism’ and given a biographical sketch remarking on the originality of his political journey; ‘he still bears on him the marks of his early life of toil at the pit mouth… teetotalism and trade unionism made him a speaker… his mates elected him secretary (of his trade union) nine years later they sent him to Parliament’.[19] Far from being portrayed as revolutionaries, Labour’s new and prospective parliamentary candidates were represented as relatively unremarkable additions to the British political landscape. The language used to portray them focuses more on personality quirks than political leanings, and any reference to personal or party ideology seems deliberately to play down radical or controversial tendencies. Their emergence is noted, but as little more than a minor footnote to the wider issues of the election.

One potential reason why the Mail’s and Express’s coverage of the party’s emergence was so underwhelming can be seen in how the broader idea of a worker-propelled political movement is discussed. Again, references to a wider Labour movement are scarce, but they suggest a shared understanding that a future of worker–driven politics was a long way off. For instance, a front page in the Express features a speech from the leading Liberal Unionist MP Joseph Chamberlain, in which he espouses the view that any new, ‘Labour’ members of Parliament — ones elected directly from a working–class community to represent their interests — would be like ‘fish out of water’ in the Commons.[20] Another article, published later in the election, speculates light–heartedly on a future where Britain has a ‘worker–controlled future electorate’. It argues that a time should come when the only barrier to voting should be an age limit of 21, and concludes with an interested look forward to what types of legislation might be passed if ‘the working man controlled the voting’.[21] Interestingly, while more positive in outlook than Chamberlain’s quoted speech, this article shares the assumption that worker–driven politics was not yet a present concern.

Overall, the Labour Party’s emergence and first presence at a British general election met with a muted response from the daily popular national press. On the one hand, there is some acknowledgement of the party’s arrival onto the British political scene and of how a Labour–orientated working–class politics had the potential to lead to future change. However, this future theorising is an exception to an initial response which represents Labour and its members as odd new additions to the established political landscape. Labour’s members were presented as original and unconventional, but only in relation to aspects of their personalities or the manner of their upbringing. Indeed, their politics are barely discussed, and any references to ideology are framed to downplay any radical aspect of Labour beliefs. The impression left by these newspapers is that Labour, while new, were little but an eccentric, minor addition to British politics. Their emergence may well have been a matter of concern or interest at some undetermined point in the future; during their first general election, however, Labour was represented as a party of little concern to the readers of these two newspapers.


Second Coming

As has been discussed, the representations of the emerging Labour party in the popular new dailies during the 1900 election placed little significance on them. At the beginning of the next — and Labour’s second — general election in 1906, the initial coverage from both newspapers was similarly sparse. In the Daily Mail, for example, the opening few days of the election contained very few articles on Labour, and these, like those from 1900, characterised the party by the unconventional personalities of its members. In particular, a piece on the opening day of the campaign focuses on the sitting MP for Woolwich, his ‘quaint sayings’, and ‘his insistence on his absolute ignorance of Latin’.[22] On the same day, the Daily Express’s sole representation of Labour concerned a speech by the ‘Socialist Countess’ Lady Warwick, and how local workers in the West Ham area of London ‘go and look at the lovely Countess while she is making one of her Socialistic speeches’.[23] While covering very different stories, both newspapers were again constructing Labour, its members, and socialism in general as a quirky yet separate addition to the British political tradition.

This approach changed dramatically after Labour began winning more MPs, with the first news breaking on January 15th 1906 that Labour had already gained seven seats in Parliament. The Daily Mail noted these ‘Labour successes’ and named the new members elected for Labour.[24] The Express meanwhile represented the new significance of Labour’s election successes by including them on their front–page ‘Election Race by Motor Car’: a daily cartoon which would track a political party’s progress to the ‘finish line’ at the end of the election.[25] Labour, missing entirely from the Express’s equivalent cartoon in 1900, now merited a place in the race.

This initial appreciation by both newspapers gave way to a dramatic reaction in the days following Labour’s ‘arrival’ onto the main political stage. The day after the announcements, both newspapers published editorials focused on the electoral triumphs of Labour. The Express noted the party’s ‘astounding victories’ and how their success now posed a threat to the paper’s favoured Unionists.[26] This editorial echoed the paper’s front page of the same day, which marvelled at the ‘astounding succession’ of Labour victories while noting that it may well be a watershed historical moment; ‘nothing like it [Labour’s victories] has ever occurred in the history of British politics’.[27] The same sentiment was shared in the Mail’s editorial ‘Outlook’, headlined ‘The Rise of Labour’. Like the Express, it marked a decisive shift in the paper’s coverage, which now represented Labour as a ‘hurricane’ that was fundamentally changing the face of British politics:


Enormous Labour polls are, indeed, the great feature at the election, and even where Labour has not won it has voted in a manner that is beginning to cause nervousness to its Liberal ally . . . Socialism, by its very essence, means the abolition of all competition . . . equal rewards for fit and unfit.[28]


After the relative indifference shown during the 1900 campaign, both the Mail and the Express increasingly represented Labour as both the defining aspect of the 1906 election and a landmark shift in the history of British politics. This change in both papers’ interpretation of the party led to a multitude of articles and editorials across the rest of the election dedicated to the party and its new MPs. Some of this new content was, perhaps unsurprisingly, fiercely hostile.


Chaotic Threat

It is interesting to note that, in the same early articles detailing Labour’s historic election successes, the new dailies quickly represented Labour as a potentially damaging and dangerous new political entity. The Mail editorial cited above, for example, associates Labour with forces of chaos, from the metaphorical ‘hurricane’ to its closing outline of socialism’s radical stance against competition. The final line of the quotation goes further, communicating the potentially ruinous consequences of Labour’s anti–competitive nature; ‘if the British worker cannot compete, so much the worse for them!’[29]

The clear conclusion, that Labour’s position would restrict the competitiveness of British labour both at home and abroad, represents the party as potentially ruinous both for wider British society and for the very class of people it claims to represent. This association between Labour and chaos was echoed in the Express on the same day as the Mail’s ‘hurricane’ editorial. Its own ‘Matters of Moment’ linked the victories of the Labour party to ‘wreckage’ wrought upon the status quo, with the party’s policies labelled as both ‘fairytales’ and ‘insidious poison’.[30] Again, the choice of language in these editorials associates Labour with chaos and with a negative impact on both the political system and those who had voted, or might in future vote, for them.

These ideas of Labour–driven chaos would continue to be referenced throughout the rest of the election campaign, although the first days marked a high–point in both newspapers’ sense of panic. Labour’s successes were frequently labelled as part of a ‘revolution’ or ‘upheaval’, repeatedly suggesting a link between the party and potential political unrest. This potentially damaging impact was also applied to Labour itself, with the Mail speculating on a future split between the small pro–Liberal section of new MPs and the majority who ‘do not trust Liberals’ and whose ideological extremism threatened an irreparable rupture between the two factions; ‘[Labour radicals think] it better that every Labour candidate [loses] than that the cause should be degraded or obscured by weak MPs’.[31] While no other article considered the self–divisive nature of Labour’s emergence in 1906, this added to a broader representation in both new dailies of Labour as an unstable party, both within the wider climate of Westminster and, potentially, within its own ranks.

Another persistent representation of Labour’s chaotic nature came from both papers’ repeated association of Labour with the Liberal Party. In the initial responses of both dailies, the ‘hurricane’ and ‘wreckage’ wrought upon the election are attributed to Labour and the Liberals together. The Mail’s editorial on the sixteenth asserted the link between the two anti–Unionist parties by claiming that some Liberal candidates ‘are indistinguishable from Communists or extreme Socialists’,[32] while the Express also drew an immediate link between Labour and the Liberals, first saying the latter were ‘aided and abetted’ by the former, and that together they were a threat to the Unionists.[33] These initial links drawn between the two parties are particularly fierce compared with the rest of the coverage, but they were the first of several instances where Labour is represented directly, and negatively, in relation to its union with the Liberals.

Throughout the rest of the two newspapers’ election coverage, representations of Labour’s association with the Liberals were primarily focused on the former’s potentially damaging impact on the latter. For example, accusations of Liberalism’s pandering to Labour interests implied that the Liberals could end up regretting their partnership with the new socialists. The Mail, for instance, alluded to the idea that Labour were the real power, and that elected Liberals were ‘merely delegates’ of Labour and their trade union allies.[34] The fear of a trojan–horse socialist incursion into the Liberals was continued later in the election as both Labour and Liberal victories kept growing, with a prophetic editorial declaring that the upcoming Parliament’s true struggle would be ‘between Socialism and Protection’,[35] thus presenting Labour as the real force in any future non–Unionist government.

The Express shared a similar opinion of the two parties’ relationship, arguing that Labour, not Liberalism, would play the greater role in a future government and that a ‘solid phalanx’ of Labour members had ‘forced their way into the Liberal ranks’.[36] From evocative portrayals of a militarised Labour infiltrating Liberal ranks to the neo–criminal language of ‘aided and abetted’, the representations in both newspapers showed Labour to be just as damaging to their Liberal allies as to their Unionist opponents. This idea would continue to be explored throughout the election in both newspapers, with the ‘menace’ of Labour and their socialist policies frequently being associated with the eventual election–winning Liberals. A particularly dismissive note in the Mail, for example, declared that ‘oil and vinegar would [more] readily mix than the ideals of [Labour MP] Philip Snowden’ and the Liberals,[37] while updated summaries of the new Commons numbers combined Liberal and Labour MPs (along with the Irish Nationalists) into a single ‘Parliamentary’ column against the Unionists.

Perhaps unsurprisingly, the initial shock shown in the new dailies’ representations of the emergent Labour successes in 1906 quickly developed an antagonistic element. As two leading press supporters of the Conservatives, both newspapers could be expected to represent Labour in variously negative ways. What was remarkable was the speed of the transition from coverage of Labour’s minor oddities to its newfound revolutionary, negative impact on British politics, its supporters and its Liberal allies.

Both the Mail and the Express were undeniably hostile towards Labour after their growth in influence during the 1906 election, and in this regard Labour were justified in the hostility they would, in turn, show to these particularly popular daily newspapers. However, the hostile representations were one of several ways in which these newspapers represented the party after its surge in the polls during the 1906 election. The hostility was noticeable, but generally subsided to reveal a more complex portrayal of the party which showed an interest in, and indeed levels of appreciation for, their membership and parts of their political message.


‘A most salutary influence’

Labour’s surge in popularity in its second–ever contested election was met with some hostile words from both the Express and the Mail. Interestingly, however, the majority of the negative representations of the party focused on its potentially harmful impact within the narrow confines of the House of Commons. Whether in relation to Labour’s potential to harm Parliament, its Liberal allies or the party itself, the majority of the more negative representations in the new dailies were restricted to its place in Parliament. Very little coverage in either newspaper focused on the potentially negative impact of Labour on the everyday British public, besides the initial fear over the party’s position ‘against competition’ and a brief mention in an early Mail editorial of the party’s hostility towards public houses and a supposed plan to ban betting news inside pubs.[38] Conversely, the representations of Labour and its impact on British life outside of Westminster were broadly positive.

After the early outrage shown in both of the newspapers’ early editorials, the Mail and the Express shifted to positively representing an aspect of Labour’s emergence: the increased representation of the working classes. The day after their ‘insidious poison’ editorial, the Express ran another editorial dedicated to Labour, appreciating that ‘it is right and proper’ that the working classes had direct representation in Parliament and that Labour were well–placed to best voice their interests:


every class of the community should be represented in Parliament . . . we have more [faith] in the Labour men than to believe that they would permit themselves to degenerate into mere money–making politicians.[39]


The appreciation of working–class representation in the Commons was twinned with a portrayal of the new Labour members as people who would honestly work for them in Parliament, undistracted by other potential perks of the role in the House of Commons. A very similar sentiment was shown in the Mail’s Outlook the next day. While the newspaper’s opinion on Labour’s future plans (‘whether for good or evil remains to be seen’) created a certain degree of doubt, it agreed with the Express on matters of representation and the honesty of the new members;


It cannot be suggested that labour will be unduly represented . . . [many elected] have been bona–fide working-men… frankly, we much prefer these workers to a good many, who [hitherto] used the House of Commons as a road to money–making.[40]


Across both newspapers, Labour was represented as a positive influence both for the wider electorate and for the moral fabric of the Commons. While occasionally appearing alongside sentiments expressing mistrust or outright antagonism to the party, there was a shared understanding of Labour as a collection of politicians who would represent the British lower classes better, and less corruptly, than any other political group striving for their support. Admittedly, this more positive aspect of the party’s portrayals in the new daily press did not ever become a full endorsement, as high levels of mistrust were also associated with the party’s wider plans for the future of Parliament’s stability and the industrial way of life. It was, however, an undoubted acceptance, or possibly even a degree of admiration, of some of the party’s potential positives.


‘Gone is the Club’

As a collective party, Labour was represented in complex ways to the readers of the new dailies. Praise for their honesty, and for the overdue and deserved arrival of working-class representatives in Parliament, was counter-balanced by persistent descriptions of the party as a disruptive force to their parliamentary colleagues and the British political tradition. Interestingly, however, the majority of the coverage of the Labour Party in the Mail and the Express was not dedicated to the party itself. The most frequently occurring representations of Labour in the 1906 election focused on individual members: the MPs, old and new, whose collective integrity both newspapers positively represented.

The most noticeable focus in the new dailies was an interest in the employment backgrounds of Labour MPs. This manifested itself in sections in both newspapers that detailed members of the House: short descriptions of sitting MPs, challengers and the newly–elected. To understand the curious uniformity of the two papers’ profiles of Labour politicians, it is important to note the diversity of terms through which both Liberal and Conservative politicians were discussed in the same articles. For example, on January the seventeenth, the Mail ran a ‘Who’s Who’ column, providing brief details of a host of new faces in Parliament. The ways in which Liberal or Unionist politicians were described varied considerably: ‘forty-two years of age’, ‘an architect’, ‘a Londoner by birth and education’, ‘a Tariff Reformer’, ‘was born in 1845’, ‘a Fellow and lecturer of Merton College, Oxford’.[41]

The key words or phrases used to define Liberal or Unionist candidates differed from person to person: age, education, upbringing, employment and particular political beliefs were all used to describe them. In stark contrast, Labour candidates or returned MPs were principally defined by their engagement in hard physical labour, very often with reference to their early beginnings in those trades. The Mail’s summaries from mid–January contained, among others, the following Labour returns;


Mr. Enoch Edwards, after a defeat at last election, has gained Hanley for the Labour Party. He is fifty-four years of age. He entered a colliery aged nine . . .

Mr. George Wardle, Labour member for Stockport, worked in a factory from the age of eight and became a clerk on the Midland Railway when fifteen.

Mr. Charles Duncan, the new Labour representative for Barrow-in-Furness, is an engineer and trade-union organizer

Mr. W. C. Steadman (Central Finsbury) is a Labour member . . . a barge builder by trade

Mr. Thomas Glover, St Helens Labour representative . . . At nine years of age he was working in the mines.[42]


Where Liberals or Unionists were just as much defined by education and politics as by their employment history, Labour politicians were primarily defined by their connections to industrial labour. The Express, on the same day, compounded this representation of the same Labour members as people defined by their pasts in hard employment in its ‘Who’s Who’ equivalent, ‘The Polling’;


Finsbury Central, W. C. Steadman . . . apprenticed in the barge-building trade

Barrow-in-Furness: Charles Duncan . . . apprenticed to the engineering trade

Birkenhead: Henry Vivian . . . a carpenter and joiner by trade

Hanley, E. Edwards . . . at nine entered colliery.[43]


This attention to the manual employment backgrounds of Labour politicians was repeated throughout the election;


Summertail: son of a miner, started work as grocer.[44]

N. Barnes: apprenticed as an engineer.[45]

R. Clynes: cotton-factory boy.[46]

Crooks (Woolwich): has been a workhouse lad.[47]

Seddon (Newton): apprenticed to the grocer trade.[48]


The difference between Labour and non–Labour members was at its starkest when the briefest of summaries were printed side by side after the double election in Sunderland of a Liberal and a Labour candidate: the former was described as a Fellow of Trinity College, the latter as having ‘started work at seven’.[49]

The potential reasoning behind the consistent identification of Labour candidates by their industrial backgrounds is varied. On the one hand, there was the reality that the vast majority of Labour politicians did not have the same lavish educational or professional backgrounds often cited in descriptions of Liberal or Unionist candidates. This reality, however, cannot adequately explain the curious consistency with which both newspapers categorised Labour politicians by their labouring pasts, as non–Labour candidates sharing significant traits (for example, an excellent university education) were not treated with the same uniformity. It is possible that the new dailies’ fixation on the pasts of Labour members was an extension of the representations of individuals from 1900, which highlighted curious eccentricities of the likes of Keir Hardie. In place of ‘Queer Hardie’, there was a consistent interest in MPs with pasts in manual labour. Edwardian Britain’s Parliament was populated largely with members of the higher classes: peers, newspaper proprietors, industrialists, and lawyers.[50] Therefore, an influx of men who had worked in coal mines as children represented a curious break from the norm, a quirk of tradition that made these new members stand out from the rest. By consistently highlighting working pasts, the new dailies were partly continuing this image of Labour as a curious new phenomenon, potentially intended to provoke a wry, almost amused response from readers.

Another potential interpretation of the new dailies’ representations of Labour members as people defined by their pasts is that it shows considerable admiration of their emergence onto the political scene. These men, some of whom had gone to work from as young as seven, had now entered the elite of British political life against considerable personal odds. Their individual stories represented triumphs over adversity: proverbial rags–to–riches narratives that correlated with the new dailies’ broader interest in emotive, human-interest news content appealing to their mass, lower–class audiences. Rather than, or as well as, being a representation of curious backgrounds for British parliamentarians, these newspapers’ focus on employment pasts presented Labour members as everyday success stories to be respected and admired.

This latter interpretation is further supported by the fact that both newspapers dedicated longer profile articles to particular Labour politicians, which explicitly championed their rise from difficult upbringings. In the Mail, the article ‘A New Style Labour Member’ focused on the new West Ham MP Will Thorne. Much was made of his journey from relative poverty to the Commons, and he was positively shown to have worked his way from the bottom to the top;


Seventeen years ago . . . a day labourer. Today, he is a member of Parliament.

Proved himself a born captain . . .

Born to misery . . . (parents) brickfield workers . . . endured the burden of toil.[51]


His transformation from the ‘urban slums’ to a ‘representative of starvation’ was shown as something to be admired, despite the article’s explanation that his life had led to him becoming ‘a Socialist of the most extreme type’. Indeed, in this context, the Labour man’s radical politics were presented as an understandable, if not agreeable, response to his personal history.[52] His past was a story of respectable, positive success, even in spite of politics wholly opposed to those of these two newspapers.

The Express shared this positive depiction of Labour members and their industrial pasts with its ‘Romance of Labour’, a story about J. T. Macpherson who, having ‘served as a boy at sea’, had become an MP after his union had helped him pay his way through a degree at Ruskin College, Oxford.[53] Again, the ‘romance’ comes from an individual who had reached Parliament, via one of the world’s best universities, having started life as a child labourer. He, like other Labour MPs, was represented as a personal success story. His journey was chronicled quite succinctly in the same newspaper a few days later;


At twelve, cabin boy.

At eighteen, Middlesbrough steel smelter

At twenty-one, founder of Steel Smelters Society

At thirty-two, Oxford Graduate and MP.[54]


When discussed in the new dailies as a collective, Labour politicians were categorised as honest and potentially simple characters who would do their best to represent working people. When discussed as individuals, Labour was represented as a group deserving of respect and interest due to their shared histories of overcoming hardship to enter Parliament. Often with reference to their pasts working as children, Labour politicians were represented most strikingly as successes of hard work against personal adversity, to the point where disagreeable politics were contextualised and possibly even appreciated. Labour, both as a party and as a group of people, was shown by the Mail and the Express to be a fresh addition to political life that carried with it an emotive, positive story of triumphing over difficult beginnings.


‘What Labour Wants’

In contrast to their treatment of the party’s broad political aims, the new dailies represented Labour’s politicians as largely positive additions to the British political system. On occasion, the emphasis on personal triumphs over difficult starts in life was used as understandable context for any radical politics they might fight for in a future Parliament. This appreciation of the potential roots of socialism was not unique to profiles of individual MPs. Indeed, during the 1906 election both the Express and the Mail dedicated significant coverage to representing Labour, and socialism more broadly, as a cause driven by righteous discontent with the existing realities of British life.

The most notable example of this came in the Daily Mail and its long, two–part article ‘What Labour Wants’, written by a Mr. Bart Kennedy. Published on the seventeenth and eighteenth of January, its stated aim was to explore what the working man wanted, drawn from a series of interviews with ‘hard, strong–faced men of labour’ who, after everything, wanted nothing but ‘to live’. In its retelling of their stories, it paints an evocative picture of a horrific, lower–class existence;


[these men] did the dread work in the blackness of the earth… starving with their wives and family on a few shillings strike pay. Wives suckle their babies from their almost dry breasts.

Treated worse than the beasts in the fields.

Their wrongs cry out, no voice, no pen can fully put their case.[55]


In addition to these dramatic representations of suffering workers, Bart Kennedy portrays the owners of these businesses as nothing less than villains;


The people who own the mines have gradually pressed them [the labourers] down below the bare living point.[56]

. . . making the worker produce more wealth than it ever did before, and at the same time it is giving him less in proportion for his labour

You (the owner) are going on in a way that will bring England down about our ears.[57]


This extraordinary account of striking workers and profit–driven owners vividly represents an unsustainable divide between the richer and poorer elements of British society. Taken in the context of the broader coverage of the party and its members, it articulates the cause of the Labour party as one entirely justified by the conditions then facing workers. One of the party’s principal aims — to fight for better conditions for workers — is one that would directly tackle the ‘evil’ shown so evocatively in this article.

Interestingly, however, the second part of the article concludes that ‘evil though the present system, it is better than it would be under Socialism’. This conclusion is sound, asserts the writer, because the current evil lies in the haplessness of authority, which would only increase under a socialist government. This conclusion, while strikingly brief in the context of the longer two–part article, correlates with the broader attitudes shown across the two newspapers towards Labour’s political ambitions. Labour and socialism are never shown positively; they are frequently associated with instability and neo–revolutionary disorder. What is interesting, though, is that these two newspapers, which clearly and consistently represented Unionist politics as the best course of action, presented the conditions that Labour’s politics sought to address as a significant concern to their readers. The newspapers did not represent Labour’s motivations negatively and at times actively agreed with them on issues that politics needed to address. The party’s solution was not represented positively; their intentions often were.

This balance between the rejection and appreciation of Labour’s political aims was particularly pronounced in the Mail. For example, the twenty–third of January saw a column in the Mail written by the recently–elected Labour MP Philip Snowden, in which he focused on the party’s aim to ‘transfer large profits from private pockets to public utility… (and) enable better conditions to be given to the workers’.[58] On the one hand, sub–headings stating that Labour is a party that will ‘Tax the Very Rich’ and instigate ‘The Overthrow of Capitalism’ suggest the potentially revolutionary intentions of Labour, but this is countered by Snowden’s assertion that any future policy would be ‘not quite so blood–curdling as it sounds’. It is interesting that the input of the newspaper, the sub–headings, often contrasts with the actual content of Snowden’s writing; it is the headings, and not the Labour MP, that mention anything tangibly suggesting an attempt to overthrow the existing capitalist system. This article, like the Kennedy article, touches upon the struggle between wealthy owners and poor workers, and represents Labour as a party fighting against an undisputed wrong. Equally, particularly due to the sub–headings, the more positive representation of Labour’s motivations is countered with language portraying the party as a force of revolutionary harm.

The Express also echoed these same sentiments, though less frequently than its rival. Most notably, on the nineteenth of January, an editorial discussed ‘Labour on its Trial’ and the ‘colossal experiment’ of a socialist party in Britain. It, in contrast to the evocative longer reads in the Mail, represents the duality of Labour’s politics very concisely;


we say, give Labour its chance. If it succeeds, well, good.

If it fails, ________![59]


That brief editorial summary gets to the crux of this curious complexity at the heart of the representations of Labour’s politics. The party had won its place in the Commons. Now, it was time to see how it planned to solve issues that were of undeniable concern to British society. If its solutions proved a success, then they would be of benefit to all: in particular, to the many people who identified with the imagined ‘man in the street’ sought by political parties, the mass press, and the surrounding popular culture of the period. However, as demonstrated by the concluding pause, it was clear that any Labour success, according to these newspapers, was both undesirable and rather unlikely.

This dichotomy teases out the fascinating and often contradictory place of Labour within the new dailies: two fundamental and widely consumed components of the election culture of early twentieth–century Britain. This new political party was, for many, a hostile and radical entity that clashed with much of the political and popular cultures into which it entered. However, its perceived connections to the everyman who was such a dominant part of those same two overlapping cultures meant that, for all the hostility, there was also considerable admiration and support shown by the new dailies toward this ‘chaotic’ new addition to the electoral landscape of Long Edwardian Britain. While it would take until 1912 for Labour to have a mass daily newspaper of its own, the party had already established a diverse and contested presence within Britain’s most popular daily newspapers during its emergent years as a political party.




Primary Sources:

Daily Mail: 26th September – 24th October 1900; 12th January – 8th February 1906

Daily Express: 26th September – 24th October 1900; 12th January – 8th February 1906


Secondary Reading:

Beers, L. Your Britain: Media and the Making of the Labour Party (Cambridge, MA, 2010).

Bingham, A. and Conboy, M. Tabloid Century: The Popular Press in Britain, 1896 to the Present (Oxford, 2015).

Blaxill, L. ‘Joseph Chamberlain and the Third Reform Act: A Reassessment of the “Unauthorized Programme” of 1885’. Journal of British Studies 54/1 (2015), pp. 88–117.

______. The War of Words: The Language of British Elections, 1880-1914 (Woodbridge, 2020).

______. ‘Electioneering, the Third Reform Act, and Political Change in the 1880s’. Parliamentary History 30/3 (2011), pp. 343–73.

Brodie, M. The Politics of the Poor: The East End of London, 1885-1914 (Oxford, 2004).

Butler, D. and Butler, G. British Political Facts, 10th ed. (Basingstoke, 2010).

Conboy, M. The Press and Popular Culture (London, 2002).

Hopkins, D. ‘The socialist press in Britain, 1890-1910’ in Curran, J., Boyce, G. and Wingate, P. (eds.), Newspaper History from the Seventeenth Century to the Present Day (London, 1978), pp. 265-280.

Koss, S. The Rise and Fall of the Political Press in Britain, vol. 2 (London, 1984).

Lawrence, J. Electing Our Masters: The Hustings in British Politics from Hogarth to Blair (Oxford, 2009).

Rix, K. ‘“The Elimination of Corrupt Practices in British Elections”? Reassessing the Impact of the 1883 Corrupt Practices Act’. The English Historical Review CXXIII/500 (2008), pp. 65–97.

Shannon, R. The Age of Salisbury, 1881-1902: Unionism and Empire (London, 1996).

Shoop-Worrall, C. ‘Politics and the Mass Press in Long Edwardian Britain 1896-1914’. (unpublished PhD thesis, University of Sheffield, 2019).

Thomas, J. A. The House of Commons 1906-1911 (Cardiff, 1958).

Thompson, J. British Political Culture and the Idea of ‘Public Opinion’, 1867-1914 (Cambridge, 2013).

Vessey, D. ‘Words as Well as Deeds: The Popular Press and Suffragette Hunger Strikes in Edwardian Britain’ Twentieth Century British History, 32/1 (2021), pp. 68–92.

Waller, P. J. and Thompson, A. F. Politics and Social Change in Modern Britain: Essays Presented to A.F. Thompson (Brighton, 1987).

Waters, C. British Socialists and the Politics of Popular Culture, 1884-1914 (Manchester, 1990).

Windscheffel, A. Popular Conservatism in Imperial London, 1868-1906 (London, 2007).



[1] Throughout this paper, the word ‘Labour’ will be used to refer both to the party and, at times, to the wider movement to which the party remained connected. It is noted by the author, however, that they existed as the Labour Representation Committee (LRC) during the general election of 1900.

[2] L. Blaxill, ‘Joseph Chamberlain and the Third Reform Act: A Reassessment of the “Unauthorized Programme” of 1885’, Journal of British Studies 54/1 (2015), pp. 88–117; L. Blaxill, ‘Electioneering, the Third Reform Act, and Political Change in the 1880s’, Parliamentary History 30/3 (2011), pp. 343–73; M. Brodie, The Politics of the Poor: The East End of London, 1885-1914 (Oxford, 2004); P. J. Waller and A. F. Thompson, Politics and Social Change in Modern Britain: Essays Presented to A.F. Thompson (Brighton, 1987), p. 36; K. Rix, ‘“The Elimination of Corrupt Practices in British Elections”? Reassessing the Impact of the 1883 Corrupt Practices Act’, The English Historical Review CXXIII/500 (2008), pp. 65–97; R. Shannon, The Age of Salisbury, 1881-1902: Unionism and Empire (London, 1996).

[3] L. Blaxill, The War of Words: The Language of British Elections, 1880-1914 (Woodbridge, 2020); A. Windscheffel, Popular Conservatism in Imperial London, 1868-1906 (London, 2007).

[4] M. Conboy, The Press and Popular Culture (London, 2002), p. 95.

[5] A. Bingham and M. Conboy, Tabloid Century: The Popular Press in Britain, 1896 to the Present (Oxford, 2015), pp. 7–9.

[6] For more on the broader importance of newspapers, see J. Lawrence, Electing Our Masters: The Hustings in British Politics from Hogarth to Blair (Oxford, 2009), p. 78; J. Thompson, British Political Culture and the Idea of ‘Public Opinion’, 1867-1914 (Cambridge, 2013), p. 25; Windscheffel, Popular Conservatism in Imperial London, 1868-1906, pp. 26–7.

[7] C. Shoop-Worrall, ‘Politics and the Mass Press in Long Edwardian Britain 1896-1914’ (unpublished PhD thesis, University of Sheffield, 2019).

[8] S. Koss, The Rise and Fall of the Political Press in Britain, vol. 2 (London, 1984), pp. 15–53.

[9] D. Vessey, ‘Words as Well as Deeds: The Popular Press and Suffragette Hunger Strikes in Edwardian Britain’, Twentieth Century British History 32/1 (2021), pp. 68–92.

[10] Shoop-Worrall, ‘Politics and the Mass Press in Long Edwardian Britain 1896-1914’, pp. 180–200.

[11] See L. Beers, Your Britain: Media and the Making of the Labour Party (Cambridge, MA, 2010).

[12] D. Hopkins, ‘The socialist press in Britain, 1890-1910’ in J. Curran, G. Boyce and P. Wingate (eds.), Newspaper History from the Seventeenth Century to the Present Day (London, 1978), pp. 265-280; C. Waters, British Socialists and the Politics of Popular Culture, 1884-1914 (Manchester, 1990).

[13] See Bibliography

[14] D. Butler and G. Butler, British Political Facts, 10th ed. (Basingstoke, 2010).

[15] Bingham and Conboy, Tabloid Century, p. 26.

[16] ‘Campaign Items’, Daily Mail 27 September 1900.

[17] ‘Who’s Who in the Election’, Daily Mail 5 October 1900, p. 3.

[18] ‘Who’s Who in the Election’.

[19] ‘Who’s Who in the Election’.

[20] ‘Labour Members and Mr. Chamberlain’, Daily Express 1 October 1900, p. 1.

[21] ‘The Working Man’s Vote’, Daily Express 11 October 1900, p. 6.

[22] ‘Woolwich’, Daily Mail 12 January 1906, p. 3.

[23] ‘The Socialist Countess’, Daily Express 12 January 1906, p. 5.

[24] ‘Labour Successes’, Daily Mail 15 January 1906, p. 7.

[25] ‘Election Race by Motor-Car’, Daily Express 15 January 1906, p. 1.

[26] Daily Express 16 January 1906, p. 4.

[27] Ibid., p. 1.

[28] ‘The Outlook: The Rise of Labour’, Daily Mail 16 January 1906, p. 6.

[29] Ibid.

[30] Express, 16 January, p. 4.

[31] ‘The Coming Troubles of the Labour Party’, Daily Mail 31 January 1906, p. 6.

[32] ‘Rise of Labour’, Mail, p. 6.

[33] Express, 16 January, p. 4.

[34] ‘The Outlook: Revolution of 1906’, Daily Mail 18 January 1906, p.6.

[35] ‘The Outlook’, Daily Mail 22 January 1906, p. 6.

[36] ‘Solid Labour Phalanx’, Daily Express 18 January 1906, p. 5.

[37] ‘The Outlook: Hushing it up’, Daily Mail 23 January 1906, p. 6.

[38] ‘The Outlook’, Daily Mail 7 February 1906, p. 6.

[39] ‘Matters of Moment: Labour and Liberalism’, Daily Express 17 January 1906, p. 4.

[40] ‘Revolution of 1906’, Mail, p. 6.

[41] ‘Who’s Who in the New House’, Daily Mail 17 January 1906, p. 7.

[42] Ibid.

[43] ‘The Polling’, Daily Express 17 January 1906, p. 1.

[44] ‘Who’s Who’, Daily Mail 19 January 1906, p. 7.

[45] ‘The Polling’, Daily Express 19 January 1906, p. 1.

[46] ‘Labour Successes’, Daily Mail 15 January 1906, p. 7.

[47] ‘The Polling’, Daily Express 18 January 1906, p. 1.

[48] ‘Who’s Who’, Daily Mail 25 January 1906, p. 4.

[49] ‘The Polling’, Daily Express 19 January 1906, p. 1.

[50] J. A. Thomas, The House of Commons 1906-1911 (Cardiff, 1958).

[51] ‘A New Style Labour Member’, Daily Mail 19 January 1906, p. 6.

[52] This would not be unique to the two papers’ coverage of Labour, as the broader issue of British socialism was discussed in dedicated articles elsewhere in the election coverage (See ‘What Labour Wants’).

[53] ‘Romance of Labour’, Daily Express 20 January 1906, p. 1.

[54] ‘Labour MP’s Romance’, Daily Express 23 January 1906, p. 5.

[55] ‘What Labour Wants’, Daily Mail 17 January 1906, p. 6.

[56] Ibid.

[57] ‘What Labour Wants (Part II)’, Daily Mail 18 January 1906, p. 6.

[58] ‘The People’s Party: Which Will Tax the Very Rich’, Daily Mail 23 January 1906, p. 6.

[59] ‘Matters of Moment: Labour on its Trial’, Daily Express 19 January 1906, p. 4.

Deconstructing monarchical legitimacy: Lancastrian depositional propaganda and the language of political opposition, c. 1399–1405


This article assesses the political impact of the propaganda created by the newly-installed Lancastrian regime in 1399, used to justify the deposition of Richard II and legitimise the accession of Henry IV. Ideological and historical discourses, underwritten by the concept of kingship, were integral to Lancastrian depositional propaganda. Importantly, they were also appropriated by overt political opposition in Henry IV’s early reign (c. 1399–1406) to articulate and justify their grievances, and used as interpretative frameworks by chroniclers to rationalise this opposition. Firstly, this article provides a new perspective on Lancastrian propaganda, emphasising the role of the literary, historical, and ideological context in shaping its language, and the ideas which underwrote it. It will analyse “official” Lancastrian documents, pro-Lancastrian chronicles, and more equivocal chronicles, including those from France, to identify three key discourses. It will then show how—and why—the leaders of the Percy Rebellion (1403) and Archbishop Scrope’s Rebellion (1405) also used these discourses as vehicles for arrogating their own legitimacy as rebels, and for simultaneously challenging Henry’s legitimacy as king. The “manifestos” published by, or ascribed to, the rebels of 1399, 1403, and 1405, and their reception and reproduction by chroniclers, are discussed here, to illustrate the resilience of the discourses first used in depositional propaganda and how they had the potential to shape subsequent political opposition. This article emphasises the importance of ideology and ideas, the rhetorical power of language and “history”, and the considerable, yet hitherto unappreciated, impact these had on the early Lancastrian polity and its politics.

Keywords: Lancastrian, Henry IV, Richard II, Propaganda, Chronicles, Ideology, Political Opposition, Deposition, Rebellion, Kingship

Author Biography

David Clewett is a graduate of the University of Nottingham, and a recipient of the W. R. Fryer Prize. He is currently studying for a PGCE in Secondary History at Nottingham.


Deconstructing monarchical legitimacy: Lancastrian depositional propaganda and the language of political opposition, c. 1399–1405

Download PDF

The end of the fourteenth century in England is generally discussed as a rather troubled period, afflicted by ‘baronial rebellions’, ‘an immediate and steep plunge into insolvency’, and an ‘acute instability’ in government and politics.[1] These issues largely stemmed from, or were exacerbated by, the particularities and consequences of the accession of Henry IV. In 1399, Henry, in the ostensible process of reclaiming his ducal inheritance of Lancaster, rebelled and usurped the throne of England from his infamously megalomaniacal cousin, Richard II. However, alongside the naturally destabilising impact of changes in dynasty and regime, the process of deposition was antithetical in several ways to prevailing ideological schema. In particular, the fundamental centrality and apparent inviolability of the king to the medieval political system were compromised by Richard’s deposition, thus making monarchical legitimacy something of a political cause célèbre that Henry could never quite escape. The new Lancastrian regime was, therefore, anxious to emphasise the legitimacy of his kingship, and of Richard’s deposition. The rhetorical power of the written word—and, indeed, the historical “truth” it sought to record—in this period lent itself well to such concerns; the Lancastrian government quickly became ‘adept propagandists’.[2]

The existence of “official” Lancastrian ‘propaganda’ has long been recognised.[3] While early historians of the reign, namely James Wylie and William Stubbs, unwittingly accepted the Lancastrian version of events in their narratives, more recent historians have termed it a ‘smoke-screen of untruth’.[4] Indeed, since the 1930s at least, the existence of ‘deliberate suppressions of the truth’ and ‘soothing falsehoods’ in Lancastrian propaganda has been appreciated.[5] Yet, the concomitant assumption of incredibility and inherent political expediency of Lancastrian propaganda has led to a tendency to conflate this rather cynical appraisal with a misconceived understanding of how contemporaries received and worked with it—or against it. Historians have, therefore, generally failed to consider that Lancastrian propaganda, or, more specifically, the discourses it employed and the ideologies which underwrote them, could, and did, have a significant political role after the events of 1399. Tentative work in this area, such as that by Paul Strohm and Jenni Nuttall, has, however, indicated that late-medieval society’s ‘knowledge of and interest in political language’ was greater than previously believed, and that the political community was undoubtedly ‘highly aware of the power of words’ and their manipulation.[6] These are assumptions that also underpin the current study. Yet, even these commendable studies are limited in that they focus almost exclusively on the poetic, allegorical, and didactic literature produced at the time, such as the vernacular Crowned King and Richard the Redeless (both anonymous), or Thomas Hoccleve’s Regiment of Princes.
Treatment of more “political” or politicised documents is somewhat lacking, despite the increasing appreciation of ‘the importance of the study of language and literary production’ for understanding the dynamics of medieval politics.[7] As such, the possibility that Lancastrian depositional propaganda could influence the actual political dynamic in the early Lancastrian polity, that is to say, beyond the immediacy of 1399–1400, is left underappreciated.

This study will illustrate, therefore, that Lancastrian propaganda had a more significant political role in the tumultuous years of Henry IV’s early reign than has hitherto been ascribed to it, and that it reveals and reflects more about the contemporary political-ideological Zeitgeist than previously thought. This will be done through the lens of three historical and ideological discourses integral to both the Lancastrian propaganda of 1399, and the anti-Lancastrian indictments produced by the Percy Rebellion of 1403 and Archbishop Scrope’s Rebellion of 1405, long regarded as two of the most serious challenges of his reign.[8] These are: the historical nature and conceptual implications of Richard’s resignation as king, the king’s expected and perceived financial conduct, and the king’s expected and actual choice of counsel. This article will show that, because of the similarity and continuity in the moralised discourses used to internally articulate and externally rationalise them, the rebellions of 1403 and 1405 were conceptualised through—and perceived in relation to—the ideas given currency in the depositional propaganda of 1399. It will show how these ideas rotated around the axis of monarchical legitimacy in particular, and how they were set within the conceptual framework of kingship in general. It will become apparent that the regime lost control of the very things that underwrote its own existence, which consequently provided a ‘perpetual opportunity’ for those who soon came to challenge it.[9]


Lancastrian Historical Writing and the Use and Abuse of “History”

The Lancastrian regime, unsurprisingly, sought to rewrite history or, at least, those parts which they perceived as important, because they ‘clearly recognized the value of historical works as propaganda’.[10] As such, historical—or, rather, historicised—discourses were brought quite organically into, and formed an integral part of, Lancastrian propaganda. One in particular, that being the nature of Richard’s resignation as king, and Henry’s role in the process, was especially important, given the conceptual implications it could have on both monarchs’ legitimacy. This also held considerable moral authority given its immediate relevance, and will be considered here as representative of the broader Lancastrian use and abuse of “history”.

On the one hand, we have contemporary and near-contemporary sources representative of Lancastrian depositional dogma. These include “official” governmental documents, such as the parliamentary Record and Process of 1399; chronicles, largely monastic and written in the aftermath of the usurpation, from ‘official Lancastrian apologists’, like Thomas Walsingham’s Chronica Maiora, the Annales Ricardi Secundi, and, tentatively, the secular chronicle of the Henrician courtier Adam of Usk; and other, often more equivocal, anonymous chronicles, like the Vita Ricardi Secundi from Evesham, and the Continuatio Eulogii (the continuation of the Eulogium Historiarum, also known as the Eulogium).[11]

The Record of 1399 was produced by the new Lancastrian regime, albeit ostensibly through parliament, and with its ‘deliberately vague and possibly misleading contents’, it is the clearest expression of the ‘official Lancastrian view’, and presents a rather straightforward and convenient narrative of Richard’s resignation.[12] The Record recounts, unequivocally, that Richard had, while ‘at liberty’ in Conway, promised to ‘resign and relinquish the crown … and his royal majesty’.[13] Walsingham, an ‘acerbically anti-Ricardian chronicler’ from St Albans, echoes this point, stating that Richard was then willing to uphold this promise and, in the Tower on 29 September, ‘with a cheerful countenance’, he ‘renounced and quit his royal powers’.[14] This decision, thus presented as entirely free and comprehensive, was underpinned by his own recognition of his ‘unfitness and inadequacy’ as king, with his ‘notorious faults’—subsequently set out in the Record’s thirty-three “deposition articles”—rendering him entirely deserving of deposition.[15] This portrayal was intentionally uncontroversial, as by indicating that Richard had willingly resigned the crown, Henry could ‘mitigate the ideological discomfort’ concomitant to royal depositions.[16] Henry’s accession, agreed upon ‘unanimously and without any difficulty or delay’ by the three estates (the clergy, or Lords Spiritual, the nobility, or Lords Temporal, and the Commons) in parliament, in light of his genealogical pedigree, was thus rendered wholly legitimate.[17]

On the other hand, those sources which provide alternative versions of events, either by omission or contradiction, and which thus throw the use of this discourse into sharper relief, include: contemporary Cistercian monastic chronicles, namely the Dieulacres Chronicle, the Whalley Chronicle, and the Kirkstall Chronicle; and secular French chronicles, such as Jean Creton’s Histoire du Roy d’Angleterre Richard (known as the “Metrical History”) and the Chronique de la Traïson et Mort de Richart Deux Roy D’angleterre, which are ‘uniformly favourable’ to Richard.[18]

These sources suggest that Richard’s resignation was not so amicable. The Cistercian Kirkstall Chronicle, for example, does not record any promises made by Richard at Conway or Flint, and consequently does not imply a free or willing resignation in the Tower. Instead, it does record Henry’s rather equitable distribution of the severed heads of Richard’s councillors after they had been beheaded at Chester, an event which was, undoubtedly, not conducive to the subsequent atmosphere of cordiality between the two at Flint, as alleged by the Vita.[19]

Other sources contradict the Lancastrian narrative, and, notably, draw upon the idea and act of perjury to add a moral emphasis to their accounts. The Dieulacres Chronicle and Whalley Chronicle in particular leave ‘little doubt’ that Richard’s free resignation was a fabrication.[20] Dieulacres, as ‘the most fearlessly partisan [to Richard] of the English chronicles’, is a particularly valuable counterpoise to the Lancastrian narrative.[21] It highlights the duplicity and perjury of Henry’s proctors in securing Richard’s eventual—and evidently not free—resignation. Archbishop Arundel and Henry Percy, the first Earl of Northumberland, at Conway, ‘swore upon the sacrament of the body of Christ … that King Richard would be permitted to retain his royal power and dominion’, but then at Flint they ‘denied their fine promises’, capturing Richard, ‘and began to treat [him] like a prisoner’.[22] Even Usk’s chronicle, valuable for its author’s proximity to the events, as one of the commissioners tasked with legalising Richard’s removal, presents this episode similarly, despite this intimacy with the Lancastrian faction. He, like Dieulacres, records the apparent conditionality of Richard’s surrender to Henry through Northumberland and Archbishop Arundel, such that he promised to do so only ‘on condition that his [royal] dignity would be saved’—he would remain king.[23] The Whalley Chronicle also highlights Henry’s apparent perjury, stating that he, ‘contrary to the aforesaid oath [to treat Richard appropriately, and not to dethrone him], seized King Richard’ and imprisoned him in the Tower ‘until such time as he would resign to him the crown’.[24] The moral emphasis contemporaries placed upon oath-taking and promises, especially those made upon religious artefacts, was significant, and we should not pass over their inclusion in both Lancastrian and non-Lancastrian historical narratives. 
Perjury was, therefore, viewed with considerable distaste, with the perjurer losing their moral authority, and in a king’s case, his legitimacy. A final English document, the memorandum entitled the Manner of King Richard’s Renunciation, previously regarded by George Sayles as a ‘Lancastrian narrative’, but now regarded, following Christopher Given-Wilson’s reappraisal, as a more ‘independent account’, presents several clear contradictions to the Lancastrian version.[25] Specifically, this document, likely an eyewitness account, according to Given-Wilson, and thus of great value, records with exceptional clarity that Richard ‘would not [resign] under any circumstances’.[26] Together, these sources indicate that Richard’s resignation was neither free nor willing, thus rendering it invalid, and the act of perjury committed by Henry’s proctors was associated with him personally by these accounts, thereby compromising his integrity and legitimacy as king.

French chronicles, taking a uniformly ‘pro-Ricardian standpoint’, have been seen, perhaps a bit too emphatically, as ‘the most accurate accounts’ of the deposition.[27] Firstly, Creton’s Metrical History carries particular value for scholars, especially for events at Conway, owing to Creton’s personal presence at Conway in 1399, as part of his travels in England, and the History having been written shortly after, over the winter of 1401–02.[28] In this, Creton, like Dieulacres, highlights Northumberland’s—and, by association, Henry’s—duplicity and perjury, by stating how he ‘swore upon the body of our Lord’ to convince Richard of his ostensible, but ultimately false, honesty.[29] Secondly, the contemporary historian-poet Jean Froissart, despite his superficial inaccuracy with dates and places, similarly contradicts the more general thrust of the Lancastrian narrative, making it clear that, as Richard was ‘imprisoned in the Tower’, ‘it was decided that [he] must give up all his royal prerogatives’.[30] Of importance here is the removal of Richard’s agency over changes in his own condition. While Froissart acknowledges that Richard ultimately gave up the crown ‘freely and willingly’, by framing it as a pragmatic recognition of the precarious state of his life in London, we can understand the Lancastrian emphasis on an entirely voluntary process to be a fabrication, divorced from Richard’s actual circumstances.[31] And thirdly, the Traïson et Mort includes a novel episode regarding the Bishop of Carlisle, who apparently made a protest in parliament that Richard had not had a fair trial in person, and that he should, in fact, be brought to parliament ‘to see whether he be willing to relinquish his crown to the duke or not’.[32] The possibility of such dissent clearly challenges the perceived validity of the apparently self-evidently free resignation in the Tower.
Cumulatively, these accounts of the deposition highlight that the Lancastrian historical narrative of Richard’s resignation, with its emphasis on notions of peace, amicability, and moral acceptability, was an important discourse within Lancastrian propaganda.

We then see that this same discourse was employed in the accounts of the Percy Rebellion, crucially, as a vehicle to challenge Henry’s legitimacy. The Percy “manifesto”, set out by John Hardyng, has been treated and dismissed somewhat unfairly by historians, on the implicit assumption that its author’s own political affiliation, both at the time as a Percy associate, and in later years as a Yorkist, and the potential political expediency it could thus serve, precludes it from being of much use.[33] That the Yorkist version of Hardyng’s chronicle, in which the manifesto is recorded, is not favourable to Henry is well known.[34] Yet, while his chronicle was written some fifty years after the Percy Rebellion, and so the specifics of the manifesto have rightly been questioned, it is nevertheless fair to assume that he at least captured the main ideas put across in 1403. Even if Hardyng exaggerated how far he ‘knewe [Hotspur’s] entent’, it is not too far removed from his position, as a member of the household of Hotspur (the son of the rebel Earl of Northumberland) ‘fro twelve yere of age’, to suggest that his account is based on the contemporary political Zeitgeist as he experienced and internalised it.[35] This manifesto is, therefore, highly informative, and his chronicle not entirely the ‘most untrustworthy quarter possible’ it has been dismissed as.[36]

Firstly, the manifesto flatly contradicts the Lancastrian narrative of 1399, which can be interpreted as a method of legitimising the rebels’ resistance to Henry. While the manifesto acknowledges that Richard had ‘resigned the kingdoms of England and France’, it stresses that this only came after being coerced ‘under threat of death’.[37] Hardyng’s verse similarly states that the resignation had been made only ‘vnder dures … in fere of his life’.[38] It is thus implied that the throne which Henry claimed was not vacant, because Richard’s resignation was invalid, and his consequent accession was thereby rendered illegitimate.

Secondly, the manifesto also challenges Henry’s legitimacy through the idea of perjury, as the Record did with Richard’s. An oath, sworn ‘upon the holy gospels’, was apparently made by Henry at Doncaster in 1399, with rumours elsewhere of one made at Knaresborough.[39] Henry had apparently sworn to ‘claime no more but his mothers heritage, His fathers landes, and his wifes in good entent’, and to ensure that ‘Richard would remain king for the term of his life’, and retain his royal prerogatives.[40] Richard’s enforced resignation, his unsavoury death, and Henry’s seizure of the crown thus violated this oath, rendering Henry ‘perjured and false’.[41] Yet, Dieulacres does refer to an oath sworn by Henry, on the relics of Bridlington, that he would not seek the Crown; whether this was made at Doncaster is unclear, but perhaps the perjurious element thus suggested may actually be less a fabrication than historians have thought.[42] That Henry made some sort of oath in 1399, at Doncaster, Knaresborough, or elsewhere, is probably true—perhaps, as has been suggested, even ‘inherently plausible’.[43] While the Doncaster oath may still be dismissed as a politically expedient ‘forgery’, it does nevertheless show that the idea of perjury certainly had a legitimating or de-legitimating effect, and that it was employed as a rhetorical device by the political opposition during the events of both 1399 and 1403 for those ends.[44] Overall, what we find in the Percy manifesto is a commentary upon Henry’s current legitimacy as king, based on a revised account of the events which made him so.

The rebels of 1405 levelled charges against Henry that ‘closely echoed’ those of both 1399 and 1403.[45] They were previously, and unfairly, dismissed as ‘naïve nonsense’, highly divorced from political reality.[46] But it is now understood that, actually, they ‘reflected political reality and resonated with those who read them’, which throws into sharper relief how they might have actually reflected perceptions of Henry’s legitimacy.[47] The manifesto ascribed to Archbishop Scrope and his followers is more widely and variably recorded than that of 1403. Walsingham’s version is usually seen as the most accurate, with historians like Given-Wilson taking for granted that he had ‘translated [the articles] almost word for word, and … inserted them here as they were expressed, without any bias’.[48] Yet, it is likely, upon analysis of alternative sources, that he actually consciously obscured certain historical narratives, which naturally throws into doubt his professed accuracy. In particular, there is an alternative version of Scrope’s manifesto, which has been in print for some time, but has been largely unused by scholars, although it is unclear why.[49] This longer version, valuable for its vociferous contrariness to Walsingham’s, condemns the Lancastrian faction as ‘invasores, destructores et proditores (invaders, destroyers, and traitors)’, a turn of phrase which would be highly out of place in Walsingham’s chronicles.[50]

The narrative of Richard’s deposition thus resurfaces in 1405 in a similarly contentious manner. The fourth article of the longer manifesto states that Henry had captured Richard and forced him to resign the crown ‘per metum mortis (under fear of death)’, thereby making the invalidity of his resignation self-evident, and Henry’s conduct morally questionable.[51] The manifesto also comments upon the nature of Richard’s death. Article five makes it clear that Henry had sent Richard to Pontefract and had him shamefully murdered there, after fifteen days of indignities.[52] This directly contrasts with the nascent Lancastrian narrative of Richard’s death, as represented in the Annales Ricardi Secundi: Richard was, after hearing of the death of John Holland, the Duke of Exeter and a close and longstanding friend, apparently ‘so overwhelmed with grief … that he wished to put an end to his life by refusing all food. So thoroughly did he starve himself’.[53]

The theme of perjury returns again in 1405. The second article highlights that Henry had returned to England ‘contra juramentum (against his oath)’, although which oath this refers to is not made clear, and on the ostensible premise of recovering only his ‘hæreditatem paternam (paternal inheritance)’, which retrospectively exacerbates the detestable nature of his conspiracy, given that he then seized the throne instead.[54] Clearly, the idea of perjury was one eagerly seized upon by political opposition in 1399, 1403, and 1405, and was used explicitly as a de-legitimating tool to simultaneously challenge the monarch and justify their resistance.

The events of Richard’s deposition were therefore highly contested, and as a whole formed a key historical discourse within both Lancastrian propaganda and that produced by subsequent anti-Lancastrian political dissenters. It was made ever more relevant and useful through the implications it had on the legitimacy of both monarchs’ kingships; a fact which underwrites its use across the period. To obscure any resistance on Richard’s behalf, and any intimation of Lancastrian misconduct, as favourable narratives, like the Record, do, is to provide Henry with a greater degree of legitimacy. To discuss the complications of Richard’s resignation, in particular the element of coercion and Lancastrian perjury involved, as unfavourable sources, like the French chronicles, do, is to naturally throw into question the validity of the deposition, the vacancy of the throne, and the legitimacy of Henry’s accession and assumption of kingship as an office.



Political Theory and Depositional Propaganda

The second and third discourses considered by this study relate to the diachronic ideological frameworks in which the political community operated, one of which gained an amplified synchronic political currency through its deployment in 1399: the concept of kingship. It is probably not too ambitious of Mark Ormrod to suggest that kingship was the one political issue on which ‘almost everyone living’ in England had an opinion.[55] This heterogeneity notwithstanding, the common denominator across the various theories of kingship was the king’s centrality to the political system, and his threefold purpose: to provide for ‘the defence of the realm’, which had a financial resonance; to ‘maintain internal order’ through judicial means; and to provide a general ‘directive force’ in the government of the realm, moderated by his moral qualities.[56] The currency of these ideas stemmed from their elaboration in the “mirrors for princes”, a didactic genre of advice literature extolling the virtues of the ideal ruler.[57] In particular, the theory of kingship reflected in the Record and Process was influenced by medieval writers of political theory, such as Henry of Bracton, John of Salisbury, and Giles of Rome.[58] Kingship was largely a public performance, and one which needed to be upheld to retain royal legitimacy. For the purposes of this study, in the two discourses discussed in this section, specific reference is made to the king’s expected financial conduct, and his keeping of royal counsel. What we find in Lancastrian sources, when they are examined using these two discursive frameworks, is critical commentary on Richard’s performance as king, and the inference that Henry would perform better. It will emerge that, as John Watts has suggested, ‘the public principles and practices’ of the medieval political system were, in fact, ‘quite as real as the private aims of its participants’.[59]

The interwoven deployment of these two discourses is revealed in the Record. The first article states that Richard had granted the crown’s ‘goods and possessions … to unworthy persons’, thus ‘dissipating them carelessly’, and ‘imposing taxes and other weighty and insupportable burdens on the people without cause’.[60] Furthermore, the inclusion of the coronation oath in the Record seems to bear witness to the emphasis the propagandists placed upon the oath as a touchstone for their subsequent critical commentary on the perjury inherent in Richard’s exercise of kingship.[61] Yet, this aspect of the Record is often ignored or treated as a formulaic inclusion, isolated from the charges it quite clearly underpins. It is made very clear in the Record that Richard had ‘rashly [violated] the aforesaid oath’.[62] Perjury was an unavoidable and, in the event, quite useful idea underwriting Richard’s deposition.


The Problem of Financing Dynastic Legitimacy

Alongside the increasing emphasis on the more general ‘responsibilities of the king in domestic government’, the late-medieval king was increasingly expected to rule in such a way as to enhance the material well-being of his subjects, which meant a reduction in public taxation, non-interference with his subjects’ property rights and inheritance, and a reduction in royal profligacy and unnecessary expenditure, especially in the household.[63] Moreover, Aristotelian political thought, as expressed in Thomas Aquinas’s and Ptolemy of Lucca’s “mirrors”, had held for some time that, should the king not rule in this economical way, he fulfilled ‘the criterion of tyrannical behaviour’, and could be deposed.[64] In several ways, therefore, ‘financial rectitude was the paradigm of good kingship’.[65] In passing judgement on Richard, the Lancastrians could not avoid commenting on his fiscal misconduct, which conveniently gave them the conceptual scope to depose him.

The Record sets out Richard’s financial misconduct quite clearly. Despite the fact that Richard could ‘live honestly from the proceeds of his realm’ and ‘the patrimony pertaining to his crown’, he ‘imposed so many burdens of grants [taxation] on his subjects … almost every year’, resulting in ‘the impoverishment of his realm’.[66] He also ‘cunningly’ deceived his people ‘to acquire their goods for himself’ through the infamous “blank charters”, which were blank documents given over to royal agents, who could fill them in as they saw fit to obtain additional revenue for the crown, and which raised ‘great sums of money’ as a result.[67] Indeed, the blank charters raised as much as £30,000, not an insignificant sum, and one which, when seen to have been used inappropriately, was bound to raise criticism.[68] Richard is further condemned for still finding it necessary to also raise loans from ‘a great number of lords and others’, and, importantly, ‘not [fulfilling] this promise of his’ to repay them.[69]

Given that Richard did not pay the money raised, the charge, as Caroline Barron asserts, is ‘completely substantiated’.[70] It is unsurprising, therefore, to find that Walsingham highlights how ‘the king was very rich’, undoubtedly a result of his intentional ‘measures to impoverish all the rich and poor’ to ‘amass riches’ for himself.[71] While the extent of Richard’s actual wealth can be brought into question, it is the idea that he had such ill-gotten wealth which provided the grounds for such moralistic condemnation. To drive the point home, the Record states that this ‘superfluous wealth’ was spent for the sole purpose of the ‘ostentation and pomp and vainglory of his name’.[72] Richard’s avarice, profligacy, and his misappropriation of public funds are thus laid bare. Beyond the Record, the anonymous, anti-Ricardian author of the continuation of the Eulogium Historiarum, who, Antonia Gransden suggests, was a Franciscan friar from Canterbury, provides an account of Richard’s ‘extravagance’ and ‘life of debauchery’ which is uniquely descriptive.[73] Apparently, ‘no one in [the Books of] Kings was more glorious’ than Richard, who, seeking to ‘outdo all his predecessors in riches and to rival the glory of Solomon’, ‘accumulated inordinately’ vast wealth and ostentatious symbols of it: ‘treasures and jewels … kingly robes and adornments … the splendour of his table … the palaces that he built’.[74] In light of where the money for this was perceived to have come from, such conspicuous self-aggrandisement was not in line with the expectations of the monarch. The only conclusion that the unwitting reader could reach, and one which the propaganda intended to encourage, was that Richard’s financial conduct was simply incompatible with what was expected of a king. He was categorically not ruling in the material interests of his subjects, nor was he applying the material wealth of the realm towards proper ends.

As a component of kingship, financial conduct was also important to the rebels of 1403, and we find a similar condemnation of Henry through this discourse, which was further exacerbated by its interweaving with the act of perjury. The rebels claimed that Henry swore, as part of the Doncaster oath, that while he lived, he ‘would not permit any tenths … or fifteenths … or any other tallages’ to be levied without parliamentary agreement.[75] The fact that he had apparently done so in the interim thus rendered him ‘perjured and false’, a claim compounded by the accusation that he had requested them ‘under threat from [his] royal majesty’ in suspiciously Ricardian fashion.[76] This is notwithstanding the fact that Henry’s promise of parliamentary assent for taxation was essentially an affirmation of existing practice. Of importance here, though, is that Henry was seemingly providing a blueprint, defined in opposition to Richard’s, for his own fiscal governance, against which he is, in 1403, clearly being judged. Henry’s wider financial conduct was also brought into question by the Eulogium, which states that, prior to the battle at Shrewsbury, Henry was rebuked by Hotspur, who said: ‘you rule worse than [Richard] did … you despoil the kingdom, yet you always say you have nothing … you never make payments, [and] you do not maintain your household’.[77] These accusations—or at least, the ideas which the Eulogium intimates as relevant in 1403—are akin to those voiced in 1399 to condemn Richard’s financial misconduct as king, and are fundamentally based on the same conceptual framework.

Henry’s general promise to ‘live of his own’ in 1399 meant that his subsequent financial demands were invariably portrayed as a ‘breach of the king’s faith’, and this is also apparent in 1405.[78] The third article of Scrope’s manifesto claims that Henry had promised a general freedom from tithes and certain taxes, declaring that clerical tenths (‘decimationes ecclesiasticas in clero’), lay fifteenths (‘quintam-decimam in populo’), and certain indirect taxes on cloth and wine (‘panni … et vini’), would either be left unrequested, or reduced.[79] Despite this, the manifesto continues, Henry had continued to demand money from the realm, and article nine highlights the great damage and financial extortion which the country was thus subject to, such that, it claimed, there was now no money at all.[80] Henry was clearly not living of his own. Indeed, Douglas Biggs has highlighted that, considering the substantial taxation between 1401 and 1405, this issue ‘reflected no small amount of political reality’.[81] One of the rebels’ aims, therefore, was to liberate the realm from these ‘exactione, extortione, et injusta solutione (exactions, extortions, and unjust payments)’.[82] Even in Walsingham’s version, the first article admits that a series of ‘insupportable burdens’ had been placed on the clergy, and his third similarly posits ‘extortionate and oppressive demands’ made on the lay members of society.[83] The Eulogium, too, recounts that Scrope referred to the ‘excessive levies of tolls and customs’, and the ‘unendurable taxes’ levied on the ‘clergy and people’, but does so from a more directly challenging perspective, without the scepticism inherent in Walsingham.[84] Clearly, Henry’s financial conduct, or misconduct, as king was highly topical in 1405, and could not be refuted regardless of the sympathies of the various authors.
In such a conducive ideological context, it was only logical that, across all the instances of rebellion under investigation, this issue could be made into a vehicle for de-legitimation and rebel justification.


The Quality of Royal Counsel

While the king had public financial responsibilities, this was an age of ‘overwhelmingly personal kingship’, so other aspects of royal conduct, and how they reflected upon the personal moral calibre of the king, naturally influenced his perceived legitimacy.[85] Because rulership was seen as an ‘ethical act’, the moral quality of those advising the king in his household, his council, or elsewhere was of vital importance, as they might influence his character, and his suitability for kingship.[86] Indeed, in moments of political crisis, the royal household was never ‘far from the centre of the stage’, and those who complained about its size and extravagance in these years indeed ‘had good reason to do so’.[87] It was also a well-established belief in the “mirrors” that, as John of Salisbury’s Policraticus exemplifies, it was vital that a king should ‘act on the counsel of wise men’, without favouritism.[88] Taken together, the increasing distinction between the king’s two bodies (that ‘dichotomous concept of rulership’), an emphasis on the king’s personal attributes and the role of royal counsel in determining these qualities, and the popular beliefs in the “mirrors”, meant that the personality of the king, and how this was indicated by the perceived quality of his counsellors, was a ‘natural and proper target’ for those questioning monarchical legitimacy.[89]

Royal favourites were seen as particularly problematic insofar as they could ‘dominate the king’s person and manipulate his prerogative’.[90] A susceptibility to this was associated with a lack of legitimacy as king, as had been the case in the preceding reigns of John, Henry III, and especially Edward II, a natural and logical connection for contemporaries to make. Richard, in keeping his favourites, like John and Thomas Holland, as his principal counsellors, ignored the advice of his ‘natural counsellors’—his wider nobility, who were themselves ‘answerable for the good governance of the realm’—choosing instead, we can assume, those upstarts or the duketti he so favoured.[91] Indeed, the belief that the nobility should naturally surround the king was similarly well-established. Giles of Rome’s De Regimine Principum, written in the late thirteenth century, emphasises that the wisest counsel of all came from the prince’s nobility.[92] The Record incorporates this idea by highlighting how Richard had ‘frequently rebuked and reprimanded’ his faithful counsellors—the nobility and justices—in great councils, such that they ‘did not dare to speak the truth’.[93] Moreover, the unsavoury qualities of youth, such as impulsivity, ignorance, and vanity, were also associated with Richard’s counsellors, exacerbating their malign influence. Usk makes it quite clear that Richard had ‘callow counsellors’, and, like ‘Rehoboam, the son of Solomon’, followed ‘the counsel of youths’.[94] While Richard could not lose the Kingdom of Israel by following such poor-quality counsel, he certainly did lose the Kingdom of England.

Royal favouritism was also associated with a parasitic drain on royal finances. As highlighted above, in the Record and Process, the first article against Richard accuses him of granting possessions to ‘unworthy persons’—in other words, his favourites, some of whom had been granted the confiscated lands of the Lords Appellant, notably those of the Earl of Arundel and the Duke of Gloucester, after Richard had them killed in 1397.[95] To compromise royal finances in this quite unprofitable way for the benefit of favourites was, naturally, to throw into doubt the king’s wisdom—the ‘root of all kingly rule’—and framing it in these terms gave the charge a deeper resonance with the landed classes, many of whom had naturally lost out from Richard’s narrow patronage.[96] A lack of wisdom, it is implied, removes legitimacy from a king.

The issue of royal counsel naturally resurfaced in 1403. The Middle English continuation of the Brut, and the Eulogium from which it drew heavily for this period, posit a conversation between Northumberland and Henry. Here, Northumberland said that Henry had ‘made promys forto be rewlid be our counsel’, yet despite receiving ‘great sums every year’, Henry ‘[has] nothing’, and ‘[pays] for nothing’, because he was not taking wise counsel from one of his natural counsellors.[97] The tendency among pro-Lancastrian chroniclers to dismiss, or at least minimise, the historical accusations and claims made by the rebels in 1403, and to frame the rebellion through these ideological discourses, is most clear in Walsingham’s works. From the start, he points out that the letters sent out by the Percies contained pure fabrications, made solely ‘to excuse their conspiracy’.[98] Nevertheless, what he does say about these letters is revealing. The letters presented the rebels’ aims in fairly simple terms: they sought to ‘correct misrule in the state’, and to ‘establish wise counsellors’.[99] The equally favourable Annales similarly records that the rebels sought ‘the reform of public administration, and the appointment of wise councillors’.[100] Given the Annales’ subsequent references to the rents, taxes, and tallages received by the king ‘pro salva regni custodia’ (meaning, essentially, for the defence of the realm) having been ‘atque consumpta (improperly wasted and consumed)’, we can assume that such misrule related to financial matters, or at least to the financial implications of political decisions, such as Henry’s choice of councillors.[101] To a degree, however, these were accusations that could be made against any medieval monarch, and so their inclusion is not necessarily a direct and specific challenge to Henry’s particular legitimacy.
What is interesting, however, is the acknowledgement that many magnates ‘praised the quick perception’ of the rebels, and ‘applauded [their] insolent behaviour’; to deny Henry’s shortcomings entirely would apparently be too much even for Walsingham.[102]

This tendency resurfaces in Walsingham’s account of Scrope’s rebellion. His articles refer to the ‘squandering of funds, namely expenses claimed for private individual advancement’, but he obscures who these individuals are, and does not bring Henry’s conduct into question.[103] The Annales, too, speaks of ‘uncontrolled extravagance’ as one of the rebels’ grievances, but does not associate this with Henry specifically.[104] But, when compared with the Eulogium and the Brut, which are less favourable, it becomes clear that these references could mean little other than the perceived self-aggrandisement of the royal household. The Eulogium makes it clear that those who were ‘enriching themselves’ at the expense of others were ‘the greedy and rapacious councillors who surround the king’.[105] The existence of ‘suche covetous men’ was, as suggested above, both a pragmatic financial problem and, in relation to the king, a moral one, such that the rebels of 1405 were eager to draw upon it to justify their dissent.[106]



Ideological discourses associated Richard with fixed and homogeneous assumptions, which were then used to evaluate his kingship and to discourage more flexible thinking about it.[107] Although the thirty-three charges in the Record were, pragmatically speaking, a ‘Lancastrian political manifesto’, they were also, in John Theilmann’s words, a ‘mid-range work of political theory’, and thus provided both a justification and a normative standard for Henry’s kingship.[108] In combination with the obvious financial implications of Richard’s failed kingship, the corollaries of Richard’s keeping of poor counsel and favourites, which included alienating his nobility, damaging his patrimony, and compromising his judicial impartiality, were inherently de-legitimising, amplified by the idea of perjury, and employed by the Lancastrian faction to justify their actions. While Richard’s failings as king had evidently led to the breakdown of the Ricardian status quo, it was beyond Henry’s ability to close the ‘Pandora’s box of disorder’ once it had been opened in this way.[109]



In 1399, Henry IV and his Lancastrian faction had ‘perpetrated an act of political unorthodoxy of truly monumental proportions’.[110] In this essay, the pivotal role played by the propaganda which sought to justify this unorthodoxy, and its consequent influence on the political opposition faced by Henry IV, has been laid bare. Rather than being constrained to 1399 by its own incredibility, the language of Lancastrian propaganda, characterised by its emphasis on monarchical legitimacy, actually functioned as a set of discoursal vehicles through which subsequent political opposition could articulate their grievances. Moreover, it encouraged contemporary and near-contemporary chroniclers across the Lancastrian-Ricardian spectrum to rationalise this opposition through similar discoursal frameworks. Henry’s legitimacy as king, especially vis-à-vis Richard, was, at least among contemporary political agents and historical commentators, the subject of discussion at the turn of the fifteenth century.

The discourses used by the regime provided outlets for rhetorical moralisations, social and political commentary, and for the expression of political dissent. Their value lay not so much in their factual plausibility as in the possibility that people could be persuaded that they were true. While it is true to state that ‘Henry had raised great expectations in 1399 and had disappointed them’, it is more informative to explain why those specific expectations were raised, what purpose this had served, why Henry was seen to have disappointed them, and how this apparent failure contributed to the political dynamic of these years.[111]

Caroline Barron wrote that, ‘Nearly six hundred years after Richard’s deposition, it is time, finally, to rid ourselves of the pervasive influence of the propaganda of the House of Lancaster’.[112] However, this essay has shown that we are not entirely ready to do so, nor entirely justified in doing so. There is much more to be said about Lancastrian propaganda, but only if we take it and apply it to the politics of Henry’s reign, rather than just to Richard’s. What emerges is a rather different perspective on this fractious reign, one in which ideas, concepts, and language have a much greater role than they have hitherto been assigned. Ultimately, language frames our social and political existence, and puts into identifiable shape both the abstract and the physical; but it can also be used to change this existence, to alter our perspective on reality, and the evidence of the early fifteenth century bears witness to this.



Primary Sources

Chronicle of Dieulacres Abbey, 1381-1403, in Clarke, M., and Galbraith, V. (eds. & trans.), ‘The Deposition of Richard II’, Bulletin of the John Rylands Library, 14/1 (1930), pp. 125-81.

Unknown Author, An English Chronicle of the Reigns of Richard II., Henry IV., Henry V., and Henry VI. Written Before the Year 1471, Davies, J. (ed.) (London, 1856).

Annales Ricardi Secundi et Henrici Quarti, Regum Angliæ, in Ellis, H. (ed.), Chronica Monasterii S. Albani, III. Johannis de Trokelowe et Henrici de Blaneforde, Monachorum S. Albani, Necnon Quorundam Anonymorum, Chronica et Annales, Regnantibus Henrico Tertio, Edwardo Primo, Edwardo Secundo, Ricardi Secundo, et Henrico Quarto (London, 1866), pp. 155-422.

John Hardyng, The Chronicle of Iohn Hardyng, Ellis, H. (ed. & trans.) (London, 1812).

3– Death of Richard II, Annales Henrici Quarti, in Flemming, J. (ed. & trans.), England Under the Lancastrians (London, 1921), p. 5.

10– Rebellion of the Percies, 1403, Annales Henrici Quarti, in Flemming, J. (ed. & trans.), England Under the Lancastrians (London, 1921), pp. 13-14.

16– Rebellion in the North, May, 1405, Annales Henrici Quarti, in Flemming, J. (ed. & trans.), England Under the Lancastrians (London, 1921), pp. 18-19.

11– Bolingbroke’s campaign and his meeting with Richard according to the monk of Evesham, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp. 126-131.

12– Two accounts of Bolingbroke’s progress through England, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp. 132-6.

13– The betrayal and capture of the king according to Jean Creton, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp. 137-52.

14– Two Cistercian accounts of the perjury of Henry Bolingbroke, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp.153-6.

16– The “Manner of King Richard’s Renunciation”, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp. 162-7.

18– The protest of the Bishop of Carlisle, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp. 190-1.

19– The protest of the Percies, in Given-Wilson, C. (ed. & trans.), Chronicles of the Revolution, 1397-1400: The Reign of Richard II (Manchester, 1993), pp. 192-7.

Unknown Author, Continuatio Eulogii: The Continuation of the Eulogium Historiarum, 1364-1413, Given-Wilson, C. (ed. & trans.) (Oxford, 2019).

Adam of Usk, The Chronicle of Adam Usk: 1377-1421, Given-Wilson, C. (ed. & trans.) (Oxford, 1997).

Parliament of October 1399, in Given-Wilson, C., P. Brand, R. E. Horrox, G. Martin, W. M. Ormrod, and J. R. S. Phillips (eds. & trans.), The Parliament Rolls of Medieval England (Online Version, Leicester, 2005).

Jean Froissart, Froissart’s Chronicles, Jolliffe, J. (ed. & trans.) (London, 1967).

Thomas Walsingham, The Chronica Maiora of Thomas Walsingham: 1376-1422, Preest, D. (trans.), and Clark, J. (ed.) (Woodbridge, 2005).

Unknown Author, I. Articuli venerabilis domini Richardi Scrope, archiepiscopi Eboracensis, contra Henricum Quartum, intrusorem regni Angliae, in Raine, J. (ed.), The Historians of the Church of York and Its Archbishops, Vol. II (London, 1886), pp. 292-304.

Thomas Walsingham, The St Albans Chronicle: The Chronica Maiora of Thomas Walsingham, Vol. II: 1394-1422, Taylor, J., Childs, W., and Watkiss, L. (eds. & trans.) (Oxford, 2011).



Secondary Sources

Barron, C., ‘The Deposition of Richard II’, in Carlin, M. and Rosenthal, J. (eds.), Medieval London: Collected Papers of Caroline M. Barron (Kalamazoo, 2017), pp. 83-96.

Barron, C., ‘The Tyranny of Richard II’, in Carlin, M. and Rosenthal, J. (eds.), Medieval London: Collected Papers of Caroline M. Barron (Kalamazoo, 2017), pp. 1-16.

Bevan, B., Henry IV (London, 1994).

Biggs, D., ‘Archbishop Scrope’s Manifesto of 1405: “naïve nonsense” or reflections of political reality?’, Journal of Medieval History, 33/4 (2007), pp. 358-71.

Born, L., ‘The Perfect Prince: A Study in Thirteenth- and Fourteenth-Century Ideals’, Speculum, 3/4 (1928), pp. 470-504.

Briggs, C., Giles of Rome’s De Regimine Principum: Reading and Writing Politics at Court and University, c. 1275 – c. 1525 (Cambridge, 1999).

Brown, L., ‘Continuity and Change in the Parliamentary Justifications of the Fifteenth-Century Usurpations’, in Clark, L. (ed.), The Fifteenth Century VII: Conflicts, Consequences and the Crown in the Late Middle Ages (Woodbridge, 2007), pp. 157-173.

Carpenter, C., The Wars of the Roses: Politics and the Constitution in England, c.1437-1509 (Cambridge, 1997).

Clarke, M. and Galbraith, V., ‘The Deposition of Richard II’, Bulletin of the John Rylands Library, 14/1 (1930), pp. 125-81.

Dahmus, J., ‘Thomas Arundel and the Baronial Party under Henry IV’, Albion, 16/2 (1984), pp. 131-49.

Dodd, G., ‘Conflict or Consensus: Henry IV and Parliament, 1399-1406’, in Thornton, T. (ed.), Social Attitudes and Political Structures in the Fifteenth Century (Stroud, 2000), pp. 118-149.

Dodd, G., ‘Kingship, Parliament and the Court: The Emergence of “High Style” in Petitions to the English Crown, c.1350-1405’, English Historical Review, 129/538 (2014), pp. 515-48.

Dunbabin, J., ‘Government’, in Burns, J. (ed.), The Cambridge History of Medieval Political Thought, c.350-c.1450 (Cambridge, 1991), pp. 477-519.

Given-Wilson, C., The Royal Household and the King’s Affinity: Service, Politics and Finance in England 1360-1413 (London, 1986).

Given-Wilson, C., ‘The Manner of King Richard’s Renunciation: A “Lancastrian Narrative”?’, English Historical Review, 108/427 (1993), pp. 365-70.

Given-Wilson, C., Henry IV (London, 2017).

Gransden, A. ‘Propaganda in English Medieval Historiography’, Journal of Medieval History, 1/4 (1975), pp. 363-81.

Gransden, A., Historical Writing in England II: c. 1307 to the Early Sixteenth Century (London, 1982).

Gross, A., ‘K. B. McFarlane and the Determinists: The Fallibilities of the English Kings, c. 1399-c.1520’, in Britnell, R. and Pollard, A. (eds.), The McFarlane Legacy: Studies in Late Medieval Politics and Society (Stroud, 1995), pp. 49-75.

Harriss, G., ‘Introduction: the Exemplar of Kingship’, in Harriss, G. (ed.), Henry V: The Practice of Kingship (Oxford, 1985), pp. 1-29.

Harriss, G., ‘Political Society and the Growth of Government in Late Medieval England’, Past & Present, 138 (1993), pp. 28-57.

Harriss, G., ‘The Court of the Lancastrian Kings’, in Stratford, J. (ed.), The Lancastrian Court (Donington, 2003), pp. 1-18.

Harriss, G., Shaping the Nation: England 1360-1461 (Oxford, 2005).

Kantorowicz, E., The King’s Two Bodies: A Study in Mediaeval Political Theology (Princeton, 1957).

Lapsley, G., ‘The Parliamentary Title of Henry IV’, English Historical Review, 49/195 (1934), pp. 423-49.

Lewis, K., Kingship and Masculinity in Late Medieval England (Abingdon, 2013).

McFarlane, K., Lancastrian Kings and Lollard Knights (Oxford, 1972).

McNiven, P., ‘The Betrayal of Archbishop Scrope’, Bulletin of the John Rylands Library, 54/1 (1971), pp. 173-213.

McNiven, P., Heresy and Politics in the Reign of Henry IV: The Burning of John Badby (Woodbridge, 1987).

Morgan, P., ‘Henry IV and the Shadow of Richard II’, in Archer, R. (ed.), Crown, Government and People in the Fifteenth Century (Stroud, 1995), pp. 1-31.

Nuttall, J., The Creation of Lancastrian Kingship: Literature, Language and Politics in Late Medieval England (Cambridge, 2007).

Ormrod, W., Political Life in Medieval England 1300-1450 (London, 1995).

Palmer, J., ‘The Authorship, Date and Historical Value of the French Chronicles on the Lancastrian Revolution: I’, Bulletin of the John Rylands Library, 61/1 (1978), pp. 145-81.

Powell, E., ‘The Restoration of Law and Order’, in Harriss, G. (ed.), Henry V: The Practice of Kingship (Oxford, 1985), pp. 53-74.

Powell, E., ‘Lancastrian England’, in Allmand, C. (ed.), The New Cambridge Medieval History, Volume 7: c.1415-c.1500 (Cambridge, 1998), pp. 457-76.

Sayles, G., ‘The Deposition of Richard II: Three Lancastrian Narratives’, Bulletin of the Institute of Historical Research, 54/130 (1981), pp. 257-70.

Sherborne, J., ‘Perjury and the Lancastrian Revolution of 1399’, Welsh History Review, 14/1 (1988), pp. 217-41.

Stubbs, W., The Constitutional History of England: In Its Origin and Development, Vol. II (2nd ed., Oxford, 1877).

Taylor, J., English Historical Literature in the Fourteenth Century (Oxford, 1987).

Theilmann, J., ‘Caught between Political Theory and Political Practice: “The Record and Process of the Deposition of Richard II”’, History of Political Thought, 25/4 (2004), pp. 599-619.

Tuck, A., ‘Henry IV and Europe: A Dynasty’s Search for Recognition’, in Britnell, R. and Pollard, A. (eds.), The McFarlane Legacy: Studies in Late Medieval Politics and Society (Stroud, 1995), pp. 107-126.

Walker, S., ‘Rumour, Sedition and Popular Protest in the Reign of Henry IV’, Past & Present, 166 (2000), pp. 31-65.

Watts, J., Henry VI and the Politics of Kingship (Cambridge, 1996).

Wilkinson, B., ‘The Deposition of Richard II and the Accession of Henry IV’, English Historical Review, 54/214 (1939), pp. 215-39.

Wylie, J., History of England under Henry the Fourth, Vol. I (London, 1884).




[1] B. Bevan, Henry IV (London, 1994), p. 70; G. Harriss, Shaping the Nation: England 1360–1461 (Oxford, 2005), p. 498; E. Powell, ‘The Restoration of Law and Order’, in G. Harriss (ed.), Henry V: The Practice of Kingship (Oxford, 1985), p. 54.

[2] G. Harriss, ‘The Court of the Lancastrian Kings’, in J. Stratford (ed.), The Lancastrian Court (Donington, 2003), p. 12.

[3] A. Gransden, ‘Propaganda in English Medieval Historiography’, Journal of Medieval History, 1/4 (1975), p. 363.

[4] J. Wylie, History of England under Henry the Fourth, Vol. I (London, 1884), pp. 14–15; W. Stubbs, The Constitutional History of England: In Its Origin and Development, Vol. II (2nd edn., Oxford, 1877), pp. 502–8; G. Sayles, ‘The Deposition of Richard II: Three Lancastrian Narratives’, Bulletin of the Institute of Historical Research, 54/130 (1981), p. 257.

[5] M. Clarke, and V. Galbraith, ‘The Deposition of Richard II’, Bulletin of the John Rylands Library, 14/1 (1930), p. 155.

[6] J. Nuttall, The Creation of Lancastrian Kingship: Literature, Language and Politics in Late Medieval England (Cambridge, 2007), pp. 1, 127.

[7] G. Dodd, ‘Kingship, Parliament and the Court: The Emergence of “High Style” in Petitions to the English Crown, c.1350–1405’, English Historical Review, 129/538 (2014), p. 515.

[8] E. Powell, ‘Lancastrian England’, in C. Allmand (ed.), The New Cambridge Medieval History, Volume 7: c.1415–c.1500 (Cambridge, 1998), p. 459; Bevan, Henry IV, p. 74.

[9] P. Morgan, ‘Henry IV and the Shadow of Richard II’, in R. Archer (ed.), Crown, Government and People in the Fifteenth Century (Stroud, 1995), p. 24.

[10] A. Gransden, Historical Writing in England II: c. 1307 to the Early Sixteenth Century (London, 1982), p. 186.

[11] Clarke and Galbraith, ‘Deposition’, pp. 137, 142.

[12] B. Wilkinson, ‘The Deposition of Richard II and the Accession of Henry IV’, English Historical Review, 54/214 (1939), p. 238; G. Lapsley, ‘The Parliamentary Title of Henry IV’, English Historical Review, 49/195 (1934), p. 429.

[13] Parliament of October 1399, in C. Given-Wilson, P. Brand, R. E. Horrox, G. Martin, W. M. Ormrod, and J. R. S. Phillips (eds. & trans.), The Parliament Rolls of Medieval England (Online Version, Leicester, 2005), item 11.

[14] Lapsley, ‘Parliamentary Title’, p. 433; Thomas Walsingham, The Chronica Maiora of Thomas Walsingham, D. Preest (trans.) and J. Clark (ed.) (Woodbridge, 2005), p. 309.

[15] Thomas Walsingham, The St Albans Chronicle: The Chronica Maiora of Thomas Walsingham, Vol. II: 1394–1422, J. Taylor, W. Childs, and L. Watkiss (eds. & trans.) (Oxford, 2011), p. 163; Parliament of October 1399, item 13.

[16] L. Brown, ‘Continuity and Change in the Parliamentary Justifications of the Fifteenth-Century Usurpations’, in L. Clark (ed.), The Fifteenth Century VII: Conflicts, Consequences and the Crown in the Late Middle Ages (Woodbridge, 2007), p. 162.

[17] Adam of Usk, The Chronicle of Adam Usk: 1377–1421, C. Given-Wilson (ed. & trans.) (Oxford, 1997), p. 69; Parliament of October 1399, item 54.

[18] Lapsley, ‘Parliamentary Title’, p. 433.

[19] 12– Two accounts of Bolingbroke’s progress through England, in C. Given-Wilson (ed. & trans.), Chronicles of the Revolution, 1397–1400: The Reign of Richard II (Manchester, 1993), pp. 134–5; 11– Bolingbroke’s campaign and his meeting with Richard according to the monk of Evesham, in Chronicles of the Revolution, p. 130.

[20] J. Taylor, English Historical Literature in the Fourteenth Century (Oxford, 1987), p. 194.

[21] Chronicles of the Revolution, p. 10.

[22] 14– Two Cistercian accounts of the perjury of Henry Bolingbroke, in Chronicles of the Revolution, p. 155.

[23] Adam of Usk, Chronicle, p. 59.

[24] Clarke and Galbraith, ‘Deposition’, p. 144; Two Cistercian accounts of the perjury of Henry Bolingbroke, Chronicles of the Revolution, p. 156.

[25] Sayles, ‘Three Lancastrian Narratives’, pp. 259–60; C. Given-Wilson, ‘The Manner of King Richard’s Renunciation: A “Lancastrian Narrative”?’, English Historical Review, 108/427 (1993), p. 369.

[26] 16– The “Manner of King Richard’s Renunciation”, in Chronicles of the Revolution, p. 163.

[27] A. Tuck, ‘Henry IV and Europe: A Dynasty’s Search for Recognition’, in R. Britnell and A. Pollard (eds.), The McFarlane Legacy: Studies in Late Medieval Politics and Society (Stroud, 1995), p. 107; J. Palmer, ‘The Authorship, Date and Historical Value of the French Chronicles on the Lancastrian Revolution: I’, Bulletin of the John Rylands Library, 61/1 (1978), p. 145.

[28] Chronicles of the Revolution, p. 7.

[29] 13– The betrayal and capture of the king according to Jean Creton, in Chronicles of the Revolution, p. 147.

[30] Jean Froissart, Froissart’s Chronicles, J. Jolliffe (ed. & trans.) (London, 1967), p. 409.

[31] Jean Froissart, Chronicles, p. 413.

[32] 18– The protest of the Bishop of Carlisle, in Chronicles of the Revolution, p. 191.

[33] John Hardyng, The Chronicle of Iohn Hardyng, H. Ellis (ed. & trans.) (London, 1812), pp. 351–4.

[34] Bevan, Henry IV, p. 75.

[35] John Hardyng, Chronicle, p. 351.

[36] Lapsley, ‘Parliamentary Title’, p. 440.

[37] 19– The Protest of the Percies, in Chronicles of the Revolution, p. 194.

[38] John Hardyng, Chronicle, p. 353.

[39] The Protest of the Percies, Chronicles of the Revolution, p. 194.

[40] John Hardyng, Chronicle, p. 350; The Protest of the Percies, Chronicles of the Revolution, p. 194.

[41] The Protest of the Percies, Chronicles of the Revolution, p. 195.

[42] Unknown Author, Chronicle of Dieulacres Abbey, 1381–1403, in M. Clarke and V. Galbraith (eds. & trans.), ‘The Deposition of Richard II’, Bulletin of the John Rylands Library, 14/1 (1930), p. 179.

[43] J. Sherborne, ‘Perjury and the Lancastrian Revolution of 1399’, Welsh History Review, 14/1 (1988), p. 218.

[44] J. Dahmus, ‘Thomas Arundel and the Baronial Party under Henry IV’, Albion, 16/2 (1984), p. 138.

[45] Harriss, Shaping the Nation, p. 497.

[46] P. McNiven, ‘The Betrayal of Archbishop Scrope’, Bulletin of the John Rylands Library, 54/1 (1971), p. 185.

[47] D. Biggs, ‘Archbishop Scrope’s Manifesto of 1405: “naïve nonsense” or reflections of political reality?’, Journal of Medieval History, 33/4 (2007), p. 358.

[48] C. Given-Wilson, Henry IV (London, 2017), p. 274; Thomas Walsingham, St Albans Chronicle, p. 445.

[49] Unknown Author, I. Articuli venerabilis domini Richardi Scrope, archiepiscopi Eboracensis, contra Henricum Quartum, intrusorem regni Angliae, in J. Raine (ed.), The Historians of the Church of York and Its Archbishops, Vol. II (London, 1886), pp. 292–304.

[50] Articuli contra Henricum Quartum, p. 294. All translations are the author’s, unless otherwise stated.

[51] Articuli contra Henricum Quartum, p. 297.

[52] Articuli contra Henricum Quartum, p. 298.

[53] 3– Death of Richard II, in J. Flemming (ed. & trans.), England under the Lancastrians (London, 1921), p. 5.

[54] Articuli contra Henricum Quartum, p. 295.

[55] W. Ormrod, Political Life in Medieval England 1300–1450 (London, 1995), p. 61.

[56] J. Watts, Henry VI and the Politics of Kingship (Cambridge, 1996), p. 21; Brown, ‘Continuity and Change’, p. 158; J. Dunbabin, ‘Government’, in J. Burns (ed.), The Cambridge History of Medieval Political Thought, c.350–c.1450 (Cambridge, 1991), p. 483.

[57] K. Lewis, Kingship and Masculinity in Late Medieval England (Abingdon, 2013), p. 17.

[58] J. Theilmann, ‘Caught between Political Theory and Political Practice: “The Record and Process of the Deposition of Richard II”’, History of Political Thought, 25/4 (2004), p. 606.

[59] Watts, Henry VI, p. 14.

[60] Parliament of October 1399, item 18.

[61] Parliament of October 1399, item 16.

[62] Parliament of October 1399, item 26.

[63] Ormrod, Political Life, pp. 64–5; Theilmann, ‘Record and Process’, p. 604.

[64] C. Barron, ‘The Tyranny of Richard II’, in M. Carlin and J. Rosenthal (eds.), Medieval London: Collected Papers of Caroline M. Barron (Kalamazoo, 2017), p. 3.

[65] G. Harriss, ‘Introduction: The Exemplar of Kingship’, in G. Harriss (ed.), Henry V: The Practice of Kingship (Oxford, 1985), p. 15.

[66] Parliament of October 1399, item 32.

[67] Parliament of October 1399, item 38.

[68] Barron, ‘Tyranny’, p. 15.

[69] Parliament of October 1399, item 31.

[70] Barron, ‘Tyranny’, p. 7.

[71] Thomas Walsingham, St Albans Chronicle, p. 145.

[72] Parliament of October 1399, items 38, 32.

[73] Gransden, Historical Writing, p. 158; Unknown Author, Continuatio Eulogii: The Continuation of the Eulogium Historiarum, C. Given-Wilson (ed. & trans.) (Oxford, 2019), p. 91.

[74] Unknown Author, Eulogium, p. 95.

[75] The Protest of the Percies, Chronicles of the Revolution, p. 195.

[76] The Protest of the Percies, Chronicles of the Revolution, p. 195.

[77] Unknown Author, Eulogium, p. 117.

[78] S. Walker, ‘Rumour, Sedition and Popular Protest in the Reign of Henry IV’, Past & Present, 166 (2000), p. 50.

[79] Articuli contra Henricum Quartum, p. 296.

[80] Articuli contra Henricum Quartum, pp. 302–3.

[81] Biggs, ‘Scrope’s Manifesto’, p. 364.

[82] Articuli contra Henricum Quartum, p. 304.

[83] Thomas Walsingham, St Albans Chronicle, p. 445.

[84] Unknown Author, Eulogium, p. 133.

[85] L. Born, ‘The Perfect Prince: A Study in Thirteenth- and Fourteenth-Century Ideals’, Speculum, 3/4 (1928), p. 504.

[86] Dunbabin, ‘Government’, p. 483.

[87] C. Given-Wilson, The Royal Household and the King’s Affinity: Service, Politics and Finance in England 1360–1413 (London, 1986), pp. 23, 41.

[88] Born, ‘Perfect Prince’, p. 473.

[89] E. Kantorowicz, The King’s Two Bodies: A Study in Mediaeval Political Theology (Princeton, 1957), p. 497; C. Carpenter, The Wars of the Roses: Politics and the Constitution in England, c.1437–1509 (Cambridge, 1997), p. 40.

[90] Lewis, Kingship, p. 32.

[91] The term duketti refers to a small number of magnates who were close to Richard, and who were granted newly-minted dukedoms. For example, Thomas de Mowbray, earl of Nottingham, was promoted to Duke of Norfolk. These were seen by the wider, more established nobility with some distaste. The duketti were, in a sense, an unpopular nouveau riche; G. Harriss, ‘Political Society and the Growth of Government in Late Medieval England’, Past & Present, 138 (1993), pp. 33, 38.

[92] C. Briggs, Giles of Rome’s De Regimine Principum: Reading and Writing Politics at Court and University, c. 1275–c. 1525 (Cambridge, 1999), p. 61.

[93] Parliament of October 1399, item 40.

[94] Adam of Usk, Chronicle, p. 77.

[95] Parliament of October 1399, item 18.

[96] Harriss, ‘Introduction’, p. 13.

[97] Unknown Author, An English Chronicle of the Reigns of Richard II., Henry IV., Henry V., and Henry VI. Written Before the Year 1471, J. Davies (ed. & trans.) (London, 1856), p. 27; Unknown Author, Eulogium, p. 115.

[98] Thomas Walsingham, St Albans Chronicle, p. 359.

[99] Thomas Walsingham, St Albans Chronicle, p. 359.

[100] 10– Rebellion of the Percies, 1403, in England under the Lancastrians, p. 13.

[101] Unknown Author, Annales Ricardi Secundi et Henrici Quarti, Regum Angliæ, in H. Ellis (ed.), Chronica Monasterii S. Albani, III. Johannis de Trokelowe et Henrici de Blaneforde, Monachorum S. Albani, Necnon Quorundam Anonymorum, Chronica et Annales, Regnantibus Henrico Tertio, Edwardo Primo, Edwardo Secundo, Ricardi Secundo, et Henrico Quarto (London, 1866), p. 362.

[102] Thomas Walsingham, Chronica Maiora, p. 326; Thomas Walsingham, St Albans Chronicle, p. 361.

[103] Thomas Walsingham, St Albans Chronicle, p. 443.

[104] 16– Rebellion in the North, 1405, in England under the Lancastrians, p. 19.

[105] Unknown Author, Eulogium, p. 133.

[106] Unknown Author, An English Chronicle, p. 31.

[107] Nuttall, Lancastrian Kingship, p. 10.

[108] G. Dodd, ‘Conflict or Consensus: Henry IV and Parliament, 1399–1406’, in T. Thornton (ed.), Social Attitudes and Political Structures in the Fifteenth Century (Stroud, 2000), p. 135; Theilmann, ‘Record and Process’, p. 617.

[109] A. Gross, ‘K. B. McFarlane and the Determinists: The Fallibilities of the English Kings, c. 1399–c.1520’, in R. Britnell and A. Pollard (eds.), The McFarlane Legacy: Studies in Late Medieval Politics and Society (Stroud, 1995), p. 52.

[110] P. McNiven, Heresy and Politics in the Reign of Henry IV: The Burning of John Badby (Woodbridge, 1987), p. 69.

[111] K. McFarlane, Lancastrian Kings and Lollard Knights (Oxford, 1972), p. 78.

[112] C. Barron, ‘The Deposition of Richard II’, p. 96.

Katie Barclay, Caritas: Neighbourly Love and the Early Modern Self (2021)



In this article, Lucy Morgan reviews Caritas: Neighbourly Love and the Early Modern Self by Katie Barclay, published in hardback and e-book in January 2021. This book explores the Christian concept of caritas as an expression of neighbourly love and how it was experienced by lower-order Scottish people from 1660 to 1830. Barclay uses legal depositions and correspondence to examine the emotional and bodily aspects of caritas, positing that in a loving community, marital relationships were the ideal upon which all other social relationships were based. The author goes on to discuss how children were raised into the beliefs of caritas, what happened to those who rejected caritas’s principles, and how itinerant individuals who lived outside of the normal boundaries of society still had a role within the loving community.

Keywords: Emotion, early modern, history, community, Christianity

Biography: Lucy Morgan is a first-year PhD student at the University of Sheffield. She is interested in the relationship between manhood and paternity in early modern England.

Katie Barclay’s book, Caritas: Neighbourly Love and the Early Modern Self, is the most recent instalment in Oxford University Press’s ‘Emotions in History’ series. Barclay describes caritas as an aspirational form of Christian neighbourly love, practiced by Catholics and Protestants across early modern Europe, with the purpose of creating loving communities of neighbours who supported and relied on each other. This book focuses on the lives of ‘lower-order’ Scots from 1660 to 1830, examining 2,000 cases from the Scottish Justiciary Court as well as sampling some surviving correspondence, providing a wide temporal and geographic overview of caritas throughout this period.[1] Barclay aims to understand ‘how individuals enculturated [caritas], how they performed it, negotiated it, and occasionally rejected it’ within their everyday lives, through a study of ‘behaviour, gesture, material culture practices, and ritual’.[2] Caritas, literally translating into English as ‘charity’, was a force which went beyond the Ten Commandments’ dictates to not covet your neighbour’s house or wife. Barclay positions caritas as the opposite of lust; instead of selfish, individualistic, sinful love, caritas was a selfless, ethical, moral love which encouraged individuals to connect with others.[3]

Scotland provides an excellent setting for a study of this type; during this period, it was still dominated by the village and small-town structure where close-knit communities determined the social life and economic success of an area. Barclay also notes, however, that geographic mobility of Scottish people was rapidly increasing at this time, resulting in an itinerant population who do not necessarily fit into historians’ perceptions of community living. The legal sources, mostly depositions, used by Barclay are invaluable to a study of emotions—they provide a great deal of first-person information about how lower-order people felt about the behaviour of their neighbours. The lives of lower-order people at this time blurred the boundaries between household and community—they were far more likely to live in single rooms or as multiple families to a house in comparison to their higher-order counterparts—and as a result of that forced closeness, their experience of caritas was physical as well as emotional. Caritas was expressed through virtue and grace, and therefore encompassed neighbourly love through non-action—for example, not starting a fight or not reporting bad behaviour—as well as expressions of the traditionally Catholic concept of “good works” like feeding the poor. Where cynical Protestants might interpret expressions of “good works” as selfish ploys only done to get to heaven, the emphasis on charity within caritas made “good works” acceptable within the Kirk and therefore central to the loving community.[4]

Barclay draws on a rich historiography of neighbourliness and familial love, and is influenced by more recent feminist philosophical works on the place of love in modern society, such as Adam Phillips and Barbara Taylor’s On Kindness. Her approach rejects family nuclearisation models, instead suggesting that lower-order Scots retained extended horizontal and vertical familial-neighbourly networks throughout the period of her study. This book broaches a gap in the field of emotional history, providing a link between individual and communal emotional experiences in the past. She employs William Reddy’s concept of the ‘emotional regime’, where certain emotions can be studied as “dominant norms” within a society, alongside Monique Scheer’s concept of ‘emotional embodiment’, where emotions are “practiced” through expression and reciprocation between individuals.[5] This reinforces the idea that loving communities were sustained through both nature and culture. Barclay’s work uses what she describes as the ‘new history of emotions’ to prove that there was a “self” in the early modern period, arguing that caritas existed to relate “the self” to “the other” through individual and shared expressions of neighbourly love.[6]

The first two chapters of Barclay’s book deal with the induction into caritas and the education of children, both at home and through theological teaching. This provides an excellent entry point for readers who are unfamiliar with the concept of caritas, although these chapters could have benefitted from a further explanation of the etymology of the word. Barclay explains the meaning and translation of caritas in the introduction, but how that specific word was chosen remains uncertain throughout the text. This is most noticeable in these chapters, where Kirk and legal books are quoted extensively but none explicitly mention caritas (although charity and neighbours are mentioned often). It may be that this is due to a translation of these books from Latin into English, but a clarification of whether this term was ever used in an early modern Scottish context would have been interesting, if not beneficial, to the rest of the work. The neighbourly love advocated for by caritas was bodily and intimate, reinforced through spoken language as well as physical closeness in eating, working, and living together. As such, Barclay centres marriage and its ideals of reciprocal love and support as the foundation of caritas. This is firstly evidenced through a study of the communal celebration of marriage post-ceremony, with Barclay noting that wedding rituals retained complexity even after the introduction of banns-reading simplified the marriage sermon.
Barclay then moves into an analysis of depositions relating to marriage breakdown, showing how members of the wider community were invited to provide testimony on the state of their neighbours’ marriages, indicating that all members of a loving community had a strong understanding of the role of marriage, including the ‘increase of mankind’ and the prevention of ‘uncleanness’ (sexual immorality).[7] This in turn influenced all other relationships, such as parent-child, employer-employee, and neighbour-neighbour, encouraging both moral policing and forgiveness of moral infringements to maintain a peaceful equilibrium within the loving community. Childhood is used to examine how the uneven distribution of caritas was accepted within society. Privileging certain groups over others was acceptable, as long as any disparities reflected the ordered social hierarchy, such as the prioritisation of the education of sons over daughters. While Barclay notes that Enlightenment ideals about an ‘expectation of love’ for all children, including affectionate treatment, were present in Scotland by the eighteenth century, this is only examined in the case of parents caring for versus neglecting their own children.[8] It would be interesting if further study reversed this perspective and expanded upon the child’s place within the loving community during marriage breakdown or parental death. Similarly, future work could explore whether or not caritas played a role when adults cared for children other than their own.

The third and fourth chapters examine the reception of immoral actions under caritas: their practice, discovery, and reformation. Barclay navigates the existing historiography on the top-down or bottom-up nature of early modern discipline, drawing on Lyndal Roper’s Holy Household and Martin Ingram’s Carnal Knowledge, pointing out that while the Calvinistic Kirk believed that all people were born with original sin, for many lower-order Scots, irregular marriage (marriage without an official Kirk ceremony, legal throughout the period) and premarital sex were ‘disorderly but not immoral’.[9] By pointing out that for many lower-order Scots, their home would often be just a single room, likely with one or more shared walls, Barclay suggests that neighbours were probably constantly aware of each other’s actions. These homes were shared and porous, and Barclay implies that there was no conception of public and private space for the people in her study. Tolerance of others therefore became a crucial part of caritas. Intermittent bad behaviour was permissible, but not prolonged threats to the social order. As such, while the keeping and telling of secrets was technically immoral, it also became a ‘central mechanism’ of caritas ‘through which peace and harmony’ was enabled.[10] Evoking Amanda Vickery’s Behind Closed Doors, Barclay describes the social rituals of making or revealing secrets as seen in legal contexts. In cases of violent altercations or elopements, the identification of overheard voices was critically important. Wordless sounds of fighting or crying were equally legally pertinent—the victim and the aggressor could be determined by who was louder or seemed more upset. Similarly, materiality obstructing sight or sound was crucial—the act of closing doors was almost always suggestive of wrongdoing.
This approach is innovative and could be adopted in other studies of the early modern home and family, where scholars are often hampered by a lack of evidence around certain practices or behaviours. At many points in this book, the absence of evidence is itself significant; for example, Barclay shows that irregular marriages were often not disclosed to the community until it became necessary for a woman to prove that she was married, usually because she was visibly pregnant.[11] The “husband” might then come forward and claim such a marriage had never happened, resulting in the situation ballooning into a legal case where the caritas and the reputation of many members of the loving community would be affected.

The strongest chapter of Barclay’s book is the last, titled ‘Living Outside of Love’. Even by the end of the eighteenth century, Scotland remained mostly rural and non-industrial, meaning that vagrancy and itinerant work were still highly visible facets of communal living. Although these people did not live within the established boundaries of a local community, Barclay locates them within caritas, which gave them a place in the loving community, evidencing not only a ‘pragmatic’ approach to community but also a ‘comradery’ between those who lived on the road.[12] This chapter deftly unites the historian’s usually disparate understandings of work, vagrancy and communal living, showing how caritas encouraged communities to permit begging as a form of Christian charity, but also showing its limits and how it was possible to take advantage of the selflessness of caritas by accepting hospitality without giving back to the community. Barclay also discusses the ramifications of banishment as a punishment for extreme infringements of caritas, usually in cases of murder or infanticide, indicating that deliberate exclusion from a community had significant local and personal impact. Although banishments were not usually permanent, they nevertheless indicate that loneliness and social exclusion were seen as the obvious repercussions for communal non-conformity, leading Barclay to conclude that ‘attachment to land and place was critical to the imagining of the social order’.[13]

By giving caritas a position of centrality within early modern understandings of community, Barclay is able to show how it informed a wide range of individual, often contradictory, choices. Barclay’s approach to emotional ethics allows for a more nuanced approach to bodily and emotional experiences which are lost in more prescriptive studies of law or social hierarchy. The methodology of this book could be applied to any Christian sect across Europe in the early modern period as a comparative study, and if Barclay wishes to expand on her own work, an investigation of old age’s place in caritas would be much appreciated. As much of the book deals with education and sexual immorality, children and the unmarried necessarily take the forefront for much of the work. However, Barclay intriguingly mentions that in some parts of Scotland, community elders presided over formal Kirk disciplines alongside the minister. Although Barclay states that these courts became increasingly marginal and ineffective throughout the period, age as an indicator of wisdom and community leadership could enrich further studies of caritas.

Overall, this is a strong work which breathes life into its subject matter, allowing for an examination of complex social and personal issues including domestic abuse and violence, premarital and extramarital sexuality, and the material and immaterial boundaries of society and community. Crucially, Barclay’s finding that almost all immoral and even some criminal actions could be redeemed through caritas provides a new perspective for researchers interested in society, religion, and acceptable behaviour in the early modern period.




Barclay, K., Caritas: Neighbourly Love and the Early Modern Self (Oxford, 2021).

Ingram, M., Carnal Knowledge: Regulating Sex in England, 1470–1600 (Cambridge, 2017).

Phillips, A., and Taylor, B., On Kindness (London, 2009).

Reddy, W., The Navigation of Feeling: A Framework for the History of Emotions (Cambridge, 2001).

Roper, L., The Holy Household: Women and Morals in Reformation Augsburg (Oxford, 1989).

Scheer, M., ‘Are emotions a kind of practice (and is that what makes them have a History)? A Bourdieuian approach to understanding emotion’, History and Theory 51/2 (2012), pp. 193–220.



[1] K. Barclay, Caritas: Neighbourly Love and the Early Modern Self (Oxford, 2021), pp. 4, 19–21.

[2] Barclay, Caritas, pp. 14, 24.

[3] Barclay, Caritas, p. 3.

[4] Barclay, Caritas, p. 3.

[5] See W. Reddy, The Navigation of Feeling: A Framework for the History of Emotions (Cambridge, 2001); and M. Scheer, ‘Are Emotions a Kind of Practice?’.

[6] Barclay, Caritas, p. 3.

[7] Barclay, Caritas, p. 39.

[8] Barclay, Caritas, p. 64.

[9] Barclay, Caritas, p. 95.

[10] Barclay, Caritas, p. 118.

[11] Barclay, Caritas, pp. 95–97.

[12] Barclay, Caritas, p. 150.

[13] Barclay, Caritas, p. 166.

Milanovic, B. Global Inequality: a New Approach for the Age of Globalisation (London, 2016)



Milanovic’s Global Inequality: A New Approach for the Age of Globalisation seeks to create a new model for explaining patterns in the growth and decline of inequality in the world, remodelling the Kuznets hypothesis to take account of the rise of a “global plutocracy” and wage stagnation among the Western middle classes. In doing so, he makes some pessimistic forecasts for the immediate future of the middle classes in the West and issues timely warnings against neglecting income and wealth inequality in favour of ‘existential inequality’.

Biography: Sam Tarran is an MA student in History at the University of Birmingham, focusing on political institutions in medieval Europe. His undergraduate degree is from Mansfield College, University of Oxford.

Global Inequality: A New Approach for the Age of Globalisation is not a new book. Despite the fact that it was published relatively recently, it still feels necessary to recount the context in which Milanovic wrote and released it. So much has happened since that it can feel like another world, already part of ‘history’. At the time, inequality both within and between countries felt important. The Occupy movement had ended but was still in the mind’s eye. The world was just beginning to creep out of the Great Recession. The public of many countries, after nearly seven years of economic decline, wage pressure and austerity policies, were feeling restless. Many, after Occupy, public sector cuts and the (initial) electoral success of Syriza in Greece, thought it would be the political left that would make hay. It is only with hindsight that many now claim that it was historically predictable that the populist right would capitalise.

Milanovic published his book in 2016. Since then, we have had Trump, Brexit, the ‘culture wars’ and Covid-19. The arguments over economic inequality now feel traditional and unfashionable. The book remains underread, but despite falling foul of current Twitter trends, it somehow feels timely. Overall, Milanovic’s analysis and argument – well-written and engaging as it is – would benefit from a deeper survey of pre-modern historical periods and the world outside the West. He also discusses potential solutions that are politically and socially impossible. Where Milanovic is most interesting is his warning against overly focusing on ‘existential inequality’, defined as the unequal “legal treatment of different groups” based on factors such as race, disability, sexual preference and gender (p. 226). This, he argues, is unhelpful as it feeds identity politics, splitting the public into communitarian interest groups who, once their own campaigns and needs are sated, will not aid others. This inhibits the collective action needed to create real change and blocks the discussion of the “harder” questions of how to solve wealth and income inequality both domestically and globally. Reducing income inequality, he points out, also helps with gender and race equality, although he does not offer any substantive examples. If Milanovic’s point was relevant then, his pleas are urgent now. We have, somehow, forgotten about income and wealth inequality.

Milanovic seeks to analyse inequality on a global level and ‘not as a national phenomenon only, as had been done for the past century’ (p. 2). In this, he alludes to the works of Karl Marx and Adam Smith, a previous generation of political economists who sought explanatory power on a planetary scale. There were, however, other works published post-2010 on this topic. Milanovic himself cites Thomas Piketty’s Capital in the Twenty-First Century, first published in 2013. There was also Jason Hickel’s The Divide: A Brief Guide to Global Inequality and Its Solutions, published in 2017. The Great Recession had also spurred new academic interest in the subject from a historical perspective. Histories of Global Inequality: New Perspectives, a collection of works edited by Steven Jensen and Christian Christiansen, seeks to ‘historicise Piketty’. Piketty’s work is reformist, seeking to save capitalism from within by recommending measures such as a global redistribution of wealth, effectively scaling up the tax-and-spend policies of the West. These suggestions were enthusiastically met by some at the time, but their chances of success are difficult to envision. Witness, for example, the new ‘progressive’ President of the United States threatening sanctions on Britain and the EU for daring to implement new taxes on America’s digital giants. Milanovic is similarly bold in his recommendations, but understandably less optimistic about their success. Despite the many graphs, statistics and discussion of the Gini coefficient, the work remains very accessible. The language is largely plain English and the analysis, even for a layman such as this reader, is relatively easy to follow.

From the outset, he sets out themes that, post-Brexit and post-Trump, we recognise instantly: the rise of a global middle class based primarily in Asia, the income stagnation of the middle to lower-middle classes in the West, falling social mobility, and the ‘emergence of a global plutocracy’ (p. 3). Milanovic re-posits the Kuznets hypothesis – that industrialisation leads to higher and later reduced income inequality – as “waves” to take account of our secular stagnation, where periods of intense technological innovation increase inequality before political pressure and educational attainment reduces it again. The technological changes post-1980 have facilitated the rise of finance and tech billionaires who, through the newfound portability of their skills and capital, have become disconnected from their national polities. Milanovic is cold in his analysis of this group until the final chapter, when he just holds back from condemning the hypocrisy of a global top 10% that produces 50% of world carbon emissions flying in private jets to preach about climate change.

Like many economists and economic historians, Milanovic is most comfortable when discussing his model in the context of periods and countries for which we have plenty of data. His analysis therefore skews modern and towards the US and UK. He is on less firm footing when he moves into the early modern, medieval and non-Western worlds. At one point, he freely uses the term ‘capitalist’ in relation to cities in the late Middle Ages. He would have been better discussing income inequality and ‘surpluses’ in the High Middle Ages, when monasteries thrived, guilds were formed, trade increased, fens were drained, and a Latin aristocracy expanded its influence into the former European periphery.[1] Ironically, it was in the following period, in the aftermath of the Black Death, that political agitation among the middling sort began, just as the polities of Europe were starting to expand their bureaucratic reach and depth.[2] A discussion of these periods, and a reconciliation of this apparent contradiction with his model – one might expect agitation to come during the period of greater growth and inequality – would have been illuminating. Instead, Milanovic largely skips straight from the fall of Rome to the ‘commercial revolution’ of sixteenth-century Italy. The skipping of the medieval period and Eurocentrism are common problems in attempts at global histories, particularly those focused on inequality. The sources, unfortunately, are inconvenient. Medieval records can be patchy and incomplete. Records from outside the West present problems of language and cultural interpretation. A collaborative approach is necessary to bridge these gaps, but pressures of time and resources can make this difficult.

Milanovic then moves from inequality within countries to inequality between countries. Here, he does not seek to explain why the West is richer than the East, which has been done elsewhere. Instead, he predicts that the fast-growing economies of the Global South, namely in Asia, will soon begin to move through the Kuznets waves in much the same way as the West. In doing so, they will face similar challenges to the West in the nineteenth and twentieth centuries as the middle classes seek enfranchisement. Further, the citizenship premium that we in the West currently enjoy for being born in wealthier countries is slowly being eroded as location becomes a less significant factor in one’s income potential.

So, what are Milanovic’s predictions for the future? He wisely cautions against specific forecasts in Chapter 4, pointing out that economists often extrapolate existing trends into the future without taking account of the unexpected, referencing Taleb’s black swans: large-scale, often unpredicted events with far-reaching consequences, such as tsunamis or political assassinations. Few writers of the 1970s, for example, predicted the Reagan-Thatcher boom and the collapse of communism. Indeed, some even predicted a convergence between the two, with the Soviet bloc liberalising slightly and the West becoming more statist. He therefore resists specific predictions but does venture some ideas. His belief that the Great Convergence will continue, no matter what happens to China’s economic growth, is hardly controversial. He shows, however, that the Convergence is largely an Asian phenomenon, which means global inequality is likely to continue to grow unless Africa also begins to converge.

He is also pessimistic about the future of the Western middle classes, who he argues will continue to be squeezed by globalisation, by automation, and by the more debatable assertion that education has, more or less, hit its quantitative and qualitative limit. As capital and, increasingly, labour become more difficult to tax, his solution is more punitive inheritance taxes to reduce the power of “endowments”, alongside tax policies that encourage share ownership and financial asset ownership among the lower and middle classes. He compares the systems of Taiwan and Canada to illustrate how this approach can be as effective as a traditional tax-and-spend system. In the next breath, however, he says this is pointless without increased accessibility to quality education, contradicting his own earlier comments on education.

This, unfortunately, is the pattern of the book: some good economic analysis, followed by political assertions that are often less reinforced by evidence, then proposed solutions that are politically unpalatable. Another example is his proposal to resolve the conflict between the high numbers of migrants and the typical internal resistance to immigration: implement discriminatory policies against migrants which make clear they do not share the privileges of ‘native’ citizens. As Milanovic himself intimates, this is unlikely to be accepted by anyone, except, ironically, the migrants themselves. In that sense, Milanovic falls into the classic economist (perhaps, even, academic) trap of proposing ‘evidence-based policy’ that ignores specific social and political contexts. His predictions for Asian societies may also be helped by a consideration of the local and non-economic forces impacting on politics and demographics in the region. To some extent, we have the benefit of hindsight here. China has continued to grow apace, but its burgeoning middle class seems as compliant and pacified as ever thanks to its continued prosperity and Xi’s increasingly aggressive nationalism. This may prevent China moving up its own Kuznets wave any time soon.

His model and his analysis would have greater power and more nuance if they were supplemented by extended historical discussion of the non-Western and non-modern worlds and of the powerful forces that exist outside economics. Nevertheless, Milanovic’s work is comprehensible, sweeping, and at times gripping. Its structure is logical and helpful. He manages to explain complex economic phenomena for a general audience without being patronising. Further, its message – that inequality will continue to exist despite globalisation, and that we ignore it at our peril – feels incredibly relevant as it struggles to cut through the noise of contemporary political fashions.




Bartlett, R., The Making of Europe: Conquest, Colonisation and Cultural Change, 950-1350 (London, 1994)

Christiansen, C.O., & Jensen, S., Histories of Global Inequality: New Perspectives (London, 2019)

Hickel, J., The Divide: a Brief Guide to Global Inequality and Its Solutions (London, 2017)

Piketty, T., Capital in the Twenty-First Century (Paris, 2013)

Watts, J., The Making of Polities: Europe, 1300-1500 (Cambridge, 2009)



[1] See, for example, R. Bartlett, The Making of Europe: Conquest, Colonisation and Cultural Change, 950-1350 (London, 1994)

[2] J. Watts, The Making of Polities: Europe, 1300-1500 (Cambridge, 2009)

Jury Nullification: The Short History of a Little Understood Power


Richard Marshall is a PhD student in History at the University of Plymouth. His doctoral research explores the place of trial by jury in the politics, culture and society of late eighteenth-century English radicalism. He is supervised by Dr James Gregory and Dr Claire Fitzpatrick.

Keywords: Juries, Nullification, Legal History, Popular Justice, Repression, Liberties, Rights

Jury nullification (or jury equity) is perhaps the greatest safeguard against unjust laws or excessive punishment to exist in Britain. It is the practice whereby a jury delivers a verdict contrary to the evidence, the law and judicial directions, acquitting a defendant they believe beyond a reasonable doubt to be guilty by the letter of the law but who, on grounds of conscience, they think should not be punished.

This little understood power has existed since a 1670 ruling declaring that no juror could be punished for a ‘wrong’ verdict, which when coupled with the double jeopardy rule leaves the door ajar for juries to essentially override laws.

For centuries it was considered a critical element of the constitution: a check against tyrannical government and unfair laws, but above all a mechanism that permitted the people to directly influence, comment upon and force reform of laws they deemed morally suspect. What’s more, it was widely understood and discussed as a normal part of the legal process. The idea that a jury of ‘freeborn Englishmen’ could not deviate from a judge’s charge or the letter of the law was almost universally rejected. Indeed, it was not uncommon for jurors to be encouraged to ignore or reject judicial guidance and actively consider not just the evidence but the interests of their community, religion, nation, and constitution.[1] When an English juror entered the jury box in the eighteenth century, he was expected to act for his country, and discretion was the norm. Granted, not everyone accepted the idea, but none denied the existence of nullification nor the rights and powers of jurors as the ultimate arbiters of justice. Most today, though, will never have heard of this power. For too many in the modern legal fraternity nullification is a dirty word, with a myriad of objections raised against it.[2] Most notably, they argue it is ineffective at procuring meaningful change and encourages prejudice among jurors.

I was brought to think about the history of this murky power by the recently introduced Police, Crime, Sentencing and Courts Bill, a politically interested attempt to attack freedoms of protest and speech and the rights of the traveller community all at once. Regarding protest, it seeks to lower the legal test required for police to act against otherwise legitimate protest, its most egregious element being the new offence of ‘public nuisance’. This will carry a maximum ten-year prison term and is defined as causing ‘serious harm’ to the public through ‘serious annoyance, serious inconvenience or serious loss of amenity’. Meanwhile, the Bill also proposes to criminalise trespass, an effort to attack the freedoms of the traveller community opposed by the majority of policing bodies, including both the National Police Chiefs’ Council and the Association of Police and Crime Commissioners.

The backlash has been immense. Politicians, lawyers, civil rights organisations, charities and many others have spoken out against the Bill, as have many thousands in mass demonstrations across the nation. It would not be unfair to suggest that a significant minority, perhaps more, oppose the provisions of this Bill as infringements on liberty. This was the very sort of legislation radicals of the past would have hoped and indeed encouraged jurors to nullify.

But does this mean we should today? For me, the answer is an unquestionable yes. Nullification exists for just these circumstances, to frustrate legislation passed or enforced spuriously or for political reasons. And the past provides plentiful justifications and precedents for its use.

In the first instance, our national history is littered with cases where juries, in spite of evidence and judicial direction, acquitted defendants to the benefit of society. Arguably the most well-known is that of Clive Ponting, the British civil servant who leaked documents relating to the sinking of the General Belgrano during the Falklands Conflict. His trial for allegedly breaching the Official Secrets Act was a cause célèbre and his unexpected acquittal, contrary to the judge’s direction, caused enough consternation for the Conservative government to remove the ‘Public Interest Defence’ Ponting relied on from Official Secrets legislation in 1989.

But the Ponting case is an outlier. For one, the acquittal had a basis in law thanks to the Public Interest Defence. A true act of nullification occurs where the law does not provide an adequate escape route and its provisions are thus arbitrary. The new Policing Bill promises to be such a law. To justify employing nullification against such an Act, there are myriad examples where jurors have acquitted in the face of such legislation. Take for instance the case which established the practice, the trial of two Quakers, William Penn and William Mead, in 1670.

The pair were charged under the Conventicle Act, which restricted non-Anglicans to meetings of no more than five people. Mead and Penn were arrested while preaching to several hundred.

Despite overwhelming evidence and threats from the judge, the jury refused to convict, believing the law to be morally wrong. The subsequent legal ruling that no juror could be punished merely for their decision still reverberates today and is even commemorated at the Old Bailey. This commemoration strikes me as almost a burlesque, given the staunch opposition of the courts and the Crown to jurors being permitted to understand the ramifications of their verdicts.



Primary Sources

Hawles, J., The Englishman’s Right: A Dialogue between a Barrister at Law and a Juryman &c. (London, 1680).


Secondary Sources

Darbyshire, P., ‘The Lamp that Shows that Freedom Lives—Is It Worth the Candle?’ Criminal Law Review (1991), pp. 740–752.

Handler, P., ‘Forgery and the End of the “Bloody Code” in Early Nineteenth-Century England’, Historical Journal, 48/3 (2005), pp. 683–702.

Harling, P., ‘The Law of Libel and the Limits of Repression, 1790–1832’, Historical Journal, 44/1 (2001), pp. 107–134.

McGowen, R., ‘From Pillory to Gallows: The Punishment of Forgery in the Age of the Financial Revolution’, Past and Present, 165 (1999), pp. 107–140.

Wharam, A., The Treason Trials, 1794 (Leicester, 1992).


[1] Guides for jurors often encouraged them to remain independent of the judge and emphasised that they represented their fellow citizens in court. A powerful and frequently reprinted example was J. Hawles, The Englishman’s Right: A Dialogue between a Barrister at Law and a Juryman &c. (London, 1680).

[2] See P. Darbyshire, ‘The Lamp that Shows that Freedom Lives—Is It Worth the Candle?’ Criminal Law Review (1991), pp. 740–752.

The Northern Question: A History of a Divided Country by Tom Hazeldine



In this review of Tom Hazeldine’s The Northern Question, David Civil explores how Britain’s geographic cultural constructions and regional inequalities have impacted on the nation’s politics from the Industrial Revolution to Brexit. The Northern Question: A History of a Divided Country, Tom Hazeldine, London, Verso Books, 2020, ISBN: 9781786634061; 290pp.; Price: £20.00.

Biography: David Civil is the MHR’s Spotlight Editor and completed his PhD in History on the concept of meritocracy in 2020. 

A Northern parliamentary seat flipping in a by-election from Labour to Conservative for the first time in its history may immediately bring to mind recent, post-Brexit political developments. We might instinctively reach for the language of ‘left-behind’ voters or highlight the significance of ‘Red Wall’ constituencies to explain this unprecedented electoral shift. And yet this by-election in Workington took place in 1976. It saw the return of 35-year-old Richard Page for a Conservative Party under the new leadership of Margaret Thatcher. In 1979 the seat would return to Labour, until the 2019 General Election when it flipped to the Conservatives as part of their assault on the so-called ‘Red Wall’. The notion of ‘Workington Man’ appeared to capture the impact of Brexit in ripping apart old loyalties and traditional voting patterns. Yet as Page’s triumph in 1976 demonstrates, these developments have a long history. The editor of the New Left Review, Tom Hazeldine, sets out to explore this history in his recent book, The Northern Question: A History of a Divided Country. Post-Brexit Northern England, Hazeldine claims, has propelled itself to the ‘foreground of national attention for the first time since the socio-economic crisis of the Thatcher years’ (p. xii). By exploring the interaction between nation-state, social class and geographical region, The Northern Question hopes to ‘let some light in through several windows’ to illuminate a debate whose importance is only going to grow in the next decade and beyond (p. xiv).

Hazeldine’s historical survey begins in Chapter 2, framing the North as the ‘Badlands’ which for the most part refused to adopt the ‘intense feudalism of the Midlands and the South’ (p. 31). If the burdens of serfdom were ‘generally lighter’ in the North ‘rural benightedness – the absence of towns and literacy – was correspondingly deeper’ (p. 31). For Hazeldine, it is clear that evidence of Northern disadvantage vis-à-vis their Southern neighbour is visible even during the early modern period. Even under pre-modern conditions ‘not even a member of the royal line could scrabble together quite enough strength in the North to reign in defiance of Establishment opinion’ (p. 33). In a theme that runs throughout The Northern Question, ‘the sine qua non of governing England was to have its southern heartland on side’ (p. 33). This whistle-stop tour ends with the historical event or process which gave the North its defining characteristics in the national imagination: the Industrial Revolution. Chapter 3 traces how the forces of industrialism and revolt intersected throughout the nineteenth century as the exponential growth of the factory system was accompanied by Luddites, Chartists, and democratic reformers. Despite these upheavals, ‘the pre-industrial mould of British politics remained unbroken, with fateful consequences for the North once its commercial fortunes began to slide’ (p. 70). Drawing on Elizabeth Gaskell’s 1854 novel North and South, Hazeldine claims that ‘even in the land of long chimneys’, business survival still hinged on the ‘attitude taken by the traditional landowning and monied interests of the South’ (p. 70). Chapter 4, focusing on the ‘capital-goods phase of the manufacturing revolution’, expands on these themes to demonstrate how despite the mirage of Northern prosperity, ‘Liverpool and Manchester wealth holders were second only to Londoners in their readiness to put money into foreign undertakings’. 
It is at this point, at the start of the twentieth century, that Hazeldine’s ‘declinist’ thesis begins to emerge in full force: ‘For the North it was a case of so far and no further: from now on, it would have to sink or swim with its nineteenth-century coal mines, textile mills, steelworks and shipyards’ (pp. 72–73).

The repercussions of the First World War loosened Lancashire’s grip on British India, the biggest outlet for its cotton goods. In Hazeldine’s words, ‘outpaced by late-start competitors in Europe, America and the Far East, dependent on imperial privileges approaching expiry, the world’s first industrial region was about to experience the ground giving way beneath it’ (pp. 89–90). Chapter 5, entitled ‘Dereliction’, gives an indication of Hazeldine’s assessment of the British state’s response. Despite the parliamentary breakthrough of the Labour Party, the two wings of the labour movement took turns to court disaster through the timidity of their leaders in the face of the General Strike and the Great Depression. By 1934 unemployment had fallen back to single digits in London and the South East but remained stuck above twenty percent in the North and Scotland (p. 93, pp. 105–06). It would take the full mobilisation of national resources over the course of the Second World War to reduce the concentration of economic activity in the South East and to achieve ‘the rationalisation of Outer Britain that peacetime politics at Westminster had singularly failed to deliver’ (p. 113). Chapter 6 analyses how this rationalisation was squandered by both Labour and Conservative governments determined to keep the pound strong and British imperialism intact. Regional policy, as proclaimed by post-war governments of both stripes, came to involve ‘merely a modest stimulus to private-sector investment and job creation, aimed at indemnifying the ruling parties against accusations of neglect, should slump conditions return to the industrial towns of northern England, Scotland and Wales’ (p. 116). Instead of overhauling the Northern manufacturing base ‘before the resumption of commercial competition from continental Europe and Japan’, Hazeldine claims, Attlee was ploughing ‘limitless amounts of money’ into a ‘clandestine nuclear-weapons programme’ (p. 120, 115). 
This neglect, he argues, continued into the 1960s. Harold Wilson’s modernisation programme, which helped the Labour Party win the 1964 General Election, was sacrificed, Hazeldine claims, on the altar of ‘slavish monetary orthodoxy’ (p. 134). Wilson’s ‘prolonged exposure to the official mind acculturated him to the innermost impulses of the British state, instilling a reverence for the monarchy, for centralisation and for the pound sterling’ (p. 132). In Hazeldine’s terms, throughout the twentieth century the representatives of labour were just as ‘complacent as those of capital’ (p. 122).

In relative terms, then, the North did not ‘tread water’ during the ‘golden age of capitalism’, and during the 1970s and 1980s, Hazeldine claims, ‘it would be forced below the waterline, never to re-emerge’ (p. 136). This intensification of Northern decline is explored in Chapters 7 and 8. As growth slowed and the seemingly existential crisis of stagflation began to grip the British economy in the 1970s, mass redundancies developed on top of existing regional disparities. Unemployment provoked a profound response from the industrial working class, and the 1974 ‘Who Governs Britain?’ General Election which brought down the Conservative government of Ted Heath represented a ‘remarkable victory for the industrial ranks of Outer Britain’ (p. 144). This working-class momentum was undone in Hazeldine’s account, however, by conservative Labour governments led consecutively by Harold Wilson and Jim Callaghan. An alternative political economy advanced by Tony Benn, the Secretary of State for Industry between 1974 and 1975, and embodied in the Kirkby Manufacturing and Engineering cooperative, was ostracised in favour of ‘sound money and an open market economy’ (pp. 145–46). For Hazeldine, the Labour government’s cowardice in the face of the forces of capital reached its apogee with the IMF crisis of 1976. This represented the moment that the financial crisis of the post-war state was ‘resolved on City terms at the cost of a lingering recession in Outer Britain’ (p. 150). In these chapters Hazeldine deploys his central argument with the greatest persuasiveness. By highlighting how social democrats like Callaghan and Denis Healey laid the groundwork for neoliberalism, The Northern Question argues there was something inherent in the British state which meant that at regular historical intervals its default was to side with the forces of capital at the expense of the interests of labour. 
In this sense, by using the North as a lens, Hazeldine portrays Thatcherism more as an intensification or consolidation of pre-existing patterns of political economy rather than as a revolutionary ideology. In Hazeldine’s account Thatcherism governed from the south and ‘tightened the austerity introduced by Callaghan’s Labour’ (p. 159). Rather than portraying the period as a ‘marketplace of ideas’ where alternative visions of Britain’s political economy competed for ascendancy, Hazeldine characterises the 1970s as one long march to free-market neoliberalism.[1]

While this is a contestable interpretation of the ideological shifts of the 1970s and 1980s, the fact that neoliberalism consolidated its grip over British politics in the 1990s and 2000s is less open to question. The final two chapters of The Northern Question tell the story of this consolidation and bring the narrative up to the contemporary moment, where neoliberal ascendancy appears to be assailed on all sides. Hazeldine’s lens becomes a little blurred when exploring New Labour: on the one hand, he repeats familiar tropes that the Blairite Labour Party represented Thatcher’s ‘greatest achievement’ (p. 177); on the other, he highlights how deindustrialisation in the North was ‘buffered by the stimulants’ of higher public spending, which increased by over six percent a year in real terms between 1999 and 2006 (p. 180). These stimulants would quickly be withdrawn, however, following the 2008 financial crisis. The North East lost fourteen percent of its public sector workforce under the coalition while the South East shed less than three percent. Under David Cameron’s prime ministership median household wealth in London increased by fourteen percent while it fell eight percent in Yorkshire and the Humber. For Hazeldine, Brexit and the 2017 General Election were consequences of these regional disparities. Three parallel voter insurgencies left their mark on the post-2008 distemper:


a Brexit revolt that originated in opposition to Maastricht among City mavericks and Tory voters in the market-town South, but then pivoted to attract northern working-class communities left behind by the New Labour boom and reeling from Conservative-Lib Dem austerity; the left-inclined 2014 Yes campaign for Scottish independence [. . .] and a Corbynist upsurge pitting a millennial precariat against a still largely Blairite Parliamentary Labour Party (p. 197).


While Corbyn managed to hold together an electoral coalition from amongst these various insurgencies in 2017—making the Labour Party’s first Commons gains in a General Election in twenty years—this was achieved by capturing major Northern cities and obfuscation on Brexit. In the aftermath of the election the Brexit movement and the Corbynite Labour Party increasingly faced in opposite directions. The final chapter of The Northern Question entitled ‘Taking a Stand’ explores the consequences of this divergence. Considering his damning indictment of Harold Wilson’s leadership of the Labour Party, it is surprising to see Hazeldine claim that Corbyn should have stuck to the ‘Wilson model’ of ‘personal neutrality’ over Brexit. In the end Corbyn and John McDonnell, ‘pinned their economic programme to a Brexit stance indistinguishable from that of the London establishment against which a large part of the Leave vote had been directed’ (pp. 212–13). In terms of seats, the General Election of 2019 turned on a Labour collapse in the deindustrialised small towns and former pit villages of the Midlands and the North. In these ‘red wall’ regions, 750,000 Leave voters switched to the Conservatives while hundreds of thousands more stayed at home (p. 207). If it is straightforward to demonstrate that Brexit cut across traditional political loyalties and divisions, explaining why remains one of the most contested issues in contemporary Britain. Too many commentators rush to proclaim a loosely conceptualised cultural politics as the key dividing line, yet it is clearly more complicated than this. For Hazeldine, ‘Brexit handed a political weapon to a class and region that had been denied one by Labourist hegemony for so long’ (p. 220). Yet as Will Davies has recently argued, much of what is labelled ‘populism’ is ‘really a longing for some version of the state that predated neoliberal reforms’. 
The slogan ‘take back control’ appealed to older Brexit voters precisely because they could remember a time when the state was ‘in command of its own economy and able to deliver social security to its own citizens’, a product of the very ‘labourist hegemony’ that Hazeldine deplores.[2] Throughout The Northern Question post-war social democracy is attacked for its repeated capitulations to the forces of capital and characterised as an ideological formation that includes everyone from Denis Healey to, staggeringly, Dominic Cummings who, Hazeldine claims, ‘wrapped the official Vote Leave campaign in social-democratic colours’ (p. 203).

While it is clearly possible to attack post-war social democracy for its failure to adequately subdue the forces of capital, the reduction of this changing, nuanced and electorally successful ideological formation to a handmaiden of capitalist power speaks to a broader problem with The Northern Question. Namely, that Hazeldine lacks a compelling explanation for why the North has been so neglected by successive governments or rulers over at least two centuries beyond the fact that the British state inherently sided with the forces of capital (a byword for ‘the South’) over those of labour (a byword for ‘the North’). Now there is of course a large amount of truth in this reasoning. British institutions proved uniquely adept at preserving the interests of landed and financial interests and co-opting dissenters to stifle unrest. While the interests of the North might have been consistently marginalised, however, they were marginalised for different reasons over the course of the nineteenth and twentieth centuries. At its worst Hazeldine’s approach simply lumps social democrats and neoliberals together into a large amalgam called ‘the establishment’. By adopting Hazeldine’s perspective we learn nothing about the consequences of the profound ideological and conceptual shifts of modern British history; about the impact of the welfare state or the rise of the free market. This largely stems from Hazeldine’s ‘declinism’.[3] The Northern Question is heavily indebted to the work of the political theorist Tom Nairn. 
Alongside fellow New Left intellectual Perry Anderson, Nairn advanced the thesis that the British polity was uniquely conditioned by the absence of a truly bourgeois revolutionary moment.[4] Instead the aristocracy absorbed the emergent bourgeoisie from the early nineteenth century, preserving the ancien régime and dooming any attempt at modernisation to failure.[5] There is no acknowledgement in The Northern Question, however, of the fact that this thesis has been powerfully challenged in the intervening decades.[6] Declinist critiques are bound up with a variety of cultural assumptions about the nature of work, Britain’s place in the world and its transition to a welfare state. The latter, which played an important role in the development of Britain’s service economy, is barely mentioned by Hazeldine. In many ways, as the likes of Jim Tomlinson have argued, New Left critiques like The Northern Question end up mirroring a Thatcherite narrative of modern British history, if for largely different reasons.[7]

At the start of his account, Hazeldine informs the reader that to understand the North we must ‘delve into the politics of Westminster and Whitehall, observing these proceedings from a northern perspective, to see what English history looks like when stood upon its head’ (p. 23). Later on, he is critical of recent ‘party-political musings’ which, while treating the North as a significant electoral player, tend to project southern assumptions onto the region. These musings, he claims, ‘are not the same as asking what the region itself wants’ (p. 214). Yet voices from the region are exactly what The Northern Question lacks.

The most engaging sections of the book are those where Hazeldine explores the cultural products of the region, from the ‘Angry Young Men’ of the late 1950s to the ‘feel-good musicals of the New Labour boom’ embodied in The Full Monty (p. 220, 18). By the end of The Northern Question, it is difficult to know what the North is beyond a byword for industrialism and manufacturing, or to identify what makes it distinct. This partly stems from Hazeldine’s materialist approach and partly from the reductionism of the North-South binary itself as a lens. A much more fruitful approach would be to explore how this division of Britain became ingrained in the national imagination, compressing other regional or local identities. The Midlands, for example, is rarely the subject of such historical scrutiny. Where the Midlands is cited, it is usually grafted onto the North or South and rarely treated as a region in its own right. Hazeldine falls prey to a similar trap. ‘On a statistical basis’, he argues, ‘much more of the Midlands belongs on the northern side of the regional divide’ (p. 13). Hazeldine acknowledges that cities like Birmingham followed their own ‘distinct trajectory’ and have a ‘different story to tell’ than that explored in The Northern Question (p. 14). Yet he is forced to acknowledge where significant movements and processes spill over from ill-defined, contested and largely imagined regional boundaries. While it is undoubtedly true to say that the North-South divide continues to dominate discussions of regional inequality in modern Britain, there is a pressing need to explore the Midlands as a space of identity formation and political upheaval. The recently launched ‘Midlands Identities Project’, a one-day interdisciplinary conference supported by both The MHR and the Institute of Historical Research, could therefore not be timelier.

While the North-South divide might be an unrepresentative cultural construction, it undoubtedly retains a considerable grip over the national political, economic and cultural imagination. Hazeldine concludes The Northern Question with a powerful fact: the contemporary North accounts for one-quarter of the UK’s population and parliamentary constituencies as well as one-fifth of its GDP. While these economic indicators may be trending downwards, they do so from a great height. In this sense, Hazeldine argues, ‘the problem of the North isn’t going away anytime soon’ (p. 222). The North may benefit ‘from open bidding between the major parties for support’ but it is far from clear that it will settle for what Hazeldine identifies as the ‘post-war norm’: the ‘dispensing of palliatives to take the edge off structural economic change’ (p. 214). If his book fails to grapple with the contradictions and contestations inherent in Britain’s image of the North today, as well as amongst those who live there and lay claim to a Northern identity, Hazeldine’s historical survey is a timely reminder that the contours and constraints of contemporary politics are always liable to change, often in unexpected and surprising ways.




Anderson, P., ‘Origins of the Present Crisis’, New Left Review, 23 (1963), pp. 26–53.

Blackburn, D., ‘Penguin Books and the Marketplace for Ideas’, in L. Black, H. Pemberton & P. Thane (Eds.), Reassessing 1970s Britain (Manchester, 2013), pp. 224–51.

Davies, W., This is Not Normal: The Collapse of Liberal Britain (London, 2020). 

Edgerton, D., Warfare State: Britain, 1920–1970 (Cambridge, 2006). 

English, R., & Kenny, M., ‘Public Intellectuals and the Question of British Decline’, British Journal of Politics and International Relations, 3/3 (2001), pp. 259–83.

Hall, P.A., ‘Social Learning and the State: The Case of Economic Policymaking in Britain’, Comparative Politics, 5/3, (1993), pp. 275–96. 

Nairn, T., ‘The British Political Elite’, New Left Review, 23 (1963), pp. 19–25.

Tomlinson, J., ‘Thrice Denied: “Declinism” as a Recurrent Theme in British History in the Long Twentieth Century’, Twentieth Century British History, 20/2 (2009), pp. 227–51.


[1] For the 1970s as a ‘marketplace of ideas’, see: P.A. Hall, ‘Social Learning and the State: The Case of Economic Policymaking in Britain’, Comparative Politics, 5/3, (1993), pp. 275–96; D. Blackburn, ‘Penguin Books and the Marketplace for Ideas’, in L. Black, H. Pemberton & P. Thane (Eds.), Reassessing 1970s Britain (Manchester, 2013), pp. 224–51.

[2] W. Davies, This is Not Normal: The Collapse of Liberal Britain (London, 2020), p. 16.

[3] For the notion of ‘declinism’ see:  J. Tomlinson, ‘Thrice Denied: “Declinism” as a Recurrent Theme in British History in the Long Twentieth Century’, Twentieth Century British History, 20/2 (2009), pp. 227–51.

[4] See for example: T. Nairn, ‘The British Political Elite’, New Left Review, 23 (1963), pp. 19–25; P. Anderson, ‘Origins of the Present Crisis’, New Left Review, 23 (1963), pp. 26–53.

[5] For a good overview of this account, see: R. English & M. Kenny, ‘Public Intellectuals and the Question of British Decline’, British Journal of Politics and International Relations, 3/3 (2001), pp. 259–83.

[6] See for example: D. Edgerton, Warfare State: Britain, 1920–1970 (Cambridge, 2006).

[7] Tomlinson, ‘Thrice Denied’, p. 235.


Early English Books Online: Mass Digitization and the Archive



This review examines the origins and contemporary usage of the online archive Early English Books Online (EEBO). Highlighting recent advancements in digital historiography, alongside considerations of inherent archival bias, this article demonstrates a variety of circumstances in which the scholar is encouraged to look beyond the digital archive itself. EEBO is proposed here as a resource capable of profound innovation, one of preservationist historical necessity, and a logical extension of scholarship dating back to the early twentieth century and the Short-Title Catalogue. Yet EEBO is also a resource of human construction, and must therefore be approached with the same considerations one would bring to the physical archive, giving careful thought to the intersection of material and print culture and the ways in which they correlate. 

Biography: Conner Wilson is a postgraduate student at the University of Birmingham studying Shakespeare, his contemporaries, and Early Modern theatre culture.

Over the course of the past two decades, the mass digitization of the archive has radically transformed the breadth of primary source material readily available to the modern scholar. Online archives such as Early English Books Online (EEBO), Eighteenth Century Collections Online (ECCO), Manuscript Pamphleteering in Early Stuart England (MPESE), and the Old Bailey Proceedings Online, along with numerous others, have become integral to modern methodological approaches to historiography, with most, if not all, Masters and PhD programs requiring a compulsory module on navigating these resources. On one hand, this “revolution”[1] of digitization, as Tim Hitchcock describes it, represents a turning point for historians, as researchers embrace the advantages of immediacy and accessibility in the information age; yet, in a field where visual, material, and print culture so often coincide, how do we determine the accuracy with which these online archival substitutes can reproduce the unique phenomenological experience associated with resource tangibility? Or, for instance, how do researchers overcome the implicit bias of search bar algorithms in tandem with imperfect and outdated Optical Character Recognition (OCR)? While much of what has been written about EEBO tends to exist in a binary dialectic of good vs. bad, helpful vs. unhelpful, accurate vs. inaccurate, this article aims to move beyond such finite categorizations, acknowledging the trepidations of scholars who fear misuse while embracing the growing computational literacy of historical fields. This reciprocal analysis, alongside a detailed historical account of the creation of the database, presents EEBO as not too dissimilar to the physical archive: proposing that, with both the digital and the material, it is ultimately the historian’s job to determine relevancy and overcome inherent bias.
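The search-bar problem is easy to make concrete. The sketch below is purely illustrative and is not EEBO’s actual implementation; it simply shows why literal keyword matching struggles with Early Modern orthography, in which ‘u’/‘v’ and ‘i’/‘j’ were positional variants of the same letters and the long s (ſ) is easily misread.

```python
# Illustrative sketch only -- NOT EEBO's search implementation.
# Early modern printing treated u/v and i/j as positional variants,
# and used the long s (ſ), which OCR frequently confuses with 'f'.

def normalize(text: str) -> str:
    """Collapse common early modern spelling variants to one modern form."""
    text = text.lower()
    text = text.replace("ſ", "s")    # long s -> round s
    text = text.replace("vv", "w")   # double-v -> w
    text = text.replace("v", "u")    # u/v were interchangeable
    text = text.replace("j", "i")    # i/j likewise
    return text

def naive_match(query: str, text: str) -> bool:
    """Literal substring search, as a simple search bar might perform."""
    return query.lower() in text.lower()

def variant_aware_match(query: str, text: str) -> bool:
    """Search after collapsing spelling variants on both sides."""
    return normalize(query) in normalize(text)

early_modern = "Giue me leaue, ſir, to aduise you"

print(naive_match("give", early_modern))           # False: misses 'Giue'
print(variant_aware_match("give", early_modern))   # True
print(variant_aware_match("advise", early_modern)) # True: matches 'aduise'
```

A search engine that omits this kind of normalization silently returns false negatives for variant spellings, which is one concrete form the “implicit bias of search bar algorithms” mentioned above can take.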


EEBO’s Beginnings

The origins of EEBO can be traced all the way back to the early twentieth century. In 1918, on commission from The Bibliographical Society, the scholars A. W. Pollard and G. R. Redgrave began the monumental task of creating a unified catalogue covering all extant books, printed between 1475 and 1640, across Great Britain and North America. It was a project which would take nearly eight years of research and require an immense amount of interlibrary cooperation; by 1926, however, Pollard and Redgrave’s work, A Short-Title Catalogue of Books Printed in England, Scotland, & Ireland and of English Books Printed Abroad, 1475–1640, was finally ready for publication.[2] This Short-Title Catalogue, or STC as it is frequently abbreviated, immediately proved to be an invaluable road map for scholars in the sourcing of rare and out-of-print books. The STC covered the holdings of a myriad of libraries, provided bibliographic information on nearly 26,000 extant texts, and managed a scope of information which was unprecedented. The scholars had successfully proved that the cross-unification of resources and information was possible on a massive scale, so long as researchers were willing to acknowledge that it was “dangerous work for any one to handle lazily.”[3] This cautionary caveat would come to permeate historical research well into the information age. Considering now the contemporary trepidations around EEBO, it is fitting here to include Pollard and Redgrave’s initial caution that “in so large a work based on such varied sources, probably every kind of error will be found represented.”[4] Perhaps historians have always been cautiously self-aware of the dangers of mass bibliographic consolidation and the seductive illusion of an entirely comprehensive historical archive. Yet, despite this, the pair’s work has unequivocally become one of the most influential and enduring enterprises towards the sourcing of Early Modern texts. 
Fourteen years later, with the danger of the Second World War fast approaching and the advent of a new technology, microfilm, the American Council of Learned Societies felt that the processing and photographing of vulnerable Early English texts was a project which could not be delayed, and that the Short-Title Catalogue should become the bedrock from which the selection committee would work. Six million pages were prioritized for this microfilm reproduction process, with the ultimate objective of storing the facsimiles securely in America, farther from the increasingly volatile Western Front.[5] This decision to integrate the microphotographic imaging process with the STC would serve as the basis for what is now EEBO, with many of the original images captured by this commission populating the contemporary database today. It is imperative to understand that, while much work has been done since the original publication of the STC (notably Donald Wing’s subsequent, yet separate, catalogue expanding the breadth of titles from 1641 to 1700)[6] and on microfilm reproductions themselves (with STC titles continuing to be photographed well into the 1990s), the digital visual make-up of EEBO began nearly forty years before the advent of the internet; suffice it to say, the microphotographic process was not designed with its ultimate digital transference in mind. EEBO as we know it now would finally come into existence with the birth of the Text Creation Partnership (TCP) in 1999. This interlibrary effort to “create texts to a common standard suitable for search, display, navigation, and reuse” is the process on which the second half of this article will focus more specifically, as it has come to define the contemporary successes and pitfalls of the database.[7]


OCR, Comprehensive Digital Archives, and Material Culture

Perhaps the most extraordinary feat the EEBO-TCP partnership has undertaken is its avoidance of common OCR problems, by abandoning the technology altogether. The implementation of a “double-keyed”[8] transcription system, with human editors coding from the original microfilm images, boasts a “99.995%”[9] accuracy rating per text entered, thereby enabling the current sophistication of the simple and advanced search functions. While immensely expensive and labor intensive, this effort ensures a consistent accuracy which has previously proven difficult in transcriptions of Early Modern typefaces; it does, however, simultaneously shatter the illusion of an entirely comprehensive archive. As Ian Gadd notes, “EEBO does not include every copy of every edition published prior to 1701… nor even does it include a copy of every surviving edition published prior to 1701.”[10] This is an important distinction, in that the textual variances between subsequent editions of Early Modern books can prove to be drastic. One need look no further than two quarto editions of Hamlet (both available on EEBO) to find the famous “To be, or not to be, that is the question” (Tragedy of Hamlet 23)[11] rendered instead as “To be, or not to be, I there’s the point.” (Tragicall Historie of Hamlet 15)[12] Fortunately for Shakespeare, the fame of his work secures an archival placeholder for the various editions of his plays; however, it is near impossible to discern a similar degree of canonical completeness for the multitudes of lesser-known authors present on EEBO. If a scholar either unknowingly or willfully ignores this fact, the dangers of misrepresentation, false negatives, and false positives are relatively high.
Additionally, given that EEBO is keyed manually, a subsequent textual edition cannot be incorporated quickly, and the notion that EEBO could ever be entirely comprehensive rapidly falls away once the sheer labor intensity of the archive is considered. This is not to suggest that EEBO advertises itself as a comprehensive archive (neither did Pollard and Redgrave consider their work entirely comprehensive), but instead to caution the scholars who may be using the resource for the first time. One would not assume a physical library could possibly contain every text on a single subject, and the same principle must be applied to the digital.

Regarding the physical tangibility of primary source material, EEBO presents both clear advantages and disadvantages. Considering the preservationist origins of the online archive, the sheer volume of scholars who now have access to the texts without having to physically handle the pages is an immense victory for the longevity of Early Modern books. The reality that pages are turned less frequently, are less exposed to light, are kept at a consistent temperature, and are simply less prone to accidental human contamination will keep these resources accessible to those who need them for many years to come.[13] The shelf life of the Early Modern book was already precarious, and the digital archive helps keep these texts in the hands of those best qualified to safeguard their longevity. On the other hand, EEBO all but abandons the material culture of the printed book, as the researcher is, of course, not actually manipulating the original artifact. Books on EEBO all appear to be roughly the same size and dimensions, which is simply not the case.[14] Furthermore, microfilm does little to capture the handwritten notes of previous owners, thereby potentially overlooking valuable historiographical evidence. Likewise, many of the digital reproductions of seminal works available on the internet today do little to convey qualities such as the transportability or mobility of the original object. An Early Modern book capable of fitting in its owner’s pocket carries significantly different cultural weight than one which sits on the lectern of a library or lecture hall. Attending to such material features, the researcher may begin to unlock information such as the author’s contemporary popularity or their cordiality with publishers.
A book existing in multiple contemporaneous languages may reveal an author’s audience reach, their financial stability, or the sociological circle of which they were a member, all of which can in turn affect literary analysis.

While some of this information may be discerned from EEBO, the researcher must continue to be diligent and thorough with historiographical information beyond the text itself. This ultimately raises an important question at the intersection of material and print culture: can we consider the text of primary source material in a vacuum, or does removing the physical life of the object strip away vital information which in turn can affect textual analysis? The answer, of course, depends on the author, the text, the book itself, the contemporary and historical associations of the material, the previous owner(s), the type of research being conducted, and a litany of other potential factors; still, the contemporary historian must not be swayed into ignoring the material world which exists behind the digital reproduction, as there is certainly valuable information to be found there.



In 2001 John Jowett and Gabriel Egan authored one of the earliest reviews of EEBO, writing that “the potential for generating new research in early modern studies is considerable indeed… and electronic products such as EEBO… enable new forms of scholarly study which were not possible using paper and film technologies.”[16] Two decades later, this observation still holds true. EEBO has provided massive amounts of information to scholars over the years, ushering in exciting new historical discoveries which otherwise may have gone unrealized, been overlooked, or been significantly delayed. Moreover, the access which current students now have to primary source material is unprecedented and is transforming the very fabric of how academic arguments are conducted. In the midst of this exciting growth, it is vital for the researcher to remember that they must not rely solely on what is convenient. All archives, whether digital or physical, are ultimately human constructions and therefore contain limitations and biases, both conscious and unconscious. The examples outlined above are merely a few of the considerations scholars should take into account when conducting online research. As always, the historian shoulders the burden of accuracy, thoroughness, and overcoming bias, but when used diligently, the potentialities of EEBO are immense.



De But, R., ‘Managing Risks: what are the agents of deterioration’, <https://artsandculture.google.com/…what-are-the-agents-of-deterioration-trinity-college-dublin-library/PQKyBVnbqWmqLw?hl=en>, accessed 8.4.2021.

Gadd, I., ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), pp. 680-692.

Gavin, M., ‘How to Think about EEBO’, Textual Cultures, 11/1-2 (2017), pp. 70-102.

Heil, J. and Samuelson, T., ‘Book History in the Early Modern OCR Project or, Bringing Balance to the Force’, Journal for Early Modern Cultural Studies, 13/4 (2013), pp. 93-94.

Hitchcock, T., ‘Confronting the Digital’, Cultural and Social History, 10/1 (2013), pp. 9-23.

Jowett, J. and Egan, G., ‘Review of the Early English Books Online (EEBO)’, Interactive Early Modern Literary Studies (2001), pp. 1-13.

Nagle, B., ‘Introduction’, in Wing, D. (ed.), Short-title catalogue of books printed in England, Scotland, Ireland, Wales, and British America and of English books printed in other countries, 1641-1700 (New York, 1945), p. 10.

Shakespeare, W., The Tragedy of Hamlet Prince of Denmarke, Printed by George Eld for Iohn Smethwicke, and are to be sold at his shoppe in Saint Dunstons Church yeard in Fleetstreet. Vnder the Diall, (London, 1611).

Shakespeare, W., The Tragicall Historie of Hamlet, Prince of Denmarke, Printed [by Valentine Simmes] for N[icholas] L[ing] and Iohn Trundell, (London, 1603).

‘Text Creation Partnership’, <>, accessed 31.3.2021.

‘The results of keying instead of OCR’, <…content/results-of-keying/>, accessed 31.3.2021.


[1] T. Hitchcock, ‘Confronting the Digital’, Cultural and Social History, 10/1 (2013), p. 9.

[2] M. Gavin, ‘How to Think about EEBO’, Textual Cultures, 11/1-2 (2017), pp. 70-102.

[3] B. Nagle, ‘Introduction’, in D. Wing (ed.), Short-title catalogue of books printed in England, Scotland, Ireland, Wales, and British America and of English books printed in other countries, 1641-1700 (New York, 1945), p. 10.

[4] Nagle, ‘Introduction’, p. 10.

[5] Gavin, ‘How to Think about EEBO’, pp. 70-102.

[6] I. Gadd, ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), p. 683.

[7] ‘Text Creation Partnership’, <>, accessed 31.3.2021.

[8] J. Heil and T. Samuelson, ‘Book History in the Early Modern OCR Project or, Bringing Balance to the Force’, Journal for Early Modern Cultural Studies, 13/4 (2013), pp. 93-94.

[9] ‘The results of keying instead of OCR’, <>, accessed 31.3.2021.

[10] I. Gadd, ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), p. 686. (Italics mine).

[11] W. Shakespeare, The Tragedy of Hamlet Prince of Denmarke, Printed by George Eld for Iohn Smethwicke, and are to be sold at his shoppe in Saint Dunstons Church yeard in Fleetstreet. Vnder the Diall, (London, 1611), p. 23.

[12] W. Shakespeare, The Tragicall Historie of Hamlet, Prince of Denmarke, Printed [by Valentine Simmes] for N[icholas] L[ing] and Iohn Trundell, (London, 1603), p. 15.

[13] R. de But, ‘Managing Risks: what are the agents of deterioration’, <https://artsandculture.google.com/…what-are-the-agents-of-deterioration-trinity-college-dublin-library/PQKyBVnbqWmqLw?hl=en>, accessed 8.4.2021.

[14] I. Gadd, ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), p. 682.

Witches and the Devil in Early Modern Visual Cultures: Constructions of the Demonic Other


Throughout the early modern period, many Europeans believed in the reality of witchcraft. Those accused of being diabolic witches were thought to have signed a pact with Satan, to worship him, attend Sabbaths, and devise ways to harm humans through maleficia. Witches functioned as an inversion of Christian society, whereby they and their actions were emphasized as being ‘other’, while simultaneously reinforcing the societal norms they rejected. This article investigates representations of devils and witches, and the visual renderings of witchcraft belief, all of which helped construct their otherness. The paper will explore depictions of witches in early modern visual cultures by examining sixteenth- and seventeenth-century fine art, engravings, and woodcuts.

Keywords: Early Modern, witchcraft, supernatural, art, visual cultures, print, gender

Author Biography

Scott Eaton is an independent scholar who is currently researching the history of tea for the social heritage project You, Me and Tea. His research interests include early modern witchcraft, religion, gender, art, and print cultures. Scott’s monograph on the seventeenth-century witch-finder John Stearne, John Stearne’s Confirmation and Discovery of Witchcraft: Text, Context and Afterlife, was published by Routledge last year.




Throughout the Early Modern period, an estimated 90,000 people were prosecuted for witchcraft in Europe, about 50,000 of whom were executed.[1] In many witchcraft narratives and confessions, the Devil played a major role as he was believed to form a pact with witches, giving them powers in return for their soul. At the witches’ sabbath, the Devil was purported to be the figurehead, where witches allegedly gathered to have sex with and to worship him. The intrinsic connection between belief in the Devil, heresy, magic and witches led to the construction of the diabolic witch.[2] By absorbing and developing witch-theory in Europe, art became a way of engaging with these ideas and circulating them more widely.

This article explores the depictions of witches and devils in Early Modern European visual cultures, which helped to construct an image of witches as the ‘other’, as the enemy within. It discusses art concerning witches’ sabbaths, milk and weather magic, maleficia, the sexual threat of witches, and English woodcuts which conveyed the otherness of the witch’s body and familiar spirits. The commonality of the visuals chosen is their depiction of the demonic as an inversion of society and a pervasive threat to Christendom.


 Visuals of the dairy-witch and Tempestarii

One fear concerning witchcraft was the impact that magic could have on the economy and on targeted individuals. Early modern Europe was mostly comprised of agrarian communities where crops, livestock and dairy were very valuable commodities – disruption to these could spell disaster for the owner. Illustrations of dairy-stealing witches emerged in woodcuts such as those accompanying the 1486 edition of Hans Vintler’s Buch der Tugend (originally written in 1411 and modelled on an early fourteenth-century tract, Tommaso’s Fiori di Virtù) and Johann Geiler’s Die Emeis (1517) (Fig. 1). These images were also depicted on wall paintings, such as those by Albertus Pictor (c.1490) in Söderby-Karl, Uppland, Sweden, or the murals in Vejlby kirke, Århus (1492), and Tuse kirke in Holbæk (c.1460), Denmark.[3] Below, the image of dairy-stealing (Fig. 1) shows the witch using axe magic to pilfer milk from the cow into her pail – hence the emaciated cow in the background. The witch is syphoning the milk for profit, while the victim and their livestock suffer directly from the effects of the magic, and indirectly from its economic impact. Demonic elements are also visible in the image’s iconography, from the gathering storm of destruction to the smoking cauldron and the group of three female witches gathered to help enact the magic. In a fairly benign-looking village scene, the image depicts fears surrounding dairy produce – namely, that it could inexplicably spoil or disappear because of demonic witches’ direct meddling.

Figure 1: Witch Stealing Milk from a Neighbour’s Cow. Wellcome Collection, CC BY.

A more sinister ‘cumulative concept of witchcraft’ and demonology began to form in the 1400s, culminating at the end of the century. To many of the elite, demons were no longer considered to be external enemies that could easily be defeated through trickery, magic or piety, but were extremely powerful supernatural agents that invaded every part of daily life.[4] Some medieval scholars believed that demons and Satan could take human and animal form, make pacts with humans, influence thoughts and emotions, have sexual intercourse with humans and even produce offspring.[5] These beliefs helped inform and create early modern art depicting the demonic and the witch as the enemy.

Part of the vast repertoire of magic attributed to witches was weather magic. In visual cultures, Tempestarii were portrayed as the enemy of society, raising terrible weather which could damage or devastate crops, destroy buildings and sink ships. Ulrich Molitor’s popular text, De Lamiis et Pythonicis Mulieribus (1489), the first illustrated witchcraft treatise, showed witches creating weather magic. In one woodcut, two witches stand beside a flaming cauldron into which they cast a serpent and a rooster as sacrifices to enact the weather magic, as depicted by the clouds overhead (Fig. 4). The image was simplistic but influential in forming the iconography of witchcraft, especially for Tempestarii. Pieter Bruegel the Elder imitated these concepts and deployed them in a much more intricate manner. His engraving St James and the Magician Hermogenes (1565) (Fig. 2) depicts a cognate scene, while including evidence of ritualistic sorcery and crafting his demons and strange hybrids in the style of Hieronymus Bosch.[6] Bruegel’s engraving, loosely based on ‘The Golden Legend’, is loaded with demonic iconography. In the underground chamber of the image, demons are about to dismember a man, overseen by the Devil, and in the middle of the image we can see a witch reading a grimoire and shaking a sieve to divine or enact weather magic. She is aided by the other boiling cauldrons and flying witches scattered throughout the picture. Following the line of clouds, in the top right a witch riding a goat is amidst the storm, which has caused ships to sink and a church steeple to collapse, and, to the left, we see the outline of livestock being killed by the weather. The visuals clearly show demons and witches using weather magic to target and destroy individuals, even to level church buildings, representing witches as an enemy of Christendom.

Figure 2: After Pieter Bruegel the Elder, St. James and the Magician Hermogenes (1565). Public domain.

Jacques de Gheyn II’s Preparation for the Witches’ Sabbath (c.1610) (Fig. 3) also uses the motif of the witch with a cauldron, here producing huge plumes of smoke which texture the engraving’s background. Demons and witches abound in the engraving: at the bottom of the image three witches are gathered around a vase with a grimoire to create a potion, while the witches to the right open a cauldron, unleashing the clouds and smoke which envelop the sky. The square-topped volcano that is violently erupting serves to remind viewers of the natural, destructive powers that witches command, such as their purported ability to control weather – as evidenced by the witches preparing to hurl thunderbolts from the storm clouds.[7] In the background we can see a further indication of this: a man and his livestock are crossing a river on a raft and the outline of a city is depicted – both of these are likely to be the targets of the diabolic witchery presented in the foreground. The Witches’ Sabbath thus shows demonic forces preparing to lay siege to the Christian settlement, again locating witches as an enemy.

Figure 3: Designed by Jacques de Gheyn II, Preparation for the Witches’ Sabbath (c.1610). Public domain.

In a similar vein, Jan van de Velde II’s Heks/Sorceress (1626) positions witchcraft on the outskirts of the mundane, taking place under the cover of night. The engraving shows the witch throwing a powder into the cauldron for a magic ritual (signified by the circle, grimoire and skull), the smoke and fire belching out from the force of the wind produced and melding with the rest of the engraving. In front of the sorceress is an array of strange demons, perhaps signifying sins and vices.[8] Crucially, in the bottom right of the image we can see a house belonging either to the witch or to a villager, locating witchcraft in the domestic sphere and emphasising the threat that demonic witchcraft posed to daily life.[9]

Witches, demons, nudity, death, and weather magic are the familiar themes depicted in the art explored. The civilians in the background of the visuals remind viewers of the close proximity and the imminent threat of witchcraft to ordinary Christians.


Witches, power and sexuality

The concept of diabolic witchcraft gave impetus to its artistic depictions in woodcuts, engravings and paintings, visually showing the sexual threat of witchcraft, which evolved alongside other themes such as the Tempestarii. Simplistic woodcuts in Ulrich Molitor’s De Lamiis (1489) (Fig. 4) showed key witchcraft iconography, representing it as a threat and including themes like weather magic, witches flying on pitchforks, and a woman embracing a bestial devil. The latter positioned devils as hybrid creatures, depicting their bestiality and immorality, and as a threat to monogamy, since the woman’s head covering in the image indicates that she is married.[10] Civilisation is again represented in the background, at the very top of the woodcut.

Figure 4: Ulrich Molitor, De lamiis et pythonicis mulieribus (1489). Public domain.

In early modern intellectual thought, women were considered ‘the weaker vessel’ and men were believed to have a divinely sanctioned rule over them. Men were to fulfil this commandment by governing women through marriage and by ruling their households, as advised in conduct books such as William Gouge’s popular Of Domesticall Duties (1622).[11] As illustrated through Molitor, devils could prey on women, luring them into adultery, disorder and witchcraft, and therefore threatening to destabilise the very nucleus of social order – the household.

But women were not helpless: witches could tempt men through magic and the sexuality of their bodies, as witches were thought to be lustful and women to possess sexual capital. For example, Hans Baldung Grien’s painting The Weather Witches (1523) (Fig. 5) combines the sexual element of witches with the Tempestarii to identify the naked female body as a source of disorder. The swelling clouds indicate the witches’ power and destructiveness, while their windswept hair signifies the lust of the women, as does one witch’s cross-legged stance – a visual sign of immorality. In the lower tier of the image we see a shrouded goat symbolising the Devil and sexuality, and a small demon trapped in the flask (stoppered by the fruit of original sin) held by the woman on the right.[12] As Charles Zika noted, the iconography, the strong assertive poses of these women and the eroticism of their bodies convey the centrality of sexual desire and seduction to this image of witchcraft.[13] It shows that witchcraft was demonic in origin but also drew power from the sexuality of women, which gave the ‘weaker sex’ power over men.

Figure 5: Hans Baldung Grien, The Weather Witches (1523). Public domain.

Albrecht Dürer was another artist who encoded these concepts of demonic witchcraft in art, helping solidify the sexualised witch-figure. His engraving The Four Witches (1497) (Fig. 6) renders witches more inconspicuous, locating them within society and thus making them more threatening. The image shows four young, naked women standing together in a room, possibly a bathhouse. At first glance it may seem unassuming, but Dürer included cues for his audience to render its diabolic meaning unmistakable: a sinister aspect is added by the inclusion of the Devil emerging from the flames of hell in the bottom left of the image, and by the skull and bones on the floor. The rather cryptic letters ‘O.G.H’ written in a sphere above the witches’ heads could mean ‘O Gotte hüte’ (Oh God protect us [from the witches]). Additionally, the nudity of the women functioned as a contemporary cultural cipher of witchcraft as a sexual transgression. The positioning of their hands indicates sexual intimacy with each other, while the witches’ beauty, bodies and desirability represent a threat to the viewer, to men, and to the moral and social order. In the image, the women are empowered as witches, giving them magical influence over men and nature, but the image reasserts male authority by constructing an invisible prison, positioning the witches between the demon’s gaze from behind and the male viewer’s gaze from the front. Dürer places male viewers on the precipice of discovering the clandestine witches, just as the demon appears and male authority is challenged.[14] Dürer’s chiaroscuro woodcut Witches’ Sabbath (1510) conveys similar messages, showing the motif of a young woman astride a goat at the top of the woodcut, symbolising lust and parodying the male pastime of horse riding. The bottom half of the image portrays hag-like witches literally cooking up devilry, including weather magic and a demonic ritual.
The sexual element so common in witchcraft iconography is primarily evidenced here through the witches’ nudity and the phallic imagery on the left of the woodcut – witches have reclaimed gender power by stealing penises and dangling them over a wooden stick – thus showing how witches threatened Christianity, gender and established order.[15]

Figure 6: Albrecht Dürer, Four Witches (1497). Public domain.

In early modernity, it was believed that diabolic witchcraft was a complete inversion of established social norms. Women would eschew God, attend Sabbaths to worship the Devil, take part in infanticide and orgies and plan direct acts of maleficia, weather magic or the bewitching of men. The actions, sexual capital and demonic allegiance of witches illustrated the debilitating effects diabolism could have on Christian society.[16] Some images in this paper showed the witches brewing malefic magic just outside of normal society, on the peripheries, while some show that the witches were the ‘other’, infiltrating society and therefore a serious threat operating from within.


Enemy Within: English Witchcraft Pamphlets

The close proximity, and threat, of witches can be shown through the visual cultures of English witchcraft pamphlets, as they described actors in localised trials. These witches were also thought to have made a pact with the Devil, carried out maleficia and kept familiars, all while living amongst ordinary people and subverting norms. Visually, in woodcuts this ‘enemy within’ was often portrayed as an old, dishevelled woman, an evil hag with wrinkled skin, a long nose, a facial protrusion, and a cat for a pet – much like our witch stereotype today. In early modern print cultures, image and text suggested that the deformed exterior of the witch’s body was a mirror for the twisted interior of the mind. In this sense, the body was rendered as a readable text that betrayed the inner thoughts and behaviours of the individual, painting her as the enemy.[17] This practice of evaluating an individual’s inner condition based on their outer appearance stemmed from a long tradition of physiognomy, ‘the study of the features of the face, or of the form of the body generally, as being supposedly indicative of character; the art of judging character from such study’.[18] This becomes more tangible if we examine some witches portrayed in English witchcraft pamphlets. In 1645 Elizabeth Clarke was depicted as a one-legged elderly widow and Joan Flower, in 1619, was portrayed as an old spinster, partially disabled and ‘full of wrath’.[19] A Northampton witch was labelled ‘monstrous and hideous’ in her appearance and, likewise, in 1613, Elizabeth Device was described as an ‘odious witch…her left eye, standing lower than the other…so strangely deformed’, who outrageously cursed ‘according to her accustomed manner’.
Thomas Potts commented that for women with these attributes, their fates were often sealed in court, for ‘the wrinkles of an old wives face is good evidence to a jurie against a witch’.[20] The descriptions match the visual depictions of the alleged witches, and this may have had a basis in reality, affecting the lived experience of the women: Elizabeth Clarke’s appearance and lameness were read as symptomatic of her dealings with devils and witchcraft, while Joan Flower’s and Elizabeth Device’s aesthetics and demeanour signified the sinfulness of their souls (Fig. 7). Their exterior appearance could serve as corroborative evidence of their sins and of a demonic pact with the Devil. Indeed, Egeon Askew asked in 1605, if their ‘outward face is so deformed…How much more within the breast lies there a more terrible countenance, a more cruell aspect, a more ugly spirit, and a more deformed face?’.[21] The connection between the aesthetics of a person and their mental or spiritual condition was not idiosyncratic in early modern England, but was an element of popular culture which upheld the stereotypical witch-figure as a conceptually potent enemy.

Figure 7: Anon., Wonderful Discoverie of the Witchcrafts of Margaret and Phillip Flower (London, 1619). Public domain.

Additional visual indications of witches’ otherness in woodcuts were their close relationships with their familiars (Fig. 8). These were personal demons who lived with the witch, enacted her harmful magic, and were commonly depicted as deformed. References to these creatures were prevalent in English witchcraft literature. Taking Clarke again as an example, she confessed to having a sexual relationship with the Devil and to having familiars, which assumed the role of surrogate children. Indeed, some alleged witches specifically called their familiars their children and they took child-like forms: Elizabeth Hubbard said ‘three things came to her in the likeness of Children, which asked her whispering to deny God, Christ, and all his workes’, and Alice Wright confessed to having two familiars in the shape of boys, one of which ‘spoke to her with a great whorce voyce, as if he had been griev’d’.[22] It was believed that familiars suckled blood from supernumerary teats on the witch’s body (resembling a nipple, mole, pimple, wart or keloid) in order to renew the diabolic pact, the body thus marred by a demonic protuberance. Charlotte-Rose Millar has argued that these women conceptualised familiars as surrogate children because they wanted dependent children yet could not have any.[23] As a result, feeding demonic child-like familiars blood, rather than milk, styled the witch as an anti-mother: the dynamic was an inversion of breast feeding and a parody of English society’s ideal of the ‘good mother’ figure – a pious woman who was a good wife, mother and manager of her nuclear family within a patriarchy-based household.[24] The witch-figure symbolised the harmful, selfish anti-mother in league with the Devil, a neighbour who was sustaining and nurturing demons within the household and local parish – a dangerous enemy within.[25]

Another conceptual layer within the demonic witch-familiar dynamic was the deformity of the animal familiar, akin to the ugliness and crookedness of the stereotypical witch, both aesthetics visually signifying evil. It was posited that demonic spirits could not mirror God’s perfect creation, hence their deformity, as seen through the hybridity of the animal familiars in the woodcuts of printed pamphlets. Despite this, by interacting with and attacking humans, animal familiars were able to subvert God’s natural order. Early modernity inherited the medieval concept of the Great Chain of Being, which stated that God’s law, as recorded in Genesis 1.28, gave humans control over the animal kingdom and placed animals on a lower level of creation.[26] By using animal familiars to harm humans, witches helped to directly breach God’s natural order and destabilise society. The dynamic between witches and animal familiars also signified the corruption of the witch’s soul, much like the witch’s body. Familiars fed from witches and were the conduits through which magic was enacted at the witch’s behest: these deformed creatures were therefore an extension of the witch and represented her sinful thoughts and desires to kill and harm.[27] With the stereotypical English witch-figure, her outward appearance corresponded to the twisted interior of her mind, which was mirrored and reflected by the deformed animal familiars enacting the witch’s own thoughts and desires.

The stereotypical depiction of an aged widow or spinster who kept familiar spirits was construed as an anti-mother figure, one who sustained demons with blood outside of wedlock instead of breastfeeding and caring for a child within a nuclear family and patriarchy-based household. A lustful woman living independently was seen as inherently disorderly; moreover, the witch was nourishing demons and causing harm to locals through magic, all while operating outside of male supervision. The witch and her familiars symbolised an inverted family and natural order, and her appearance confirmed the demonic nature of the witch. The idea of witchcraft made the authorities anxious as witches operated outside of the systems which maintained social order, namely patriarchy, the household, marriage, and the Church.[28] This visual construction of the archetypical witch positioned her in opposition to traditional societal norms and ideas of aesthetics, rendering the witch as a localised icon of evil and as a cipher for numerous cultural concerns in the early modern period.

Figure 8: Matthew Hopkins, The discovery of witches (London, 1647). Public domain.



Throughout the early modern period, European visual cultures echoed contemporary concerns about witches and devils, whether in fine art or print. Witches’ relationship with the demonic, their harmful magic, and sexuality endangered established social norms. Witches’ actions and aesthetics portrayed in the selected European visuals construed them as an inversion of conventional social, moral, gender and natural order, and as a dangerous enemy threatening society. Visual constructions of witch- and devil-figures were fluid and reflected contemporary cultural concerns, and highlighted the witch’s ability to subvert and challenge various aspects of order. Above all, the threat demonic forces posed to Christendom was emphasised, witches being visually portrayed as a dangerous enemy within and as the demonic other.




Primary Sources

Anon., A Detection of Damnable Driftes (London, 1579)

Anon., A Most Certain, Strange, and True Discovery (London, 1643)

Anon., Apprehension and Confession (London, 1589)

Anon., Damnable Practices (London, 1619)

Anon., Rehearsall Both Straung and True (London, 1579)

Anon., Witches of Northamptonshire (London, 1612) 

Anon., Wonderful Discoverie of the Witchcrafts of Margaret and Phillip Flower (London, 1619)

Askew, E., Brotherly Reconcilement preached in Oxford (London, 1605)

Baldung, H., The Weather Witches (1523), mixed technique on limewood, 65.4 x 46cm

Bernard, R., A Guide to Grand-Jury Men (London, 1627)

Bruegel the Elder, P., St. James and the Magician Hermogenes (1565), engraving, 26.2 x 34 cm

Dürer, A., Four Witches (1497), engraving, 21.6 cm × 15.6 cm 

de Gheyn II, J., Preparation for the Witches’ Sabbath (c.1610), engraving, 43.5 x 65.2 cm

Gouge, W., Of Domesticall Duties (London, 1622)

Hopkins, M., The Discovery of Witches (London, 1647)

James VI and I, Daemonologie (Edinburgh, 1597)

Molitor, U., De Lamiis et Pythonicis Mulieribus (Reutlingen, 1489)

Potts, T., The Wonderful Discovery of Witches (London, 1613)

Stearne, J., A Confirmation and Discovery of Witchcraft (London, 1648)


Secondary Sources

Carr, V., ‘The Witch’s Animal Familiar in Early Modern Southern England’ (PhD diss, University of Bristol, 2017) 

Clark, S., ‘Inversion, Misrule and the Meaning of Witchcraft’, Past & Present, 87, issue 1 (May 1980), pp. 98-127

Cohn, N., Europe’s Inner Demons: The Demonization of Christians in Medieval Christendom (2nd edition, London, 1993)

Crawford, P., ‘Attitudes Towards Menstruation’, Past & Present, 91, no. 1 (1981), pp. 47–73

Crossley, J., Potts’s Discovery of Witches in the County of Lancaster…With an Introduction and Notes by James Crossley, Esq (Manchester, 1845)

Davidson, J., The Witch in Northern European Art, 1470-1750 (Freren, 1987)

Davis, N. Z., Society and Culture in Early Modern France (Stanford, 1975)

Dolan, F., Dangerous Familiars: Representations of Domestic Crime in England, 1500–1700 (London, 1994)

Durrant, J., Witchcraft, Gender and Society in Early Modern Germany (Leiden, 2009)

Eales, J., Women in Early Modern England, 1500–1700 (London, 1998)

Eaton, S., ‘Witchcraft and Deformity in Early Modern English Literature’, The Seventeenth Century, 35, no. 6 (2020), 10.1080/0268117X.2020.1819394

Eaton, S., John Stearne’s Confirmation and Discovery of Witchcraft: Text, Context and Afterlife (Routledge, 2020) 

Fraser, A., The Weaker Vessel: Women’s Lot in Seventeenth-Century England (London, 1985)

Gaskill, M., ‘Witchcraft and Power in Early Modern England: The Case of Margaret Moore’ in, J. Kermode and G. Walker (eds), Women, Crime and the Courts in Early Modern England (London, 1994), pp. 125-45

Gaskill, M., Witchfinders: A Seventeenth-Century English Tragedy (London, 2005)

Gombrich, E., The Story of Art (16th edition, London, 2006)

Houwen, L., ‘Howling Wolves and Other Beasts: Animals and Monstrosity in the Middle Ages’ in, B. Boehrer, M. Hand and B. Massumi (eds), Animals, Animality, and Literature (Cambridge, 2018), pp. 43-56

Hughes, A., ‘Puritanism and gender’ in, J. Coffey and P. Lim (eds), The Cambridge companion to Puritanism (Cambridge, 2008), pp. 294–308

Hults, L., The Witch as Muse: Art, Gender, and Power in Early Modern Europe (Philadelphia, 2005)

Jackson, L., ‘Witches, Wives and Mothers: Witchcraft Persecution and Women’s Confessions in Seventeenth-Century England’, Women’s History Review, 4, no. 1 (1995), pp. 63–84

Koslofsky, C., ‘Knowing Skin in Early Modern Europe, c. 1450-1750’, History Compass, 12, issue 10 (2014), pp. 794-806

Kwan, N., ‘Woodcuts and Witches: Ulrich Molitor’s “De lamiis et pythonicis mulieribus”, 1489–1669’, German History, 30, issue 4 (December 2012), pp. 493-527

Levack, B., The Witch-Hunt in Early Modern Europe (3rd edition, Harlow, 2006)

Mencej, M., Styrian Witches in European Perspective: Ethnographic Fieldwork (London, 2017) 

Millar, C., Witchcraft, the Devil, and Emotions in Early Modern England (London, 2017)

Mitchell, S., Witchcraft and Magic in the Nordic Middle Ages (Philadelphia, 2011) 

Muchembled, R., A History of the Devil from the Middle Ages to the Present (Cornwall, 2003)

Myrone, M., C. Frayling and M. Warner (eds), Gothic Nightmares: Fuseli, Blake and the Romantic Imagination (London, 2006)

Petherbridge, D., Witches & Wicked Bodies (Edinburgh, 2018) 

Purkiss, D., The Witch in History: Early Modern and Twentieth-Century Representations (London, 1996, reprinted 2005)

Roper, L., Oedipus and the Devil: Witchcraft, Sexuality and Religion in Early Modern Europe (London, 1994, reprinted 2005)

Rosen, B., Witchcraft in England, 1558–1618 (Amherst, 1991)

Russell, J., Witchcraft in the Middle Ages (London, 1972)

Salisbury, J., The Beast Within: Animals in the Middle Ages (London, 1994) 

Salmon, M., ‘The Cultural Significance of Breastfeeding and Infant Care in Early Modern England and America’, Journal of Social History, 28, no. 2 (1994), pp. 247-69.

Saunders, C., Magic and the Supernatural in Medieval English Romances (Cambridge, 2010)

Sharpe, J., Instruments of Darkness: Witchcraft in Early Modern England (Philadelphia, 1997)

Sullivan, M., ‘The Witches of Dürer and Hans Baldung Grien’, Renaissance Quarterly, 53, no. 2 (2000), pp. 333-401

Swan, C., ‘The “Preparation for the Sabbath” by Jacques De Gheyn II: The Issue of Inversion’, Print Quarterly, 16, no. 4 (1999), pp. 327–339

Thomas, K., Man and the Natural World: Changing Attitudes in England, 1550-1800 (London, 1984)

Wilby, E., ‘The Witch’s Familiar and the Fairy in Early Modern England and Scotland’, Folklore, 111, no. 2, (2000), pp. 283–305

Willis, D., Malevolent Nurture: Witch-Hunting and Maternal Power in Early Modern England (Ithaca, 1995)

Zika, C., The Appearance of Witchcraft: Print and Visual Culture in Sixteenth-Century Europe (London, 2007)



[1] This article is based on a paper given at the conference Enemies in the Early Modern World 1453-1789: Conflict, Culture and Control, University of Edinburgh, March 2021.

[2] B. Levack, The Witch-Hunt in Early Modern Europe (3rd edn, Harlow, 2006), pp. 8-10; J. Sharpe, Instruments of Darkness: Witchcraft in Early Modern England (Philadelphia, 1997).

[3] C. Zika, The Appearance of Witchcraft: Print and Visual Culture in Sixteenth-Century Europe (London, 2007), pp. 42-51, 242; National Museum of Denmark, Copenhagen, “Vejlby Kirke, Risskov, Århus Amt” (1976), 1466, 1490, accessed April 10, 2021; S. Mitchell, Witchcraft and Magic in the Nordic Middle Ages (Philadelphia, 2011), pp. 138-40, 182-3. Remarkably, Mitchell notes that there are approximately sixty churches with extant murals depicting the dairy stealing witch in northern Europe: forty in Sweden, four in Finland, sixteen in Denmark and three in northern Germany (p. 140).

[4] Levack, The Witch-Hunt, pp. 30-73; N. Cohn, Europe’s Inner Demons: the Demonization of Christians in Medieval Christendom (2nd edn, London, 1993), pp. 17-34; R. Muchembled, A History of the Devil From the Middle Ages to the Present (Cornwall, 2003), pp. 9-34; also see, C. Saunders, Magic and the Supernatural in Medieval English Romances (Cambridge, 2010), pp. 59-86.

[5] Muchembled, A History of the Devil, pp. 9-34, 108-11; V. Carr, ‘The Witch’s Animal Familiar in Early Modern Southern England’ (PhD diss, University of Bristol, 2017), p. 79; J. Russell, Witchcraft in the Middle Ages (London,1972), pp. 187-8.

[6] D. Petherbridge, Witches & Wicked Bodies (Edinburgh, 2018), p. 43; Zika, The Appearance of Witchcraft, pp. 162-73.

[7] Petherbridge, Witches & Wicked Bodies, pp. 58-9; C. Swan, ‘The “Preparation for the Sabbath” by Jacques De Gheyn II: The Issue of Inversion’, Print Quarterly, 16, no. 4 (1999), pp. 327-339. Linda Hults noted that de Gheyn was a Dutch ‘scientist’ and artist, and that his engravings rendered witchcraft as an ‘inversion of true science’; L. Hults, The Witch as Muse: Art, Gender, and Power in Early Modern Europe (Philadelphia, 2005), pp. 160-3; Davidson argues that de Gheyn was obviously very familiar with the literature of witchcraft, especially Reginald Scot’s publication, but that his personal beliefs about the topic remain obscure; J. Davidson, The Witch in Northern European Art, 1470-1750 (Freren, 1987), pp. 57-64.

[8] Petherbridge, Witches & Wicked Bodies, p.108.

[9] For witchcraft and the domestic sphere see; L. Roper, Oedipus and the Devil: Witchcraft, Sexuality and Religion in Early Modern Europe (London, 1994, reprinted 2005), pp. 200-27; D. Willis, Malevolent Nurture: Witch-Hunting and Maternal Power in Early Modern England (Ithaca, 1995); J. Durrant, Witchcraft, Gender and Society in Early Modern Germany (Leiden, 2009), pp. 197-8, 251-4; D. Purkiss, The Witch in History: Early Modern and Twentieth-Century Representations (London, 1996, reprinted 2005); F. Dolan, Dangerous Familiars: Representations of Domestic Crime in England, 1500–1700 (London, 1994), pp. 169-236.

[10] Davidson, The Witch, pp. 16-7; Zika, The Appearance of Witchcraft, pp. 17-27.

[11] J. Eales, Women in Early Modern England, 1500–1700 (London, 1998), p. 4; N. Z. Davis, Society and Culture in Early Modern France (Stanford, 1975), pp. 126–8; A. Fraser, The Weaker Vessel: Women’s Lot in Seventeenth-Century England (London, 1985), pp. 1–6; J. Stearne, A Confirmation and Discovery of Witchcraft (London, 1648), p. 11; R. Bernard, A Guide to Grand-Jury Men (London, 1627), pp. 87–90; James VI and I, Daemonologie (Edinburgh, 1597), p. 44; W. Gouge, Of Domesticall Duties (London, 1622).

[12] Hults, The Witch, pp. 98-9; Zika, The Appearance of Witchcraft, pp. 84-5. In learned circles it was thought that witches could not influence weather themselves, but only through the aid of a demon and God’s permission – the demon in the flask reflects this idea; Davidson, The Witch, pp. 25-6.

[13] Zika, The Appearance of Witchcraft, pp. 84-5.

[14] Petherbridge, Witches & Wicked Bodies, p. 22; Zika, The Appearance of Witchcraft, p. 87; Davidson, The Witch, p. 18; Hults, The Witch, pp. 64-73. For an alternative reading see, M. Sullivan, ‘The Witches of Dürer and Hans Baldung Grien’, Renaissance Quarterly, 53, no. 2 (2000), pp. 333-401.

[15] Davidson, The Witch, pp. 20-6; Petherbridge, Witches & Wicked Bodies, pp. 30, 44; Hults, The Witch, p. 85.

[16] N. Kwan, ‘Woodcuts and Witches: Ulrich Molitor’s “De lamiis et pythonicis mulieribus”, 1489–1669’, German History, 30, issue 4 (Dec. 2012), pp. 493-527; Davidson, The Witch, pp. 14-9; Petherbridge, Witches & Wicked Bodies, pp. 22, 42; S. Clark, ‘Inversion, Misrule and The Meaning of Witchcraft’, Past & Present, 87, issue 1 (May 1980), pp. 98-127.

[17] M. Mencej, Styrian Witches in European Perspective: Ethnographic Fieldwork (London, 2017), pp. 318-22; S. Eaton, ‘Witchcraft and Deformity in Early Modern English Literature’, The Seventeenth Century, 35, no. 6 (2020), 10.1080/0268117X.2020.1819394, pp. 819-20.

[18] Definition from, Oxford English Dictionary.

[19] M. Gaskill, Witchfinders: A Seventeenth-Century English Tragedy (London, 2005), pp. 3, 41-2; S. Eaton, John Stearne’s Confirmation and Discovery of Witchcraft: Text, Context and Afterlife (Routledge, 2020), Chap. 3; M. Hopkins, The Discovery of Witches (London, 1647), frontispiece; Anon., Damnable Practices (London, 1619); Anon., Wonderful Discoverie of the Witchcrafts of Margaret and Phillip Flower (London, 1619). For additional examples see: Anon., Rehearsall Both Straung and True (London, 1579); Anon., A Detection of Damnable Driftes (London, 1579); Anon., Apprehension and Confession (London, 1589); Anon., A Most Certain, Strange, and True Discovery (London, 1643).

[20] T. Potts, The Wonderful Discovery of Witches (London, 1613), sig. G, M2; reprinted with notes in, J. Crossley, Potts’s Discovery of Witches in the County of Lancaster…With an Introduction and Notes by James Crossley, Esq (Manchester, 1845); Anon., Witches of Northamptonshire (London, 1612); the pamphlet’s woodcut is subtly rendered so that it appears as if the witch atop the hog has a cloven foot, thus hinting at her demonic nature.

[21] E. Askew, Brotherly reconcilement preached in Oxford (London, 1605), p. 124.

[22] Stearne, A Confirmation, pp. 26-7.

[23] C. Millar, Witchcraft, the Devil, and Emotions in Early Modern England (London, 2017), pp. 119-22; also see M. Gaskill, ‘Witchcraft and Power in Early Modern England: The Case of Margaret Moore’ in, J. Kermode and G. Walker (eds), Women, Crime and the Courts in Early Modern England (London, 1994), pp. 138-41; C. Koslofsky, ‘Knowing Skin in Early Modern Europe, c. 1450-1750’, History Compass, 12, issue 10 (2014), pp. 794-806.

[24] Willis, Malevolent Nurture; L. Jackson, ‘Witches, Wives and Mothers: Witchcraft Persecution and Women’s Confessions in Seventeenth-Century England’, Women’s History Review, 4, no. 1 (1995), pp. 63–84; Purkiss, The Witch in History, pp. 102–5, 130-4; A. Hughes, ‘Puritanism and gender’ in, Coffey and Lim (eds), The Cambridge Companion to Puritanism, pp. 296–7.

[25] Willis, Malevolent Nurture; L. Jackson, ‘Witches, Wives and Mothers’, pp. 63–84; Purkiss, The Witch in History, pp. 102–5, 130-4; Roper, Oedipus and the Devil, pp. 20–6; M. Salmon, ‘The Cultural Significance of Breastfeeding and Infant Care in Early Modern England and America’, Journal of Social History, 28, no. 2 (1994), pp. 251–2; P. Crawford, ‘Attitudes Towards Menstruation’, Past & Present, 91, no. 1 (1981), pp. 47–73, especially p. 52.

[26] L. Houwen, ‘Howling Wolves and Other Beasts: Animals and Monstrosity in the Middle Ages’ in, B. Boehrer, M. Hand and B. Massumi (eds), Animals, Animality, and Literature (Cambridge, 2018), p. 43; J. Salisbury, The Beast Within: Animals in the Middle Ages (London, 1994), pp. 77–101, 146–66; K. Thomas, Man and the Natural World: Changing Attitudes in England, 1550-1800 (London, 1984), p. 41.

[27] E. Wilby, ‘The Witch’s Familiar and the Fairy in Early Modern England and Scotland’, Folklore, 111, no. 2, (Oct., 2000), pp. 283–305; Stearne, A Confirmation, especially pp. 16–33; Carr, ‘The Witch’s Animal Familiar’, pp. 34–9, 74, 79, 101; Millar, Witchcraft, the Devil and Emotions, pp. 81-3.

[28] B. Rosen, Witchcraft in England, 1558–1618 (Amherst, 1991), p. 32; Willis, Malevolent Nurture, p. 244; Roper, Oedipus and the Devil, pp. 200-27; Thomas, Man and the Natural World, pp. 38-9; Millar, Witchcraft, the Devil, and Emotions, pp. 130-2.






Steven Fielding, Bill Schwarz and Richard Toye, The Churchill Myths (Oxford University Press, 2020).



This article reviews The Churchill Myths, co-authored by Steven Fielding, Professor of Political History at the University of Nottingham, Bill Schwarz, Professor of English at Queen Mary University of London, and Richard Toye, Professor of History at the University of Exeter. The book follows the trajectory of Winston Churchill’s uses in the popular memory of the post-war period, suggesting that the legends of 1940 have remained a central element throughout, but also tracking the changing nature of elements around these stories, such as a greater attention to his personal character. Although the authors make a convincing case in many respects, the book leaves some significant aspects of competing ‘Churchill myths’, and of change over time, underexplored.

Key words: American politics, Brexit, British politics, Conservative Party, Cold War, film, India, memory, Second World War, Wales, Winston Churchill.

Biography: Alex Riggs is a University of Nottingham PhD History student, funded by Midlands4Cities. His research focuses on the 1970s and 1980s American left, especially their efforts to forge coalitions through electoral and grassroots politics.

A spectre is haunting Britain. The spectre of Winston Churchill. That’s the argument of Steven Fielding, Bill Schwarz and Richard Toye in The Churchill Myths. Certainly, any follower of British politics will need little convincing of Churchill’s continued relevance. As the authors point out in the book’s key rationale, Churchill has been deployed constantly in the debates around Brexit. Most prominently, Brexiteers have deployed him as an advocate for a ‘global Britain’ that confronted a German-dominated Europe, but Remainers too have cast him as a pro-European concerned with the implications of Britain cutting adrift from the continent.[1] Indeed, the period since the book’s finalisation propelled Churchill even further into the centre of political discourse, with the 75th anniversary of the end of the Second World War in Europe, the COVID-19 pandemic and global Black Lives Matter protests all prompting further discussion of his legacy.[2] Therefore, Fielding, Schwarz and Toye bring the subject necessary scholarly attention.

As they stress, the book’s subject is not Winston Churchill himself. It is about the memory of him:

‘the many, contrary manifestations of the various Churchill legends and the common, invariant properties which make the range of individual stories recognisably instalments in a common process of codification, resulting in Churchill as myth’ (Emphasis original).[3]

This history is uncovered across three chapters. The first, ‘Brexit May 1940’, tackles Churchill’s political uses, highlighting his utility for a range of political actors in various conflicts and crises, from the Cold War to Brexit. The next, ‘The Churchill Syndrome’, goes over similar ground, again exploring the evolution of his British and American political deployments. It also introduces interesting theoretical concepts, particularly ‘reputational entrepreneurship’, whereby self-interested custodians use particular memories to build a sense of solidarity within communities by defining what they are for and against. This is linked with the concept of a ‘resonant core’ to memories that is always present, whose surroundings are constantly changing over time.[4] This represents important overlap with the ideas of the political theorist Michael Freeden, whose morphological approach to ideology similarly uncovers core concepts, but suggests significant fluctuations in the peripheral concepts that surround its centre over time.[5] Finally, ‘Persistence and Change in Churchill’s Mythic Memory’ is largely focused on dramatic depictions, highlighting the battles that Hollywood producers had with Churchill and his aides to get a biographical treatment to the big screen and his numerous depictions ever since.[6]

The authors convincingly show how the Churchill myths’ adaptability has kept them relevant through the post-war era. They highlight how the myth of an unshakable Churchill arose from early Cold War fears about Soviet expansionism. This was embraced by a diverse range of figures including liberal philosopher Isaiah Berlin, who praised his rhetorical ability ‘to give shape and character, colour and direction and coherence to the stream of events’, something essential for democracy to survive in this context.[7] Then as fears of national decline grew in the 1960s, Churchill became a symbol of Britain’s lost might. In the context of fears around the decline of Britain’s Empire, economy and culture, conservative commentators like the historian John Lukacs portrayed his death in 1965 as the simultaneous passing of the hierarchical, traditional society that sustained its great power status.[8] Most recently, The Churchill Factor, written by perhaps Churchill’s most enthusiastic ‘reputational entrepreneur’ Boris Johnson, depicted him as an anti-establishment figure, taking a meek, out-of-touch political elite to task for their belief in the limits of national power and using his rhetorical might to restore national esteem.[9] The authors suggest that in the age of Brexit’s political deadlocks, Johnson’s narrative of a ‘man of destiny’ smashing through national malaise has become dominant over popular views of Britain’s wartime leader.[10]

In making these arguments, the authors deploy an impressive range of material. Given the abundance of Churchill films, this medium is particularly prominent, especially 2018’s The Darkest Hour. These dramatic depictions are effective in evidencing the centrality of 1940 to Churchillian memory, with this a consistent element throughout: even 1972’s Young Winston, based on Churchill’s first autobiography, My Early Life, treats events decades before the war as foreshadowing his future greatness.[11] This medium also functions effectively in showing the changes that have occurred around this myth. Richard Burton’s 1974 portrayal presented a stoic individual, but more recent biopics celebrate his eccentricities, presenting his tempestuousness and bouts of depression as representative of his humanity, in contrast to the distant, stuffy Neville Chamberlain and Lord Halifax.[12] Political opinion is also drawn upon, including interventions from figures as diverse as John F. Kennedy, Margaret Thatcher and Nigel Farage, providing a good range of popular and elite discourses and showing his constant utility in the post-war period.[13]

The Churchill Myths also reveals some of the key silences in these depictions. In pinpointing the sources’ fixation on 1940, the authors highlight a contradiction in conventional attitudes: on the one hand, Churchill is the man of destiny, singlehandedly steering Britain away from defeat in 1940. Yet on the other, he appears to have been incapable of deploying these talents at any other moment, with his decades of ministerial office before and after the war rarely invoked, and his role practically irrelevant once the United States and Soviet Union joined the conflict in 1941.[14] This means that a series of events that challenge his status as a unifying national hero, including, but not limited to, his sending of troops to quell striking miners in South Wales as Home Secretary, his staunch support for imperialism, and his 1945 election defeat, are either glossed over or left unexplained.[15] Even when they are mentioned, such as his pro-Empire intervention in the debate on Indian home rule in the 2002 film The Gathering Storm, they represent his honesty compared to the devious Conservative leadership, not his unrelenting imperialism.[16]

However, this point also reveals one of the book’s flaws. Although the authors are correct in highlighting 1940 as the central Churchillian myth, these silences are by no means universal. Directors may not be rushing to dramatise Churchill’s role in the Tonypandy riots, but his actions still have an important impact on memory of him in South Wales.[17] On a global scale too, the 1943 Bengal Famine has shaped Indian memory of Churchill, with the exacerbation of mass starvation by his wartime policies also fuelling a more critical history, and crucially one not centring on 1940.[18] These critiques make passing visits to the narrative, mentioned in criticisms from John McDonnell and Richard Burton, but a more detailed sense of how they fit within an analysis of the Churchill myths, or why they have sprung up in particular contexts, is not included.[19] Had it been, The Churchill Myths would have produced a more comprehensive analysis, one that highlights the plurality of memories implied by its title.

A closer analysis of these silences could have also brought further insights into the moments when Churchill is more obscure in political discourse. For instance, it is mentioned that John Major made little use of Churchill during his premiership and that no Churchill films were made between 1982 and 2002 and again between 2004 and 2017, yet no explanations are offered for why this was the case.[20] A more nuanced analysis of Churchill’s American deployments would have also been possible through this approach. Though the authors correctly highlight his use by all post-war American presidents, a search of Presidential public papers reveals that this has been far from equal over time. For instance, Jimmy Carter invoked Churchill thirteen times, but his successor Ronald Reagan found 125 occasions to quote the former Prime Minister.[21] Through closer examination of these deployments, a clearer understanding of why Churchill was especially useful for certain actors and less so for others could be achieved, with Reagan’s confrontational policy towards the Soviet Union making Churchill’s warnings over appeasement and post-war Soviet ambitions convenient.[22] Similarly, the defection of many Southern Democrats to the Republican Party also meant Churchillian wisdom about the merits of switching parties was applicable.[23]

Patrick Finney’s Remembering the Road to World War Two provides a useful example of this approach. Focusing on the historiography of appeasement, Finney situates historians’ work within the various contexts they were writing in, contrasting the critical outlook of the immediate post-war years of high confidence in Britain’s great power status with the more sympathetic view of the declinist 1960s and ‘70s, where the limits on policymakers’ freedom of action were stressed.[24] Such an analysis brings important insight, and a similar approach more grounded in particular contexts would have given the book a more detailed sense of changes in this mythology over time. Moreover, given that the appeasers are ever present in stories told about Churchill, and that narratives which rehabilitate them implicitly challenge the significance and necessity of Churchill’s intervention, addressing the content of Finney’s study would have added considerable context.

On occasion in The Churchill Myths, such a structure is used to good effect. For instance, it takes Andrew Roberts’ biography of Churchill as an illustration of the context of internal struggles in the 1990s Conservative Party. The authors underline how Roberts highlighted Churchill’s racism not as a criticism of these attitudes, but as a critique of their inconsistency with the overly welcoming immigration policy of his 1950s premiership, a symptom of the Tory ‘wetness’ that was a concern of the party’s right in the context of John Major’s acceptance of European integration.[25] This is a perceptive analysis, and more frequent application would have added much to The Churchill Myths. This could serve as a means to ask some of the overarching questions about modern Britain, to assess whether the persistence of 1940 suggests a stagnant period, or its constant reinterpretation implies a more dynamic era. This would have also provided an opportunity to include the historiographical discussion that the book lacks, and thus put it into dialogue with this scholarship, as well as providing an opportunity to historicise these works, especially their 2000s proliferation.[26]

In summary then, The Churchill Myths provides a timely and readable intervention in contemporary debates about Churchill. In a relatively short book, the authors reveal important historical insights, showing the adaptability of Churchill over time to suit a variety of political purposes, but also the staying power of his heroic leadership in 1940 as the defining Churchill myth. Yet that brevity also leaves the reader wanting more, with competing Churchill myths, and their significance for broader questions in both British history and particular contexts, underexplored. Fielding, Schwarz and Toye demonstrate the variety of issues that can be viewed through the lens of Churchill’s memory but leave historians with plenty of angles still to discover.





Primary Bibliography 

Carter, N., ‘This is the moment to learn the wartime generation’s lesson’, The Times, May 8, 2020, p.5.

Limaye, Y., ‘Churchill’s legacy leaves Indians questioning his hero status’, accessed 01/04/2021.

Walker, P., ‘Boris Johnson says removing statues is to ‘lie about our history’’, accessed 30/03/2021.

‘Advanced Search’,, accessed 06/04/2021. 

‘Advanced Search’, accessed 06/04/2021.

‘Has the town of Tonypandy forgiven Winston Churchill? | ITV News’, accessed 01/04/2021.

‘Remarks at a Campaign Rally for Senator Don Nickles in Norman, Oklahoma’, accessed 06/04/2021.

‘Statement on United States Defence Policy’, accessed 06/04/2021.


Secondary Bibliography

Connelly, M., We Can Take It! Britain and the Memory of the Second World War (Abingdon, 2004). 

Fielding, S., Schwarz, B., Toye, R., The Churchill Myths (Oxford, 2020).

Finney, P., Remembering the Road to World War Two: International History, National Identity, Collective Memory (London, 2011). 

Freeden, M., ‘The Morphological Analysis of Ideology’, in M. Freeden, L. Tower Sargent and M. Stears (eds.), The Oxford Handbook of Political Ideologies (Oxford, 2013), pp.115-137. 

Noakes, L. and Pattinson, J. (eds.), British Cultural Memory and the Second World War (London, 2013). 

Smith, M., Britain and 1940: History, Myth and Popular Memory (London, 2000). 


[1] S. Fielding, B. Schwarz and R. Toye, The Churchill Myths (Oxford, 2020), pp.8-9.

[2] N. Carter, ‘This is the moment to learn the wartime generation’s lesson’, The Times, May 8, 2020, p.5; P. Walker, ‘Boris Johnson says removing statues is to ‘lie about our history’’, accessed 30/03/2021.

[3] Fielding, Schwarz and Toye, The Churchill Myths, p.11.

[4] Fielding, Schwarz and Toye, The Churchill Myths, pp.72-73.

[5] M. Freeden, ‘The Morphological Analysis of Ideology’, in M. Freeden, L. Tower Sargent and M. Stears (eds.), The Oxford Handbook of Political Ideologies (Oxford, 2013), pp.124-25.

[6] Fielding, Schwarz and Toye, The Churchill Myths, pp.143-44.

[7] Fielding, Schwarz and Toye, The Churchill Myths, pp.40-41.

[8] Fielding, Schwarz and Toye, The Churchill Myths, pp.35-36.

[9] Fielding, Schwarz and Toye, The Churchill Myths, p.23.

[10] Fielding, Schwarz and Toye, The Churchill Myths, pp.27-28.

[11] Fielding, Schwarz and Toye, The Churchill Myths, pp.125,138.

[12] Fielding, Schwarz and Toye, The Churchill Myths, pp.139-40.

[13] Fielding, Schwarz and Toye, The Churchill Myths, pp.9-10,79-80,84-85.

[14] Fielding, Schwarz and Toye, The Churchill Myths, pp.111-12.

[15] Fielding, Schwarz and Toye, The Churchill Myths, pp.122-24.

[16] Fielding, Schwarz and Toye, The Churchill Myths, pp.133-34.

[17] ‘Has the town of Tonypandy forgiven Winston Churchill? | ITV News’, accessed 01/04/2021.

[18] Y. Limaye, ‘Churchill’s legacy leaves Indians questioning his hero status’, accessed 01/04/2021.

[19] Fielding, Schwarz and Toye, The Churchill Myths, pp.69-70,108-09.

[20] Fielding, Schwarz and Toye, The Churchill Myths, pp.86,138.

[21] ‘Advanced Search’, accessed 06/04/2021; ‘Advanced Search’, accessed 06/04/2021.

[22] ‘Statement on United States Defence Policy’, accessed 06/04/2021.

[23] ‘Remarks at a Campaign Rally for Senator Don Nickles in Norman, Oklahoma’, accessed 06/04/2021.

[24] P. Finney, Remembering the Road to World War Two: International History, National Identity, Collective Memory (London, 2011), pp.192-94,200-02.

[25] Fielding, Schwarz and Toye, The Churchill Myths, pp.88-89.

[26] M. Connelly, We Can Take It! Britain and the Memory of the Second World War (Abingdon, 2004); M. Smith, Britain and 1940: History, Myth and Popular Memory (London, 2000); L. Noakes and J. Pattinson (eds.), British Cultural Memory and the Second World War (London, 2013).

F. Houghton, The Veterans’ Tale: British Military Memoirs of the Second World War (Cambridge, 2018).

Biography: William Noble is a PhD student at the University of Nottingham, funded by Midlands4Cities. His research examines the relationships between popular discourses of ‘race’ and immigration, and the concept of ‘decline’ in the post-war Midlands.

What can veterans’ memoirs of the Second World War tell us about combatants’ experiences both of conflict and of post-war life? This is the question that Frances Houghton seeks to answer in The Veterans’ Tale: British Military Memoirs of the Second World War. In response, Houghton convincingly demonstrates that veterans’ memoirs are valuable to historians because of their capacity to reveal ex-combatants’ retrospective memory and understanding of battle; yet, on a collective level, they have previously gone unheard within the scholarship on war, memory, and personal narratives.[1] Houghton argues that what attention they have received has come from the discipline of literary criticism, for example in Paul Fussell’s The Great War and Modern Memory and Samuel Hynes’ The Soldiers’ Tale.[2] Given the voluminous literature on the war’s impact on British society, politics and culture – which has taken account of war films, public memorials and commemorations, and many other types of sources besides – in writing the first book-length historical study of Second World War veterans’ memoirs Houghton begins to correct this surprising omission.[3]

Drawing influence from various disciplines, including memory studies, auto/biographical studies, and histories of the emotions, Houghton investigates how veterans reinterpreted their wartime experiences in the post-war years, by examining their relationships to four main themes: landscape; weaponry; the enemy; and comradeship. Houghton’s detailed introduction establishes the book’s theoretical and methodological underpinnings.[4] In Alessandro Portelli’s famous defence of oral history, he wrote that the value of oral testimony lies not in its ‘adherence to fact’, but in ‘its departure from it, as imagination, symbolism, and desire emerge’.[5] Houghton views veterans’ memoirs similarly, arguing that the embellishments, discrepancies, and conflicts in an individual’s memories are what make them such a rich source of evidence both about how war was experienced at the time, and how it is remembered over the subsequent years and decades.[6] In the following eight chapters, which can be roughly divided into three sections, Houghton surveys a wide range of veteran memoirs from the Army, Navy, and RAF, and from the European, North African and Atlantic theatres – comparing and contrasting the experiences of combatants in each.

Chapters 1 and 2 (the first section) survey the provenance of Second World War veteran memoirs, employing the archives of major publishing houses to examine veterans’ motives for, and the process of, writing, publishing and publicly producing their war experiences in book form in post-war Britain. In Chapter 1, ‘Motive and the Veteran-Memoirist’, Houghton demonstrates how, for some veterans, factors such as post-traumatic stress disorders, post-war disillusionment, and an inability to build lasting relationships saw the war elevated to an ‘apex of memory’. Constructing memoirs could enable veterans to process their combat experiences and so better cope with the present. These discussions of veterans’ post-war struggles are rather brief, however, and could form the basis for a fascinating study in their own right, though Houghton’s introduction makes it clear that her focus is specifically on combat experiences.[7] However, memoirs were not only constructed for an ‘Audience of the Self’.[8] Memoirists also wrote for ‘Audiences of the Future’ – for their children, for future generations of servicemen, and/or as a general warning to the public to ‘not let it happen again’.[9] Finally, they wrote for ‘Audiences of Comrades’, both living and dead.[10]

Chapter 2 examines the process of ‘Penning and Publishing the Veterans’ Tale’, with prominent themes including: memoirists’ insistence on the reliability of their memories, despite which many went to great efforts to establish the veracity of their accounts with support from a variety of additional sources (letters, diaries, regimental records, etc.); disputes between authors and publishers, as veterans’ desires for the most authentic representation of their war experiences could clash with the publishers’ commercial incentives; and the trend for memoirists to become more candid in their accounts as censorship was eased (both of information with the 1967 amendment of the Public Records Act, and of language with the reforms of the Obscene Publications Act), and as a more ‘liberal climate’ developed from the 1960s.[11]

Chapters 3-6 (the second section) analyse the ‘narrative content’ of the memoirs, and particularly their literary representations of the front line, focusing respectively on the themes of landscape, weaponry, the enemy, and comradeship.[12] Chapter 3, ‘Landscape, Nature, and Battlefields’, investigates the landscape’s role in shaping experiences of combat, and the meanings veterans projected onto these landscapes, through comparison and contrast of their experiences with the Army in North Africa, in aerial combat, and at sea.[13] In all cases, Houghton finds that landscapes were invested with distinct symbolism which allowed veterans to make sense of their battle spaces.[14] Chapter 4 similarly examines veterans’ relationships with weaponry, covering the Royal Navy in the Battle of the Atlantic, Battle of Britain fighter pilots, and tank crews in the 1944 Normandy Campaign, again comparing and contrasting these varied experiences.[15] While the differences between the latter two are particularly striking, these accounts all reinforced the centrality of the human experience of war, rather than the story sometimes told of the Second World War as one of machines ‘dominating’ to the ‘exclusion of the human combatant’.[16]

Chapter 5, ‘“Distance”, Killing, and the Enemy’, similarly complicates accounts of warfare which suggest that technology depersonalised killing. Houghton finds that technology could not make the dead completely anonymous, though Navy and RAF veteran-memoirists were perhaps better able to employ the ‘grey machinery of murder’ as a psychological barrier to avoid confronting any moral qualms over their actions than those who served in the Army.[17] Some of the most poignant sections of the book are those in this chapter which deal with veterans’ contacts with the enemy; their memoirs suggest that preconceived ideas of German troops as the inhuman ‘Hun’ could not be sustained after their first personal contact with them.[18] It would be interesting to contrast these veterans’ experiences with those who fought against Japan, but Houghton chooses not to consult memoirs by those who fought in Asia, as the deeply racialised conceptions of the Japanese held by the British render veterans’ depictions of combat on that front very different to those who fought in Europe.[19] Chapter 6, ‘Comradeship, Leadership, and Martial Fraternity’, investigates the claim by psychiatrists and others that the ‘small group in combat’ was the main motivation for fighting and was key to preventing psychological breakdown during combat.[20] Houghton finds that for memoirists, the personal relationships within their small group were indeed the ‘ultimate spur’ in battle, whether this small group was a platoon, the company of a ‘little ship’ or submarine, or a seven-man Lancaster bomber crew, though again there were also important differences between the various branches.[21]

Chapters 7 and 8 (the third section) explore how memoirists used their historical records for both private and public reasons, using their memoirs both for self-fashioning and for claiming agency over their wartime experiences.[22] Chapter 7, ‘Selfhood and Coming of Age’, charts how memoirists wrote of their wartime experience as a journey from ‘Youthful Innocence’ to ‘Manhood’. Houghton compares these memoirs to Bildungsroman, interrogating how veteran-memoirists understood and reconstructed their ideas of masculinity, maturity, and selfhood in relation to their war experiences.[23] The chapter investigates memoirists’ motivations for joining the conflict, finding that despite being aware of the horrors of the First World War many memoirists, influenced by the popular culture of the period, saw warfare as an enticing ‘adventure’, and the soldier as a ‘quintessential figure of heroic masculinity’.[24] However, the chapter also shows that memoirists were quickly disabused of these naïve beliefs.[25] Memoirists still saw the war as crucial to their ‘growing up’, but the model of adult masculinity they subscribed to was less the ‘soldier-hero’ identified by Graham Dawson than the ‘understated, good-humoured, kindly, and self-deprecating courage of the “little man”’ identified by Sonya Rose.[26] Crucially, this latter model of masculinity, unlike the ‘soldier-hero’ model, could include civilians on the home front, offering reassurance that veterans would be able to readapt to a civilian masculinity.[27] Whilst Houghton in her introduction acknowledges the impossibility of ignoring women’s impact on masculine identities and experiences—even in such seemingly closed-off, all-male institutions as the British military—she nonetheless does not investigate how women were represented in veterans’ memoirs, as The Veterans’ Tale is essentially concerned with memoirists’ representations of frontline combat, from which women were excluded.[28]

While all eight chapters are fascinating, it is in Chapter 8, ‘History, Cultural Memory, and the Veteran-Memoirist’, that the wider importance of military memoirs to the historical and cultural memory of the war becomes clear. Here, Houghton examines how three memoirists – Alex Bowlby, Miles Tripp, and Jack Broome – used their memoirs to challenge what were in their opinion unsatisfactory official, academic, and cultural representations of the war. For example, Miles Tripp’s 1969 memoir of his service as a bomb-aimer, The Eighth Passenger, was intended to rehabilitate RAF Bomber Command’s post-war reputation, which many former aircrew felt presented them as war criminals.[29] However, it was also used by historians in debates on the morality of Bomber Command’s actions, particularly the February 1945 Dresden raid, with Tripp’s claim that he attempted to drop his bombs outside the city interpreted by many as an implicit attack on Bomber Command. The chapter examines Tripp’s vehement denials of such claims, and his opposition to his memoir being used by disgraced historian David Irving (most notorious as a Holocaust denier) to support his account of the raid, which was later discredited as he vastly inflated the number of deaths by over 100,000 as part of his attempt to castigate the RAF and establish a moral equivalence between the Nazi regime’s crimes and the killing of German civilians.[30]

Finally, the short conclusion summarises the book’s main themes, namely memoirists’ desires to depict their personal reactions to war, and the human factors comprising the experience of battle, in contrast to military historians’ more grand and dehumanised narratives. As Houghton puts it, war memoirs ‘capture the man inside the uniform, his own understanding of his physical and psychological performance in the field, and his emotional responses’ to combat. Significantly, they offer a window into the ‘experience of battle as it endures in a veteran’s mind throughout his lifetime’. Memoirists wrote for public audiences, to educate, entertain, and warn them of the folly of war, and in an attempt to claim ownership of scholarly, official, and cultural remembrance of the conflict, but they also wrote for themselves, to ‘reconstruct shattered notions of masculine self into a coherent and meaningful image’ in the post-war years.[31]

This is an impressive and important book, but there are some small points of criticism that can be made in addition to those already raised. Houghton chooses to follow a thematic structure in her chapters, in turn dividing each chapter into an examination of the distinct experiences of combatants in each branch of the armed forces. This can make it somewhat difficult to trace the experiences of any particular memoirist, or of any branch of the armed forces, though the index is helpful in this regard. Moreover, while Houghton is up-front about why some memoirs were excluded from the scope of her study, the exclusion of accounts written during the war itself is perhaps unfortunate, as contrasting accounts written then with those written in the post-war years and decades might have helped to elucidate her arguments about memoirists’ changing perspectives over time.[32]

No one study can be expected to cover all aspects of this vast and fascinating subject, however, and it is clear that an enormous amount of research went into this book. Houghton’s bibliography lists over ninety individual memoirists, some with multiple titles and/or editions to their name, such that the total number of memoirs consulted is over a hundred.[33] This is in addition to a huge range of other primary and secondary sources, and every point made is well substantiated with multiple examples from veterans’ memoirs. The Veterans’ Tale is successful in substantiating its central claim for the importance of veterans’ memoirs as historical sources, and in providing fresh insights into veterans’ experiences of combat as they were lived and remembered throughout veterans’ lifetimes.[34] It is now left for other historians to build on Houghton’s work in furthering our understanding of British cultural memories of the Second World War. More thorough examination of the post-war lives of ‘veteran-memoirists’, and studies of those types of memoirs Houghton excludes from her account, would be particularly fascinating.



Bourke, J., An Intimate History of Killing: Face-to-Face Killing in Twentieth-Century Warfare (London, 1999).

Connelly, M., We Can Take It! Britain and the Memory of the Second World War (Harlow, 2004).

Dawson, G., Soldier Heroes: British Adventure, Empire, and the Imagining of Masculinities (London, 1994).

Fussell, P., The Great War and Modern Memory (London, 1975).

Houghton, F., The Veterans’ Tale: British Military Memoirs of the Second World War (Cambridge, 2018).

Hynes, S., The Soldiers’ Tale: Bearing Witness to Modern War (London, 1998).

Portelli, A., ‘What Makes Oral History Different’, in R. Perks and A. Thomson (eds.), The Oral History Reader (London, 1998), pp. 63-74.

Rose, S. O., ‘Temperate Heroes: Concepts of Masculinity in Second World War Britain’, in S. Dudink, K. Hagemann and J. Tosh (eds.), Masculinities in Politics and War: Gendering Modern History (Manchester, 2004), pp. 177-95.

Wessely, S., ‘Twentieth-Century Theories on Combat Motivation and Breakdown’, Journal of Contemporary History, 41/2 (2006), pp. 269-86.



[1] F. Houghton, The Veterans’ Tale: British Military Memoirs of the Second World War (Cambridge, 2018), pp. 4-5.

[2] P. Fussell, The Great War and Modern Memory (London, 1975); S. Hynes, The Soldiers’ Tale: Bearing Witness to Modern War (London, 1998).

[3] See, for one example among many, M. Connelly, We Can Take It! Britain and the Memory of the Second World War (Harlow, 2004).

[4] Houghton, The Veterans’ Tale, pp. 4-6.

[5] A. Portelli, ‘What Makes Oral History Different’, in R. Perks and A. Thomson (eds.), The Oral History Reader (London, 1998), pp. 68-9.

[6] Houghton, The Veterans’ Tale, p. 2.

[7] Houghton, The Veterans’ Tale, pp. 22-6.

[8] Houghton, The Veterans’ Tale, pp. 29-39.

[9] Houghton, The Veterans’ Tale, pp. 40-45.

[10] Houghton, The Veterans’ Tale, pp. 46-52.

[11] Houghton, The Veterans’ Tale, pp. 59-65.

[12] Houghton, The Veterans’ Tale, pp. 25-6.

[13] Houghton, The Veterans’ Tale, pp. 72-82, 83-92, and 93-101 respectively.

[14] Houghton, The Veterans’ Tale, pp. 101-2.

[15] Houghton, The Veterans’ Tale, pp. 105-12, 112-21, and 121-35 respectively.

[16] Houghton, The Veterans’ Tale, pp. 135-6.

[17] J. Bourke, An Intimate History of Killing: Face-to-Face Killing in Twentieth-Century Warfare (London, 1999), p. 6.

[18] Houghton, The Veterans’ Tale, pp. 161, 166-7.

[19] Houghton, The Veterans’ Tale, p. 22.

[20] S. Wessely, ‘Twentieth-Century Theories on Combat Motivation and Breakdown’, Journal of Contemporary History, 41/2 (2006), p. 269.

[21] Houghton, The Veterans’ Tale, p. 170.

[22] Houghton, The Veterans’ Tale, pp. 25-6.

[23] Houghton, The Veterans’ Tale, p. 207.

[24] Houghton, The Veterans’ Tale, pp. 208-21.

[25] Houghton, The Veterans’ Tale, pp. 221-42.

[26] G. Dawson, Soldier Heroes: British Adventure, Empire, and the Imagining of Masculinities (London, 1994); S. O. Rose, ‘Temperate Heroes: Concepts of Masculinity in Second World War Britain’, in S. Dudink, K. Hagemann and J. Tosh (eds.), Masculinities in Politics and War: Gendering Modern History (Manchester, 2004), pp. 177-95.

[27] Houghton, The Veterans’ Tale, pp. 242-3.

[28] Houghton, The Veterans’ Tale, pp. 22-3.

[29] The Lancaster bombers which Tripp served in had a seven-man crew; the ‘eighth passenger’ is an allusion to fear, specifically the way Tripp saw it as invariably accompanying the bomber crews on every mission.

[30] Houghton, The Veterans’ Tale, pp. 254-62.

[31] Houghton, The Veterans’ Tale, pp. 272-8.

[32] Houghton, The Veterans’ Tale, pp. 22-5.

[33] Houghton, The Veterans’ Tale, pp. 279-90.

[34] Houghton, The Veterans’ Tale, p. 2.

Valkyrie: The Women of the Viking World, Jóhanna Katrín Friðriksdóttir (London, 2020)

Biography: Sian Webb recently submitted her PhD thesis, ‘A Land of Five Languages: Material Culture, Communities and Identity in Northumbria, 600-867’, that was joint supervised by Chris Loveluck in Archaeology and Peter Darby in History.  She focuses on early medieval cultural history, material culture and medieval studies.

The year is 1014 and the Battle of Clontarf rages in Dublin. It is a setting which immediately sparks images of men fighting and dying for their male lords, kings, plunder and the glory of battle. As this happens, a man in Caithness, far away on the north-eastern coast of Scotland, spies twelve figures entering a weaving shed. These women are the Valkyries. As the Irish and Norse fight in Dublin, they weave a tapestry with a thread of human entrails and loom-weights of skulls. This is how Friðriksdóttir opens her exploration of women living in the Viking world. Whilst warfare, and medieval battle in particular, is often envisioned as a strictly masculine affair in modern popular consciousness, this vignette from the thirteenth-century Njáls saga reintroduces a feminine aspect.

Valkyrie: The Women of the Viking World is a rich cultural history of the lives of Viking Age women, constituting a worthy addition to the existing scholarship on this topic. In this endeavour, the setting is apt. Women, from goddesses such as Freyja and Valkyries like Sigrún to extraordinary (albeit human) women recorded in Icelandic sagas, such as the Viking Guðriðr Þorbjarnardóttir, are shown to be deeply complex individuals. All display a rich mixture of virtues and vices, with the capacity for the full range of emotions.[1] They could be honourable, brave and wise whilst also able to make mistakes and be ruthless in their anger. These women appear neither as static images nor as moral lessons set in black and white. As Friðriksdóttir brings together the threads of her study it is evident, as she states in her introduction, that these are reflections of real people, emerging from a society in which women were essential for their work and the wisdom they possessed.

Friðriksdóttir’s monograph emerges from a shift in scholarly appreciation of the complexity of the culture and identities of the people, both men and women, of the Viking Age (ca. 790-1066 CE). Her study is largely confined to Viking settlements in Northern Europe and the British Isles, though she does at times bring in evidence from the Baltic region. In this monograph, she provides an overview of the evidence at hand, bringing together the shared attributes shaping the lives and identities of women throughout the regions of Viking settlement. This widening appreciation of Viking culture and society began in the 1970s with the seminal work The Viking Achievement: The Society and Culture of Early Medieval Scandinavia by Peter Foote and David Wilson, in which the authors devoted very little space to the discussion of raiding and violence, delving instead into the rich culture and artistically constructive activities of the period.[2] Two decades later, Judith Jesch opened the discussion of the place of women in Viking society.[3] Her monograph brought together a wide variety of material sources and toponymy, introducing gender studies into a scholarly tradition that had long ignored the presence and involvement of women in all aspects of life. The study of Viking women and their potential for active involvement in raiding and warfare culture was reinvigorated in 2017 by Charlotte Hedenstierna-Jonson et al. with their paper ‘A Female Viking Warrior Confirmed by Genomics’. The team used genome-wide sequence data to confirm the biological sex of a Viking individual in a well-furnished grave, disproving previous assumptions that the individual must have been male based on the weaponry and material culture with which she was buried. The team concluded by cautioning against basing our understanding of the past, and of the potential of the people who lived then, on generalised assumptions of cultural norms and stereotypes.

As Friðriksdóttir states, the term ‘Viking’ does not refer only to those who were chiefly interested in violence, pillaging and raiding, though such encounters did occur, particularly in the earlier period of their activity. ‘Viking identities’, according to her, also encompass the mobility of the people in question. These were groups of people actively involved in trade routes that stretched from Asia to the western fringes of Europe, in travel, and in the settlement of short- and long-term colonies.[4] In these endeavours, women were more than able to take an important role, whether or not they were actively involved in raiding and warfare. As seen in the opening vignette, women were inextricably linked with spinning and weaving. Textile production opened a considerable path to social mobility, as Viking mobility relied on ships and their sails. These objects, along with the production of clothing suitable for long sea voyages in arctic seas, required years of effort that would largely be the work of professional women.[5] This mobility and the opportunity for women to engage in high-level political activity is reflected both in sagas and in the presence of high-status female graves at the sites of important Viking Age power centres.[6]

In order to provide an accurate and fully developed image of Viking women, Friðriksdóttir brings together a wide variety of sources both material and textual, including runic inscriptions, the text and imagery carved on runestones, toponymy, picture stones, Viking Age art and archaeological sources, alongside textual evidence provided by the sagas and by annals and other historical texts produced in Ireland and elsewhere in Europe. This approach requires a synthesis of materials from a wide variety of academic fields spanning Viking and Medieval studies, drawing on the growing appreciation of material culture in history that began in the 1990s with scholars such as Judith Jesch and continued to gain strength through the early 2000s, with works such as the collected volumes Land, Sea and Home: Proceedings of a Conference on Viking Period Settlement (Cardiff, July 2001) and Cædmon’s Hymn and Material Culture in the World of Bede: Six Essays.[7] This approach continues to prove fruitful, attracting a growing number of scholars focusing on a range of topics on which textual sources are less forthcoming.[8]

Each individual type of source, from material culture and archaeology to textual vignettes, offers its own window into the study of the cultures and communities of the past. Without a wide source base, information from different fields of knowledge loses some key contextual elements, skewing our understanding of past societies and the identities that grew within them. By bringing a wider base of sources together to form an integrated approach, it is possible to work towards an understanding of how all pieces fit together. Insights gleaned from different sources can provide context and balance out the weaknesses and biases present in each individual type of source. This balance of sources is adeptly managed by Friðriksdóttir, leading to the wonderfully complex and richly layered depiction of the lives and opportunities of Viking women shown in Valkryie.

Friðriksdóttir draws the reader in with the textual vignettes provided by the sagas. These texts draw readers into a portrayal of the world as seen by the saga authors and their audience, bringing a vibrancy of colour and life to the discussion of the past. They are balanced by the evidence provided by the wide variety of complementary sources discussed above. A wide range of women from the sagas help to provide evidence for the shape of life for women of varied backgrounds, from wealthy and influential women who controlled the lives of their families to young women trapped in awful cycles of poverty and abuse. The depictions of Valkyries and deities in the sagas provide further insights into cultural understandings of the nature of women and their ability to be brave and strong or cowardly and deceitful. Whilst Friðriksdóttir shows a careful and studied handling of the sagas, the book could benefit from a deeper discussion, for the benefit of the reader, of the difficulties presented by these sources. These include the saga authors’ mixing of Christian and traditional belief systems, and how this may colour the text. Another difficulty that could have been discussed is the chronological diversity of the sources. The saga authors often wrote about events that happened centuries before their own time. Ideologies and cultures evolved and adapted to the new problems and opportunities presented by different times, introducing further potential for incidental misrepresentation of a culture that differed from the saga author’s own contemporary reality.

The book is structured around a life cycle, delicately following the trails of women’s lives in the lands touched by Viking culture from birth through to death. Along the way, Friðriksdóttir examines how age, marital status and social rank affected women’s identities through material culture, burial and osteological archaeology, and sagas. She considers infancy and childhood for female offspring, discussing infanticide and the argument that female infants may have been more likely to be left to die from exposure in times of hardship, offering a balanced view of all sides of this debate. In this, she sets runic evidence that suggests a population balance skewed in favour of men alongside burial evidence and law codes that indicate the depth of love that Viking families could show for their daughters and the protections granted to children and pregnant women regardless of the infant’s sex.[9]

Chapter 2 focuses on the social world of teenage girls, discussing the cultural beliefs and notions of family honour that shaped their lives and potentially restrained their opportunities. Yet this period of youth and young adulthood also brought with it the potential for women to take a role in craft and trade work, to act as poets (skáldmær), or to take a more physically active role in violence and warfare.[10] The following chapter turns to adulthood, the lives and status of married women, and female agency and divorce. In it, the author discusses personal adornment, women’s involvement in craft-working and trade, and opportunities for travel and leadership roles. Women were valued for their intelligence and abilities, personal attributes that could prove to be attractive qualities in a partner. Even after marriage, however, women were able to initiate a divorce if marital relations broke down.

Much of the adult life of fertile women would be spent in a cycle of pregnancy and the nursing of young children, along with miscarriages, healing from childbirth and the potential of death associated with childbirth. These concerns and the importance of motherhood form the core of Chapter 4. The chapter brings these topics to life through evidence from sagas showing the bravery of pregnant women, as well as mothers who could be wise leaders of their communities, loving supporters of their families and ruthless in their attempts to secure the future wealth and social rank of their sons and daughters.[11]

If women survived their childbearing years, they were statistically more likely to outlive their partners. Chapter 5 focuses on this shift in the life cycle with an examination of widowhood in Viking culture. In this endeavour, Friðriksdóttir looks at the actions of widows in sagas alongside the evidence of influential women available from runic inscriptions and grave goods.  Widows were able to consolidate considerable wealth as businesswomen and remain active in their communities as commissioners for the construction and upkeep of bridges and roads.[12] The transition between older belief systems and the new Christian religion brought additional opportunities for widows who displayed their new faith by commissioning stone crosses blending Christian and traditional imagery with runic inscriptions.[13]

The monograph concludes with the experience of elderly Viking Age women and their treatment in death. Viking Age burials indicate that communities held older women in positions of considerable influence and dignity.[14]  Evidence found in sagas, graves containing staffs, amulets and medicinal plants, and the iconography of women holding staffs and branches found on the Kirk Michael cross slab (123) on the Isle of Man suggest that older women could be valued as professional seeresses (völva or seiðkona) and for their role in traditional magic (seiðr).[15]

Overall, Friðriksdóttir builds a vivid image of the complex realities of life for women in Viking settlements. Women could be constrained by societal expectations, yet Viking Age culture allowed opportunities for both physical and social mobility.  Women took positions of importance in their families and communities from their youth to the end of their lives. They could be ruthless and vengeful or wise and honourable, characterised by a mixture of virtues and vices. The balance of sources provides a detailed consideration of the realities of life for Viking Age women, and the textual vignettes drawn from sagas make the work endlessly engaging for both academic readers and non-specialists interested in Viking history.



Brink, S. and Price, N. S. (eds), The Viking World (New York, 2008).

Cambridge, E. and Hawkes, J. (eds), Crossing Boundaries: Interdisciplinary Approaches to the Art, Material Culture, Language and Literature of the Early Medieval World (Philadelphia, 2017).

Frantzen, A. J. and Hines, J. (eds), Cædmon’s Hymn and Material Culture in the World of Bede: Six Essays (Morgantown, 2007).

Fleming, R., Britain After Rome: The Fall and Rise, 400-1070 (London, 2011).

Foote, P. and Wilson, D., The Viking Achievement: The Society and Culture of Early Medieval Scandinavia. (London, 1970).

Friðriksdóttir, J. K., Valkyrie: The Women of the Viking World. (London, 2020).

Gilchrist, R., Gender and Material Culture: The Archaeology of Religious Women (New York, 1994).

Gilchrist, R., Medieval Life: Archaeology and the Life Course (Woodbridge, 2012).

Hayeur Smith, M., The Valkyries’ Loom: The Archaeology of Cloth Production and Female Power in the North Atlantic (Florida, 2020)

Hedenstierna-Jonson, C., Kjellström, A.,  Zachrisson, T., Krzewińska, M., Sobrado, V., Price, N., Günther, T., Jakobsson, M., Götherström, A. and Storå, J., ‘A Female Viking Warrior Confirmed by Genomics’, American Journal of Physical Anthropology, 164/ 4 (2017), pp. 853-860.

Hines, J., Lane, A. and Redknap, M. (eds). Land, Sea and Home: Proceedings of a Conference on Viking Period Settlement, at Cardiff, July 2001 (Abingdon, 2004).

Hyer, M. C., and Owen-Crocker, G. R., The Material Culture of Daily Living in the Anglo-Saxon World (Exeter, 2011).

Jesch, J., The Viking Diaspora (Abingdon, 2015).

Jesch, J., Women in the Viking Age (Woodbridge, 1991).

Jones, S., The Archaeology of Ethnicity: Constructing Identities in the Past and Present. (London, 1997).

Sørensen, M. L. S., ‘Gender, Material Culture, and Identity in the Viking Diaspora’, Viking and Medieval Scandinavia, 5 (2009), pp. 253-269.

Wicker, N. L., ‘Christianization, Female Infanticide, and the Abundance of Female Burials at Viking Age Birka in Sweden’, Journal of the History of Sexuality, 21 (2) (May 2012), pp. 245-262.


[1] J. K. Friðriksdóttir, Valkyrie: The Women of the Viking World (London, 2020), pp. 10-11.

[2] P. Foote and D. Wilson, The Viking Achievement: The Society and Culture of Early Medieval Scandinavia (London, 1970).

[3] J. Jesch, Women in the Viking Age (Woodbridge, 1991).

[4] Friðriksdóttir, Valkyrie, p. 12.

[5] Friðriksdóttir, Valkyrie, p. 85.

[6] Friðriksdóttir, Valkyrie, pp. 107-108.

[7] J. Hines, A. Lane and M. Redknap (eds), Land, Sea and Home: Proceedings of a Conference on Viking Period Settlement, Cardiff, July 2001 (Abingdon, 2004); A. J. Frantzen and J. Hines (eds), Cædmon’s Hymn and Material Culture in the World of Bede: Six Essays (Morgantown, 2007).

[8] For Viking studies, material culture is invaluable. See: N. L. Wicker, ‘Christianization, Female Infanticide, and the Abundance of Female Burials at Viking Age Birka in Sweden’, Journal of the History of Sexuality, 21/2 (May 2012), pp. 245-262; M. Hayeur Smith, The Valkyries’ Loom: The Archaeology of Cloth Production and Female Power in the North Atlantic (Florida, 2020); M. L. S. Sørensen, ‘Gender, Material Culture, and Identity in the Viking Diaspora’, Viking and Medieval Scandinavia, 5 (2009), pp. 253-269. This process can be seen throughout medieval studies. See: E. Cambridge and J. Hawkes (eds), Crossing Boundaries: Interdisciplinary Approaches to the Art, Material Culture, Language and Literature of the Early Medieval World (Philadelphia, 2017); S. Jones, The Archaeology of Ethnicity: Constructing Identities in the Past and Present (London, 1997); M. C. Hyer and G. R. Owen-Crocker, The Material Culture of Daily Living in the Anglo-Saxon World (Exeter, 2011); R. Gilchrist, Gender and Material Culture: The Archaeology of Religious Women (New York, 1994); R. Gilchrist, Medieval Life: Archaeology and the Life Course (Woodbridge, 2012).

[9] Friðriksdóttir, Valkyrie, pp. 24-25, 35.

[10] Friðriksdóttir, Valkyrie, pp. 52, 54-55, 58-59.

[11] Friðriksdóttir, Valkyrie, pp. 118, 128-129, 142.

[12] Friðriksdóttir, Valkyrie, p. 162.

[13] Friðriksdóttir, Valkyrie, pp. 162-163.

[14] Friðriksdóttir, Valkyrie, p. 173.

[15] Friðriksdóttir, Valkyrie, pp. 179-180, 183.

Nordic Studies in 2021: When Vikings Raid Real Life, Our Good Intentions Get Pillaged


Beth Rogers is a PhD student at the University of Iceland in Reykjavík, Iceland, studying topics of food history and medieval Icelandic culture for her thesis, “On with the Butter: The Cultural Significance of Dairy Products in Medieval Iceland.” The project is hosted by the Institute of History at the Centre for Research in the Humanities.

Beth has written more than 30 popular and academic articles, including 2 book chapters, on such varied topics as Viking dairy culture, salt in the Viking Age and medieval period, food as a motif in the Russian Primary Chronicle, and the literary structure of Völsunga saga. Her other research interests include medieval literature (especially sagas), military history, emotions in literature, Old Norse mythology and folklore, and cultural memory.

The impact and degree of white supremacist appropriation of Nordic culture has been the cause of recent public interest and scholarly debate in Scandinavian Studies. Viking and medieval imagery was displayed, for example, by participants in the Unite the Right rally in Charlottesville, Virginia, in 2017. But the most prominent display came during the violent attack on the United States Congress at the U.S. Capitol on January 6, 2021, when these images re-emerged, most visibly in the tattoos and clothing worn by Jake Angeli (real name Jacob Chansley), the self-described ‘QAnon Shaman’. Angeli was instantly recognisable, wearing furs and horns and a face painted in red, white, and blue, while his bare chest blazed with black lines of Yggdrasil, Mjölnir, and the Valknut. News outlets leaped to provide context to this oddball stand-out among the mob of Americans angry at the outcome of the presidential election of November 2020, explaining the meaning of his tattoos for readers who had not seen them before or did not know of their associations with Norse mythology. The media attempted to clarify that Angeli was not part of antifa or the Black Lives Matter movement but of QAnon, a political and social conspiracy group which has gained prominence in recent months since its appearance on internet message boards in October 2017. Neither antifa nor the BLM movement is known for using any Nordic cultural symbols, yet in the immediate aftermath of the attack on the US Capitol, confused claims that Angeli was actually part of the BLM movement spread quickly on social media.

 Social media image circulated heavily in the days following the attack on the US Capitol. Origin unknown. A Google Reverse Image Search returned no information. 

Angeli’s interviews with BrieAnna J. Frank, a reporter for The Arizona Republic, leave little doubt as to his right-wing ideological leanings. His frequent appearances at events carrying a sign stating ‘Q sent me!’ further confused the issue. According to the BBC, QAnon is ‘a wide-ranging, completely unfounded theory that says that President Trump is waging a secret war against elite Satan-worshipping paedophiles in government, business and the media.’

Angeli himself, currently awaiting trial in federal prison (where he has experienced problems with the lack of organic food), has expressed regret over his actions. Speaking from jail to the US news programme 60 Minutes – an unsanctioned interview which earned his lawyer a ‘scolding’ from a judge – Angeli insisted, ‘I regret entering that building. I regret entering that building with every fibre of my being’ (0:43 – 0:47). His actions ‘were not an attack on [the United States]’, Angeli insisted. Instead,

I sang a song, and that’s a part of Shamanism. It’s about creating positive vibrations in a sacred chamber. I also stopped people from stealing and vandalising that sacred space – the Senate. I also said a prayer in that sacred chamber because it was my intention to bring divinity and to bring God back into the Senate. […] That is the one very serious regret that I have, was [sic] believing that when we were waved in by police officers, that it was acceptable. (0:39 – 0:46)

Angeli awaits trial on six counts of misconduct, including violent entry and disorderly conduct in a Capitol building, as well as demonstrating in a Capitol building.

Angeli’s appropriation of Nordic symbols is of course part of a broader Viking cultural renaissance, yet don’t let all this take away from your enjoyment of the current Viking-themed pop culture extravaganza. Vikings are fun! It’s not all white supremacy. It’s wonderful to see people deeply interested and invested in the thrills, chills, twists and turns of these larger-than-life characters on our screens, set against a backdrop of Nordic culture and history that is sometimes richly coloured and always sketched in familiar lines: struggle, sacrifice, and hope. The image of the Viking in pop culture today is so unquestioned – hairy, violent, marauding – that The Guardian can’t suffer through so much as a paragraph of historical context without getting distracted by their coolness. Dr Simon Trafford, Lecturer in Medieval History and Director of Studies at the University of London, explains the Viking attraction:

The parallels with what we look for in our rock stars are just too obvious. The Vikings were uproarious and anti-authoritarian, but with a warrior code that values honour and loyalty. Those are evergreen themes, promising human experiences greater than what Monday morning in the office can provide.

If you caught American Gods in either its series form or its original novel (published 2001; series premiere on the American cable network Starz in 2017), you know that Vikings are dull-witted, filthy murderers who would slaughter their own friends and family members in frenzied sacrifice to Óðinn, then wait for the wind to return to their sails, leaving few alive to row the longship home. If you’re a fan of Vikings (2013-2020, with a planned spin-off series titled – what else? – Vikings: Valhalla), you know that Vikings are the rock stars of history, wearing copious amounts of leather and guyliner artfully smudged around their piercing eyes as they gaze out to sea, bursting with manly intensity. You know. But do you really?

More problematic is the way these tropes, images, and signifiers are bound up with darker, more nebulous, and disturbing parts of history, and the way that history can be forgotten, covered up, manipulated, or even wilfully ignored in the current moment. The tropes, images and signifiers of a culture which are chosen and carried forward in time take on a life of their own, often changing their meaning drastically.

Historians and armchair enthusiasts, pagans and reenactors, artists and others around the world who enjoy learning about Nordic culture and Scandinavian history groaned in unison after the Capitol invasion, aware that the United States was bringing us another fight so soon after the dust had settled over the last one. In Charlottesville, Virginia, on August 11–12, 2017, far-right groups, including self-identified members of the alt-right, white nationalists, neo-Nazis, Klansmen, and various right-wing militias, gathered to present a unified and radical right-wing front and to protest the proposed removal of the statue of General Robert E. Lee from Charlottesville’s former Lee Park. Symbols like the óðal rune, the cross of the Knights Templar and the black eagle of St. Maurice, among others, were splashed across the screens of horrified viewers. After the initial shock over the cultural clash in Virginia, which, like the attack on the Capitol, brought about death and injury, those who spend their lives plumbing the mysteries of history were left to pick up the pieces and decide what to do to avoid being painted with the same swastika.

What has been observed in the social media dissemination and discussion of Nordic cultural symbols illustrates that the general public has, at best, an incomplete understanding of the use of Viking symbology in connection with the German nationalist movements of the eighteenth and nineteenth centuries, which culminated in two destructive World Wars. Markers of Nordic culture have a tendency to recur throughout history, from their origins in the Viking Age through to the twentieth century and the present day: specifically, Valhalla, Vínland, the Valknut, Mjölnir (Thor’s Hammer), Yggdrasil (The World Tree), and runic inscriptions, including rune-like magical staves such as Vegvísir and the Ægishjálmur. Such iconography has been deployed almost randomly (and therefore meaninglessly) to create a connection between Viking culture and an ideology of whiteness, masculinity, and power.

Recently, a scandal erupted in the hallowed halls of the Academy over the correct next steps to take: how to continue to do what we love as researchers and teachers while also speaking to a wider community and to political developments causing direct harm? Differences of opinion led to social media chaos, accusations of doxxing, threats, and scathing blog posts by the two frontrunners in the debate. Dorothy Kim, an Assistant Professor of English at Vassar College, and Rachel Fulton Brown, an Associate Professor at the University of Chicago, squared off in a nerdy, gladiatorial smackdown. As Inside Higher Ed noted, Fulton Brown agreed that white supremacists often use medieval imagery to invoke a mythical, purely white medieval Europe. However, she disagreed with Kim’s assertion that white professors needed to explicitly state anti-white-supremacist positions in the classroom. For Fulton Brown, the teaching of history by itself, through immersion in its concepts and an understanding of change over time, will stem the tide of white supremacist misuse and misunderstanding. Medievalists unhappy with some institutions’ handling of the issues boycotted conferences.

As the debate raged on, white supremacy continued its dark work. A mass shooter in Christchurch, New Zealand, posted ‘See you in Valhalla’ before killing 49 people at two mosques and injuring dozens more. Educational institutions have not appeared, and still do not appear, to be doing enough. The Southern Poverty Law Center, which monitors hate groups throughout the US, tracked the growth and movement of 838 hate groups across the country in 2020, a figure shaped in part by a 55% surge in the number of US hate groups since 2017 (though down from the all-time high of 2018). Political elections have put an increasing number of populist, nationalist, and right-wing figures in office throughout Europe, spurred by rising anti-immigration sentiment, frustration with the political status quo, concerns about globalization, and fears over the loss of national identity. The issue has become so muddled that some educational material must make clear that although a given Nordic cultural symbol, such as Thor’s hammer (Mjölnir), may be used as a hate symbol, it is also commonly used by non-racist neo-pagans and others, and so it should be judged carefully within its context before the viewer assumes the one wearing it to be a member of a hate group. Nothing is black and white. Everything is uncertain.

Instructors and teachers have recommitted to doing better, echoing statements like that of Natalie Van Deusen, Associate Professor at the University of Alberta. In her own classes, Van Deusen makes a deliberate effort to highlight the flourishing ethnic and cultural exchange among Nordic peoples of the periods she covers, mainly the late eighth to early eleventh centuries. She includes in her teaching lesser-seen and lesser-heard viewpoints, such as Norse relationships with the Sámi, the indigenous peoples of northern Scandinavia, or trade with the East:

I strive to teach in a way that doesn’t solely focus on Norse-speaking peoples, who were by no means the only ones to occupy the Nordic region during this period, nor were they without influence from surrounding cultures. 

We have to do more, go further, explore deeper, and keep talking about this until there is no question where we stand, as individual scholars, or as people within our communities who care about accuracy (as far as it can be established), diversity (as much as the evidence supports), and education. Always education. 

Dr. Van Deusen remains more committed than ever to keeping the conversation going, saying in a recent interview, ‘I think it’s a willingness to talk when people want to talk to us about these things, and a willingness (as scholars and educators of this period) to acknowledge that this is a real issue.’ For Van Deusen, at this point

[W]e can’t not address it, and the last thing I would want is for someone to be in my class for the wrong reasons and twist my words because I didn’t explicitly say “I’m not here for validating these interpretations” – which I do now, at the beginning of each term.

This is why, through the pages of The MHR, you’ll hear more from me soon. Drawing on a range of sources, from modern news coverage to textual and archaeological evidence, my colleagues and I will examine the ways in which Viking culture has been and is manipulated, used, and misrepresented by those who seek to create an underlying continuity, real or imagined, stretching directly back to the people of the past known as the Vikings.

So, hold on to your butts! Like the blurry outline of a longship on the horizon, I shall return!


Elaine Farrell, Women, Crime and Punishment in Ireland: Life in the Nineteenth-Century Convict Prison (Cambridge, 2020).


Women, Crime and Punishment in Ireland is a detailed resource which expands upon the existing scholarship on prison life and brings the administration of punishment in an Irish female convict prison into particular focus. Scholars have recently begun to reflect on how the implementation of nineteenth-century laws (such as the English Poor Laws) affected the wider lives of those inside an institution, often through analysis of the agency that imprisoned women showed in the face of increasingly punitive legislation.

Biography: Megan Yates is an ESRC PhD student in the School of History, Politics and International Relations at the University of Leicester. Her project is collaborative, working closely with the University of Nottingham and The National Archives. Her doctoral research focuses on the daily experiences of vagrants within the workhouses of the nineteenth-century Midlands.

In this meticulously researched book, Elaine Farrell effectively synthesises studies of life in nineteenth-century institutions with new understandings of the selfhood, agency and life cycles that her female convicts displayed.[1] Through the lives of Irish female convicts, the author conducts a thorough examination of Irish prison life, families, friendships, relationships, and wider network acquisition. It is in these places – Grangegorman, Mountjoy Female Penitentiary, and the Cork Female Convict Depot – that Farrell explores the experiences of incarcerated women and their interwoven identities inside and outside the prison walls.

Farrell’s case studies detailing the convictions, treatment and opinions of these women will be a great resource for multiple branches of history, both in the institutional sense of how life in an Irish convict prison worked for these women, and for those studying the detailed experiences and lifecycles of people in nineteenth-century institutions.

Farrell’s focus also contributes more broadly to discourse surrounding crime and punishment, particularly the evolution of western punitive practices, as she explores in her introduction the administrative pathway from Grangegorman prison to the Cork Female Convict Depot. Her exploration of these prison institutions accords to some extent with Foucault’s examination of prison in Discipline and Punish and his suggestion that imprisonment was part of a much bigger carceral system, one that, Farrell goes on to argue, infiltrated every aspect of the lives researched in her case studies.[2]

The incorporation of Irish women is a novel and impressive approach to the study of lived experiences, situating her complicated case studies within the political, social and cultural context of Ireland. The source base for this work combines traditional and non-traditional resources, drawing on the official record of court transcripts, prison ledgers, annual reports and legislation alongside personal testimony and inmate letters. Farrell uses close reading to unpick the rhetoric of these sources, interpreting them through her vast knowledge of the Irish penal system in the mid-to-late Victorian era. She combines her analyses with other historical work on both Irish and British institutions, marrying her study with the historiography of the Poor Laws, including broad ideas about imprisonment, settlement and resistance.[3]

Farrell employs a case-study approach to her sources, opening each main chapter with a sub-chapter that brings the human element to the front of her discussion. This structure takes the reader on a journey, an effective approach which envelops the somewhat disjointed and difficult excerpts of narrative within the social circumstances of these women’s lives. Given the volume of material in each of her themes, it makes sense to break this down here. The book is structured into five chapters corresponding to its five case studies. Farrell has themed her case studies around wider societal issues that she interleaves with the stories of multiple women. This is important because otherwise there is the danger of losing the human aspect of these cases and reducing them to broad-brush arguments. As in her previous works, Farrell demonstrates that these women were not exceptions and, in doing so, gives voice to ordinary women who might otherwise have been forgotten.

In the first half of Women, Crime and Punishment, Farrell tells us about the everyday lived experiences of women in the convict prisons of Ireland, discussing sanitation, rebellion, work, schooling, length of incarceration and uniform. The female uniforms stand out because, as Farrell notes, there were four different types, varying in style and colour to reflect the different statuses of the convicts. Farrell explores the ways in which women expressed individuality by altering their uniforms: removing collars, tearing hems and wearing neckerchiefs. Although the convict uniform was implemented in English prisons by the mid-Victorian era, differentiation of uniform based on a points system was not a practice replicated in the British convict system.[4] Uniform was intended to de-individualise convicts and control their appearance. Farrell suggests that in the women’s prisons she studied, the uniform was a bargaining chip for good behaviour, although, as she argues, this was hardly a successful penal model.[5] Farrell’s exploration of attitudes to the uniform from the perspective of the convicts wearing it is one small example of the many new aspects of convict life she shares throughout her chapters.

In the second sub-chapter, Farrell introduces us to the Carroll family. This complicated family of petty criminals frames the rest of her chapter, which explores familial bonds maintained or dissolved during detention. The chapter has disheartening elements, as we learn of women who were abandoned by their families once they became institutionalised. Farrell recounts a particularly poignant case of a mother who, after five years of imprisonment, was unsuccessful in regaining custody of her child. Farrell’s dip into this story is particularly captivating as she expands upon the feelings these women would have experienced: not simply being incarcerated, but, for some, recognising a lost relationship and maternal bond.

Farrell uses thousands of prison files, including letters sent back to the prison after a convict’s release. These sources are comprehensive. The high number of exemplary cases in these chapters often causes the narrative to switch from one convict to another very quickly; however, this reflects the short snippets of story she finds in the archives. Although women such as Catherine Lavelle (depicted on the cover image) were well known and left exceptionally thorough sources, others receive no more than one or two mentions in the archive. Intermingling these stories shows that Farrell is not trying to follow any one convict’s prison journey; instead, she brings to life the thorny and intricate lives of many women. Farrell’s methodology and data collection are thus made abundantly clear. She repeats the sample size often in this chapter, and she is clearly confident in stating how representative this evidence is of Irish female convicts’ nineteenth-century prison experience.

Women were also involved in the prison system as employees, and in later chapters Farrell discusses the bonds and relationships that could form between prison matrons and female convicts. In contrast, Farrell also describes the stigma and criticism female prison employees faced from their male counterparts for their ‘weakness’ in trusting and perhaps liking certain inmates. In further chapters Farrell pulls at the threads of convicts’ relationships: friendships, marital or extra-marital relations, as well as their enemies and the fights or conflicts that occurred amongst inmates. She argues that her female convicts, and convicts in general, are a lens through which to recover the voices and relationships of lower-class people who were not in a workhouse, an asylum or an industrial school but who often shared traits such as poverty or mental illness. Through this book, the voices of such people take centre stage for the first time. Farrell uses sources from the prison staff to form a picture of what happened to these women, but also writes their stories from their own perspectives in order to emphasise that they were real people with real lives and real stories to tell.

Farrell’s convict testimony and multi-dimensional sources demonstrate that the prison system was a pseudo-society, with its own set of rules visible through work and dietary regulations, the regulation of correspondence, and a points-based reward system. The women in these prisons were deprived of many things but were likewise able to create friendships and maintain kinship bonds, even across institutions. This, Farrell argues, is a very personal experience that depends completely upon individual circumstances. Farrell is therefore right to argue in her conclusion that this book is heavily ‘saturated with further evidence of women’s agency’ in penal institutions, and that her study has been possible because such significant identity documents exist in the form of letters, petitions and diaries.[6] Farrell argues that convicted women are valuable to examine because their personalities come through in the documents. Sophisticated record-linkage work in archives has allowed historians such as Farrell to recover multiple perspectives on prison life. As Farrell says, ‘these were ordinary lives captured on paper because of an extraordinary sentence’, and this concept will contribute greatly to further studies into the identities of people long forgotten.[7]




[1] For example, Rebellious Writing: Contesting Marginalisation in Edwardian Britain, ed. by Lauren Alex O’Hagan, Writing and Culture in the Long Nineteenth Century, 10 (New York: Peter Lang, 2020).

[2] Michel Foucault, Discipline and Punish: The Birth of the Prison, Peregrine Books, Repr (Harmondsworth: Penguin Books, 1982).

[3] K. D. M. Snell, Parish and Belonging: Community, Identity, and Welfare in England and Wales, 1700-1950 (Cambridge: Cambridge University Press, 2009), pp. 1-245; David Moon, The Russian Peasantry 1600-1930: The World the Peasants Made (Hoboken: Taylor and Francis, 2014), pp. 38-39.

[4] David Englander, Poverty and Poor Law Reform in Britain: From Chadwick to Booth, 1834-1914, Seminar Studies in History (London; New York: Addison Wesley Longman, 1998), pp. 38-39.

[5] Paul Carter, Jeff James, and Steve King, ‘Punishing Paupers? Control, Discipline and Mental Health in the Southwell Workhouse (1836–71)’, Rural History, 30.2 (2019), p. 164.

[6] Elaine Farrell, Women, Crime and Punishment in Ireland: Life in the Nineteenth-Century Convict Prison (New York: Cambridge University Press, 2020), p. 257.

[7] Farrell, p. 260.

The Female Crime: Gender, Class and Female Criminality in Victorian Representations of Poisoning


The nineteenth century was awash with crime, murder, and violence; not least the ‘feminine’ art of poisoning. This was a ‘clean’ method of murder that might conveniently rid oneself of an unhappy marriage or a love rival. Whilst poisoning cases framed interesting and salacious fiction, the conception of poisoning as a woman’s crime relates to deeper stereotypes in Victorian society. Gender and class norms weighed heavily, and poisoning was configured as an essentially feminine crime. This article examines, via several Old Bailey cases, the factors responsible for the supposed link between women, poisoning, and predisposed gender and class ideals. I also consider the role that the nineteenth-century press played in establishing poisoning as a woman’s crime. The history of poisoning has been little considered due to the lack of archival material on poisoning cases. This study intends to expand the study of gender and crime in Victorian Britain.

Keywords: Victorian, poisoning, murder, gender, class, trials, crime

Author Biography

Alison Morton is a postgraduate student of history at the University of Lincoln, currently studying crime and punishment in nineteenth-century Britain.


The Female Crime: Gender, Class and Female Criminality in Victorian Representations of Poisoning

Download PDF


In nineteenth-century Britain, poisoning was a sensationalised crime, often in the public eye. No case better highlights the embedded Victorian middle-class fear of the secret female poisoner than that of Christiana Edmunds in 1872. Edmunds poisoned boxes of chocolates and other sweets as part of a plot to target her love interest’s wife. Her case demonstrates how poisoning was represented in the press as a female crime. During her trial it was noted that crowds of well-dressed women came to sit in the gallery of the Old Bailey. They were described as ‘enthralled’ by the defendant, with audience numbers increasing at every session.[1] It was feared that these women were flocking to hear and learn how Edmunds conducted her crime, later meeting in groups to share the recipes and tactics she used.[2] It is telling that this perspective existed. After all, the evidence is clear that poisoning, while popular with women, was not a uniquely female crime. For example, William Palmer, a doctor, was sentenced to hang after he poisoned his friend John Cook with strychnine in 1855.[3][4] Furthermore, women also committed murder through means considered more ‘masculine’. Eliza Gibbons murdered her husband in 1857 by shooting him in the head,[5] and Jane Colbert was imprisoned for murdering her husband in 1854 by throwing a knife at him and piercing his lung, in retaliation for domestic abuse.[6] However, poisoning was closely linked with female murderers in the Victorian press, which, as this article will demonstrate, was particularly related to sensationalist journalism.

This article examines the factors that drove women to kill their husbands, in the context of several poisoning cases tried at the Old Bailey, London. Providing a general history of poisoning cases in Victorian England, it will examine the types of poison used; how methods of detection changed; legislative changes; and will consider the public perception of such crimes. It will argue that the gender ideologies of the period helped to define poisoning as a female crime. Using several cases of husband murder this article will discuss the media representation of such crimes; why women might have chosen poison as their preferred method; and how gender ideals and social expectations were presented in court. This paper also considers how women in turn utilised the press’ sensationalist image of the female poisoner, in retaliation against male violence.

Studying the testimony and evidence given in the trials of nineteenth-century crimes can tell us much about society in Victorian Britain.[7] This article draws on five trials from the Old Bailey online archive, dated between 1842 and 1886, all of which were for cases of mariticide by poisoning. The cases include that of Jane Bowler, a working-class woman tried in 1842, accused of murdering her husband, Joseph Bowler, with arsenic. She was found not guilty. Ann Merritt was a working-class woman who, in 1850, was accused of murdering her husband, James Merritt, with arsenic. She was convicted and sentenced to death. Ann always asserted that she was innocent, and even in her final statement before the magistrate she reiterated that she originally bought the arsenic for herself. She claimed to have intended to commit suicide because of her husband’s recent drunken behaviour, but she changed her mind. She believed her husband must have taken the arsenic in place of the acids and sodas he had in the morning: whether accidentally or not she did not disclose. Finally, Adelaide Bartlett was a lower-middle-class woman who was accused of murdering her husband, Thomas Edwin Bartlett, by poisoning him with chloroform in 1886. George Dyson, the man who purchased the chloroform for her, and who was also her love interest, was acquitted before trial. Adelaide was ultimately found not guilty.[8]

Whilst these sources give an insight into what the court deemed to be relevant information, one of the main issues with the Old Bailey trial reports is that they only show the witness testimony, and nothing from the lawyers, judge, or jury in the courtroom. Defendant statements are frequently missing from the testimony. In some cases, for example where multiple doctors were questioned, the accounts are often highly repetitive in nature. It is also important to note that some words had different meanings in the nineteenth century and so need to be read from a nineteenth-century perspective. Other primary source material drawn upon in this paper includes press reports and cartoons, either directly associated with these trials or of a related nature. The press reports add context to the trial reports, and they can fill in the gaps in the testimonies by exemplifying the popular attitudes and opinions of Victorian society, particularly on the subjects of class and gender. These reports, too, should be read with caution. The views of the editor, journalist, or audience could influence reporting, as could the geographical location of the paper. However, the five case studies that are the focus of this paper only offer a snapshot of cases of women killing by poisoning. Reconstructing the context and social concerns surrounding female crime more generally is, therefore, essential in order to interrogate the network of ideologies surrounding women’s alleged use of poison in murder cases, and the sensationalism that characterised the reporting of these crimes.


Gender, class, and women’s crimes

In the nineteenth century, women were legally classed as secondary citizens: they were discouraged from gaining a formal education or a career, and were unable to own property or vote.[9] Despite, in reality, very many women demonstrating agency and activity well beyond the domestic realm, the middle-class ideology of ‘separate spheres’ dictated, in theory, that a woman’s place was attending to the private sphere of home and family life as ‘the angel of the house’. This worldview came to transcend class boundaries to a great extent: the industrialisation of the workforce brought gender issues to the forefront of labour disputes, as working-class men competed against lower-paid women, whom they consequently sought to relegate to the home. Moreover, middle-class philanthropic practices such as district visiting saw middle-class women taking domestic ideology into lower-class homes. As a result, for much of the nineteenth century, women across society were expected to conform to these gendered, domestic roles. However, this ideology of ‘separate spheres’ was a pervasive discourse that was not always reflected in lived experience.[10] Most lower-class women, and some middle-class women, had to work to survive. These working lifestyles did not conform to popular standards of feminine behaviour, and put women into the public sphere ideologically reserved for men. We see, here, the intersection of class and gender. Although working-class (and, in reality, some middle-class) women had to work beyond the domestic sphere for practical reasons, their transgression of gender ideals was used to show why middle-class women and, thus, the middle classes generally, were ‘superior’, justifying their social, cultural, moral, and political authority.[11]

Cases involving a sexual aspect, such as adultery or the murder of a lover or a rival, were seen as didactic, warning of the ‘dangers’ that out-of-control female expressions of sexuality posed to individual victims and to the stability of society. Not only was such women’s divergence from the feminine ideal considered a factor in their crimes, but they served as examples of just why a woman’s place was at home, under the control and supervision of a husband, father, or brother, for their own and society’s benefit. Furthermore, an essential aspect of middle-class discourse was their ‘superiority’ over the working classes, whose women were more likely to have to work outside of the home, again diverging from expected female norms of behaviour. Court proceedings, press coverage, and public interest in women’s crimes thus reflected and reinforced norms of gender and class. Such cases also reveal the contradictory nature of these discourses, as the press and public revelled in fantasies of exoticised sexual revelation.


Poisoning in Victorian Britain

Eliza Fenning was sentenced to death for poisoning in 1815, although none of her victims died as a result of her actions. Fenning attempted to murder her employers by poisoning their food with arsenic, after she was disciplined by the lady of the house for visiting the rooms of young male workers in the house whilst semi-dressed.[12] Fenning provided four statements of good character at her trial, and there was doubt about her guilt, yet she nevertheless received the death penalty and was later executed at Newgate prison. It was hoped that harsh punishments would act as a deterrent, amidst fears that cases of poisoning were on the rise. John Marshall, a member of the Royal College of Surgeons, published a pamphlet in 1815 describing five cases of recovery from arsenic poisoning. In this pamphlet he detailed why he thought Fenning was guilty, claiming to have witnessed Fenning double over in pain after eating some of the dumplings she had cooked, in what he believed to be an attempt to divert suspicion away from herself.[13] He followed this with a piece in The Times, describing her as ‘one of the perpetrators of this dreadfully alarming and daily increasing evil’.[14] Marshall’s accounts reflected popular concerns about the increasing number of poisoning cases and the role of women such as Fenning in this surge.

During the trial of Ann Merritt, who was tried and sentenced to death in 1850 for murdering her husband, the judge remarked on ‘the strange and horrible frequency of the crime which you are charged’.[15] As public concerns over poisoning grew, press reports of these crimes increased in number, reaching almost hysterical heights by the middle of the nineteenth century.[16] The rise in sensationalist reporting, and the fear that even more cases were going unreported, drew attention from both the medical and legal professions.[17] For example, the 1851 Arsenic Regulation Act prohibited shopkeepers from selling arsenic and other poisons to people they did not know. Buyers were required to sign a register with their name and the purpose for the poison. The regulation further meant that arsenic, typically a white powder, had to be mixed with either soot or indigo. This was because arsenic has a bitter taste and mixing with food or drink seemed to be a common way to hide this bitterness and administer the poison to victims.[18] By mixing it with soot or indigo, it would stand out in food and drink, reducing the likelihood that it would go undetected. The principles of this act worked on paper; however, the act relied on shopkeepers keeping well-documented ledgers, not destroying or altering their records, or selling poison illicitly. In Christiana Edmunds’s case, for example, it was discovered that the shopkeeper who had supplied the poison had pages missing from his record book.[19] Furthermore, whilst the legislation might have restricted anonymous sales, it did not help if the chemist knew the purchaser. In the case of Ann Merritt, for example, the chemist she obtained the arsenic from had sold the poison to her before, so did not feel it necessary to ask questions as prescribed by law, again showing the discrepancies between theory and practice.[20]

Legal professionals again tried to intervene five years after the Arsenic Regulation Act was introduced. In 1856 Betsy McMullen was tried for poisoning and murdering her husband. The presiding judge argued that women should be banned from buying any potentially lethal drugs, and that those selling them should be convicted of manslaughter in the event of the drugs being used to cause harm.[21] Banning women from purchasing poisons would, in reality, have been practically difficult, as common poisons such as arsenic, chloroform, and strychnine had many domestic uses as cleaning aids or medicines. Oddly, legislation and detection in this era focused specifically on arsenic. Although arsenic was widely used, many of the trials, such as some of those considered in this article, related to other poisons. This special focus on arsenic was perhaps due to its particularly vicious effects and bitter, unpleasant taste. Contemporaries also remarked upon this focus on arsenic to the exclusion of other poisons. A letter written to the Northern Star and Leeds General Advertiser on 23 March 1850, signed only by ‘An Englishman’, for example, questioned why arsenic was subjected to stricter controls compared to other poisons available, many of which were subtler in nature. The writer argued that, because these other poisons were used in medicine rather than domestically, they were protected from legislation.[22] This letter was dated a year before the Arsenic Regulation Act was passed in 1851, and it is probable that the author, like the judge in Betsy McMullen’s trial, would not have been impressed at the limited extent of this legislation.

Ironically, it was a case brought against a male poisoner that prompted legislation to protect defendants. When William Palmer was deemed not to have been afforded a fair trial in Staffordshire, owing to sensational and widespread newspaper coverage that caused public prejudice against him, the Palmer Act of 1856 was enacted.[23] This Act enabled hearings from outside of the London area to be moved to the Central Criminal Court at the Old Bailey, to ensure fair trials for the accused.[24] Palmer’s case also indicates the influence that sensationalist journalism had over public opinion, and that high-profile poisoning cases had on the British legal system, in the mid nineteenth century.


New methods of detection

One of the key challenges for contemporaries was determining poison as the cause of death. Other difficulties were discovering the exact substance, who administered it, and how. Testing for arsenic poisoning was developed during the early 1800s: until then there was not a test conclusive enough to differentiate between a stomach condition or an illness and a case of poisoning. This difficulty formed a point of contention that can be seen in the extensive trial transcript of Adelaide Bartlett, which discusses, across almost sixty pages, how chloroform could have got into the stomach of her alleged victim without any burns in the throat or mouth.[25]

During the middle of the century much changed. The 1840s bore witness to developments in both medicine and policing which had several key effects on the detection of poisoning crimes. First, rural English communities developed police and detective forces which investigated crimes that might otherwise have been abandoned and neglected.[26] At around the same time, medical professionals focused on the problem of detection, leading to the adoption of a more conclusive and sensitive test for arsenic poisoning, commonly known as the Marsh Test.[27] Created by the chemist James Marsh in 1836, it proved especially useful in the field of forensic toxicology.[28] The result of better testing and wider investigation was a rise in documented cases and increased media coverage. However, Ann Merritt’s case highlights the continued difficulty of proving murder involving poison, mainly because it was almost impossible to determine who administered the poison to the victim. Ann Merritt was handed a death penalty based on a statement by Dr Henry Letherby, a ‘seasoned and educated toxicologist’, resulting in uproar from the public, and medical and legal professionals alike.[29] His statement implied that the average man’s stomach takes around five hours to digest food and pass it into the bowel, creating a timeline that incriminated Merritt.[30] R. E. Davies, of the Royal College of Surgeons, wrote a letter to the London Daily News in which he questioned Letherby’s statement. Merritt’s husband was an alcoholic, and Davies presented a theory that food digests more slowly in a drunken man’s stomach. He argued that, because of Letherby’s statement, the jury in Merritt’s trial could not entertain a theory that the victim might have taken his own life, as in the time frame given it would have been virtually impossible.[31] Merritt was eventually pardoned following the outcry.
An article in the Hereford Times, on 30 March 1850, explained that ‘our readers of whatever sex or party will rejoice to [know] that the efforts which have been made to save the life of Ann Merritt have been attended with success’.[32] Communication between the Home Secretary and the Governor of Newgate confirmed that ‘the execution of this unhappy woman would be respited during her Majesty’s pleasure’, meaning she was detained in an asylum after being declared insane.[33] Martin Weiner observes that in the second half of the century there was a decline in prosecutions of women for serious crimes, and a larger decline in convictions and in the length of prison sentences.[34] The number of women executed reduced dramatically, whilst insanity verdicts for women nearly doubled. Weiner argues that the reason for this increase is that, whereas Victorian juries would consider male criminals to be ‘bad’, it was becoming easier to explain female ‘deviants’ who committed heinous crimes as ‘mad’.[35]


Press coverage

The public interest in cases of female poisoners is demonstrated by the large crowds at the trials of both Christiana Edmunds and Adelaide Bartlett. In the latter case the courtroom was so crowded that one of the main doors was completely blocked.[36] There were crowds of spectators inside and outside of the courtroom; even the apartments surrounding the Old Bailey held a considerable number of spectators watching the building.[37] The press both reflected and fed such interest, through sensationalist journalism which stoked social and moral fears of poisoning and poisoners, suggesting a threat to society more broadly.[38]

One concern was the secretive nature of the crime. George Robb argues that known poisonings were believed to be the tip of the iceberg and that for every case that was discovered dozens probably went undiscovered.[39] It does seem that the fear of unknown cases of murder caused some disquiet among a public concerned that wives were regularly killing their husbands, without detection.[40] On 16 December 1882, The Times remarked:


‘From the numerous poisonings which have only been detected by an accident or an afterthought, the inference is only reasonable that there remains a margin of poisonings which are never detected at all’.[41]


The obsessive coverage of poisonings in Britain played a slightly contradictory role. By publishing details of poisonings, the press potentially created the very problem they claimed to be concerned about, by providing details which might facilitate further poisonings.[42]

Sensationalist imagery also painted a misleading picture of poisoning as a crime conducted under the darkness of night. This sort of media representation was at best selective and at worst inaccurate, because poisonings also happened during daylight. An example of such imagery appeared on the front page of the Illustrated Police News on 8 June 1889. It depicts the case of Florence Maybrick, accused of poisoning her husband, James, a wealthy Liverpudlian cotton merchant, by switching his medicine whilst he slept in his bed next to her.[43][44] The newspaper shows multiple scenes from the crime she was accused of, including a maid finding the fly papers which Maybrick is said to have soaked to extract the arsenic, and a scene of her in prison after her arrest.[45]

It is interesting to note that the prison scene is the only one in which she is depicted showing any form of emotion. During the crime Maybrick is depicted as passionless and rather malevolent, but once she is in jail she holds her head in her hands. While this could be an indication of remorse, the overall depiction suggests that she is perhaps simply grieved at being caught. Either way, the images imply guilt, and she was indeed found guilty.[46] A similar image appeared on the front page of Reynolds’s Miscellany on 10 July 1858. Here, a woman named Joanna is shown preparing poison near a sleeping Sir John Cleveland. She is looking over her shoulder to ensure he is still sleeping and thus unaware of her actions. Meanwhile another man, whom we can only assume was Joanna’s accomplice or perhaps her lover, looks on in the background.[47] She is depicted as protected under the cover of night, while Cleveland slept ‘safely’ in his bed, unaware that someone was attempting to murder him.[48] According to Lucy Williams, these women conformed to the ‘very middle-class fears of the sneaking female poisoner’.[49] Again, such representations both reflected and reinforced public fears and opinion in this period, and ultimately led to female poisoners being compared to witches and labelled monstrous.[50] These ideas were not unique to the nineteenth century, and there is evidence of poisoning being linked to women, and of the comparison to witches, as far back as the sixteenth and seventeenth centuries.[51] The context of nineteenth-century gender and class relations provided a framework in which that connection could be made more explicit, and more threatening. This was a useful discourse for journalists during a period of substantial expansion of the popular press.
As we have seen, this combination of pre-existing public prejudices, fears and concerns, and the press coverage which reflected and fed them, influenced legislative, social, and medical perspectives on poisoning throughout the nineteenth century.


Poisoning and gender

Popular perspectives on women and gender in this period drove a view that poisoning was a largely female crime. Both men and women used poison to injure, incapacitate, and kill; however, the Victorian press particularly portrayed poisoning as a female crime. Represented as a subversive crime requiring no physical strength, poisoning fed societal views of women as naturally passive but potentially dangerous and insidious when influenced by their emotions, particularly those of a sexual nature. Such ideas supported and reflected a discourse in which a stable society required women to be under the supervision of men.

The idea that poisoning was a secretive crime is seen in trial judgements and contemporary press reports. In the Bartlett case, for example, the judge commented that ‘poisonings were not like crimes of sudden passion. They were necessarily mysterious and hidden in their operation’.[52] But this representation was not just about the subversive nature of poisoning. The nineteenth century, like many periods in history, considered men physically stronger and more violent than women. Judith Knelman and Martin Weiner have discussed how male crime was, therefore, expected to be more violent and spur-of-the-moment in comparison to female crime, which was expected to be less physical because of women’s supposed weakness.[53] Press representation promulgated this distinction. For example, in both the Maybrick and Cleveland illustrations, the victims were shown to be physically incapacitated, either by illness or simply because they were asleep, whilst the poison was administered. The female poisoner thus committed her crime in a non-violent manner.

This argument, that the lack of physical force required in a case of poisoning meant the act could be attributed to women, has another dimension. Knelman suggests that poisoning presented a practical, but immoral and illegal, response to the oppression of women; in other words, a non-physical response to the physical violence of male partners.[54] Knelman believes hostility and violence in a relationship arise from a man’s attempt to control the woman and the woman’s attempt to exert her own independence and agency.[55] However, unlike an overzealous beating, poisoning could not be considered an accident, because there is an element of premeditation in all poisoning cases: one had to acquire the poison, as well as determine how to administer it.[56] It is therefore unlikely that someone killed another by poisoning in a jealous fit of rage.

Mary Hartman argues that a further reason poisoning was considered a threat in nineteenth-century Britain is that women who killed men challenged social norms of gender. She goes on to explain that if these women were also middle- or upper-class, the worry was that they would tip the scale of social class normativity, leading to potential social non-conformity.[57] In his letter to the London Daily News in 1850, R. E. Davies commented on expectations of women in this era: ‘Lately few women have humiliated their sex by the perpetration of heinous offences. The natural attributes of Women are kindness, virtue and affection’.[58]

Davies was writing in defence of Ann Merritt and argued that women did not poison as widely as the press suggested. But his perspective shows that, in this era, women were not expected to be a threat to men. Lucy Williams has considered how female crime lay outside of the normal social expectations of their gender. Women were considered caring, kind, and calm, whereas male crime fitted within the social bounds of masculinity. However, Williams explains that, for women, murder was ‘doubly deviant’, denoting a significant departure from femininity.[59] Robb argues that ‘a woman’s ideal gender role was to “love, honour and obey”’, not maim, injure, and murder.[60] Unlike Weiner and Knelman, Hartman focuses on class rather than gender, stating that middle-class women were literally getting away with murder.[61] One reason for this could have been that the middle classes had access to knowledge of poisons through domestic handbooks on medicine and drugs.[62] Robb expands on this argument, stating that middle-class women committing murder by poison was particularly troubling because their outward behaviour and appearance did not indicate any criminal nature. Working-class women, by contrast, were almost expected to have a criminal side: represented as ‘rough’ and ‘degenerate’, murder was seen as just another aspect of their depraved working-class lifestyle.[63] Anger and physicality were considered masculine traits and had no place in the home or around family.[64] Women were expected to dedicate themselves to the private sphere, running the home and family, while men would go out into the public sphere to work, earn money, and socialise. I would argue that men and women had to look and act in a certain way to remain adequately masculine or feminine, and that those who did not fit into the boundaries of gender set out by Victorian society had to be ‘understood’ within a pre-existing framework.
To challenge the idea that women were essentially passive and non-violent, or to suggest that men were just as likely as women to use poison to commit harm, was to challenge the very intersection of class and gender on which the middle classes predicated their social, cultural, and political authority.

Because of the intersecting class and gender ideologies that underpinned court proceedings and press coverage, female offenders were judged by these standards rather than by the facts of the case. Often, they were also judged in medical and psychiatric terms. Female murderers of the Victorian era were almost never presented as the women they were, whether excused or vilified. Instead they were judged on their status as ‘good women’ and the ‘social rules’ they had broken.[65] Hence a woman’s reputation played a role in the court, the jury’s view of her, and how her sentence was decided.[66] Lucy Williams and Judith Knelman both agree that the masculinity or femininity of an offender was commented on by the papers, and that their personal character was also a factor in judgement.[67] Robb uses the example of Mary Ann Geering, who was described as ‘a woman of masculine and forbidding appearance’ in a Times article reporting her trial.[68] It could be argued that these women’s greatest crime was going against their prescribed social roles.

The trial transcripts of Merritt, Bartlett, and Bowler also devote significant attention to the character traits of both the victim and the accused. In the Bowler case, the victim was described as gloomy and disconsolate, and it was documented that he tried to kill himself on two separate occasions. A friend of Bowler’s, Henry Clarke, told the court that he had to stop Joseph jumping into the canal, for example. On the other hand, Jane Bowler was depicted as a good mother and wife, and therefore considered of good character. This may have swayed the jury and contributed to her ultimately being found innocent.[69] Although drunkenness was only hinted at in the Bowler trial, alcohol abuse was a common theme in nineteenth-century trial reports. During the Merritt trial the victim was identified as a heavy drinker, a fact which grieved his wife. Francis Toulman, a surgeon and acquaintance of the Merritts, specifically commented that Ann attended to her husband judiciously, indicating that she adhered to the ideological expectations of a Victorian wife.[70] Other witnesses said that she was devoted to her husband, and her grief after his death, if genuine, was described as overwhelming.[71] Whilst this did not sway the jury at the time, in contrast to the case of Jane Bowler, it had an effect on public opinion, eventually leading to Ann Merritt’s release. During her trial, Adelaide Bartlett seemed outright offended at the suggestion that she could not adequately care for her husband, showing that she took her role as nurturer very seriously.[72] Of all the trials this article addresses, Bartlett’s is the lengthiest and the most unusual in terms of the character of the accused and the victim. The Bartletts had a platonic marriage, their relationship one of brother and sister more than husband and wife.
Edwin Bartlett, Adelaide’s father-in-law, insinuated at her trial that Adelaide and her husband had a sexual relationship in the beginning, noting that they shared a bed and that she had once been pregnant, resulting in a stillborn child. While Edwin had no reason to believe the relationship was anything short of marital normality, later in the trial he described the couple as no longer having an intimate relationship.[73] The attention given in the Bartlett trial to the couple’s relationship highlights the significance that the legal system, at least, attributed to this area in poisoning cases, again underlining the centrality of gender normativity to such cases.[74]

Whilst the media often focused only on the female offender, her character, personal circumstances, and physical attributes, the trials would examine both the victim and the accused. Negative revelations about the personality of the victim could help sway the court and jury in favour of the accused. Weiner believes that juries often looked with sympathy on women when their crimes were retaliatory.[75] However, Knelman discusses the case of Elizabeth Martha Brown, which was given significant coverage in a broadsheet newspaper in 1856. There was no mention of the character of her victim, a violent and abusive husband, anywhere in the newspaper reports. In fact, she was regarded as a ‘wretched criminal’ murdering ‘poor Anthony Brown’.[76] This language also indicates that perceptions drawn upon in press reports about the victim’s character might be used by the wider public as a way to judge whether the crime could be morally explained or not, in terms of the popular ideologies surrounding gender roles. In portraying the victim as a good man, reporters consequently portrayed Elizabeth as a cruel and wretched murderer who had no reason to commit her crime. In contrast, Lisa Appignanesi refers to the case of Louise Hartley, an eighteen-year-old who attempted to murder her father. The defence condemned the victim for his ‘unfatherly behaviour’ and portrayed him as ‘vindictive and a brute’.[77] Appignanesi argues that such press coverage was influential on public opinion and, ultimately, on the jury, who acquitted the accused.[78] Unlike Elizabeth’s, Louise’s crimes were excusable because of the way in which the character of her victim was portrayed.

One area that garnered significant attention in both the trials and the press was a woman’s sexual agency, which was considered evidence of a deviant nature. This is shown in the trial of Florence Maybrick, whose adulterous affair with her husband’s friend was used as evidence against her in court. Her husband’s numerous infidelities, however, were never mentioned, deemed irrelevant by the Victorian sexual double standard.[79][80] Likewise, during Jane Bowler’s trial, focus was given to her interest in Jon Dunster, a lodger who lived in her house. This interest called into question her ‘loyalty’ to her husband, despite her otherwise appearing to be a ‘dutiful’ wife.[81] The Bartlett case presents sexual licence, or the lack of it, as a motivation for the crime. Edwin, the victim, had married Adelaide on the promise of a largely platonic relationship. According to witnesses, he even went so far as to encourage Adelaide to receive male attention, and Dyson, the co-conspirator, explained how Adelaide had been ‘given’ to him by her husband.[82] During the trial it emerged that Edwin had begun to change his mind about the platonic nature of his marriage.[83] The press seized on this information, with The Times constructing a motive for Adelaide to administer chloroform to her husband: to prevent his sexual advances.[84]

Some women exploited the reputation that poisoning had as ‘the female crime’ to gain power over men. A salient example of this form of intimidation was women’s response to male violence in the period following the 1888 ‘Jack the Ripper’ murders. Men would threaten to ‘whitechapel’ their wives; women, in return, threatened to ‘white powder’ their husbands.[85] A woman’s threat to poison her husband was both equivalent to, and a response to, a man’s threat of physical violence, aggression, or intimidation in a relationship.[86] Sarah Brice, for example, threatened to poison her husband after he was accused of robbery due to the bad company he kept.[87] Although these threats were never carried out, they served as a form of intimidation against men.[88]

Another interesting example occurred in 1856, when Betsy McMullen was accused of murdering her husband in Bolton by putting tartarised antimony in his tea. Her supposed motive was to claim insurance money. An investigation revealed that it was common practice for women to give their drunken husbands antimony, which caused vomiting and extreme physical weakness. Locally this practice was referred to as ‘quietness’.[89] The Times commented on the poisonings, stating that there were three customary evils in Bolton: that women were poisoning their husbands while they were incapacitated and drunk, that they did this without the husbands’ knowledge, and that husbands became ‘wretchedly’ drunk.[90] It is interesting that the writer made the link between the evils of the husband and those of the wife, and seemed to be suggesting that the men and their actions were as culpable as the women.

Despite the salacious press coverage and public hysteria, Martin Wiener notes some public and court sympathy towards ‘wronged women’ in this period, at least toward the end of the nineteenth century.[91] In certain cases a woman’s personal circumstances might be used in her defence or as grounds for reprieve. An example can be seen in the case of Charlotte Harris, who was convicted and sentenced to hang for deliberately poisoning her husband over the course of a week so that she could marry her wealthy lover. However, she was later found to be pregnant.[92] Public interest in the case built, and letters were even sent to Queen Victoria pleading for her release. Her sentence was eventually commuted to transportation to the colonies, and from then on no pregnant women or new mothers were hanged in Britain.[93][94] When Ann Merritt was sentenced to death, even after the jury had recommended mercy to the court on account of her good character, her case generated public outcry.[95] The Times was clear in indicating that this sympathy came from both men and women, and that both campaigned equally, not necessarily for Merritt’s release, but at least for a commutation of her death sentence. These campaigns were successful and her sentence was reduced to incarceration in an asylum.[96]

There was, though, no guarantee of clemency. Mary Ball was hanged after being convicted of her husband’s murder by poisoning. Although the jury recommended mercy, the judge, Lord Coleridge, pressured them into withdrawing their recommendation.[97] Murders committed in the ‘heat of the moment’ were also shown only limited mercy, although many could be presented as acts of self-defence or as retaliation for wrongs done to the accused. As poisoning was predominantly premeditated, this defence was not available to these women.[98]



Cases of female poisoners offer a fascinating and instructive window through which we can view the intersection of class and gender norms in nineteenth-century British society, as well as the growing influence of the popular press on public opinion and legislative change.

The very fact that female murderers existed challenged the ideals of femininity that justified supposed national and middle-class cultural, political, and moral ‘superiority’. The middle-class ‘domestic angel’ was at the heart of Britain’s concept of itself as a stable and constitutional nation at home, authorised to bring such civilisational benefits to the benighted and backward peoples of its empire far away. To ‘explain’ the contradiction, poisoning was configured, by the criminal justice system, the public, and the press, as an essentially female crime. Without the proper and appropriate supervision of a man, women were liable to be overcome by sexually-related emotions, become potentially dangerous to those around them, and threaten the basis of stable society. Such unsupervised women could commit heinous crimes, but ones of insidious and ‘sneaky’ passivity, in line with their ‘natural’ characteristics.

Such ideas also required a very specific definition of what constituted ‘violence’: an act requiring physical strength, committed in the open, rather than an equally harmful act carried out with malice aforethought, supposedly in the dark of night, against an unresisting victim. Through such ‘understandings’ of poisoning, the public, press, and courts tried to maintain norms of gender and class. Recognising that women, middle class or otherwise, were as capable of violence as men, or that men were liable to resort to ‘passive’ crimes such as poisoning, would have challenged the entire classed and gendered edifice around which society was structured.

Also revealed are the cracks in such discourses. The recognition of male violence and abuse towards women, and the obvious contradictions between Victorian class and gender ideals and reality, are exposed in the protests and sympathy expressed towards many women; expressed by the very public, press, and criminal justice system that judged those women by the same standards they critiqued through such sympathy. Even women’s apparent threats to poison abusive men reveal both the oppression of women and their agency in resistance; an agency Victorian society did its utmost to deny. But through this study of female poisoning, we see signs that the centre would, eventually, not hold, and would begin to fracture under the weight of its own contradictions.



Primary Sources

Charge of Wilful Murder, Western Daily Press, 16 October 1869, p. 3.

Chester Guardian and Record, 26 June 1878, p. 8.

Chester Guardian and Record, 27 February 1878, p. 6.

Child Murder and Attempted Suicide, The Times, 28 October 1843, p. 5.

Child Poisoned by its Mother, Manchester Courier and Lancashire General Advertiser, 16 December 1843, p. 5.

Hereford Times, 30 March 1850, p. 6.

Joanna Preparing the Poison for Sir John Cleveland, Reynolds’s Miscellany, 10 July 1868, Front Cover.

London Daily News, 18 March 1850, p. 5.

Marshall, J., Five Cases of Recovery from the Effects of Arsenic (London, 1815).

Midland Circuit – Warwick, Shipping and Mercantile Gazette, 4 April 1850, p. 1.

No Headline, Brighton Herald, 31 March 1849, p. 3.

No Headline, The Irishman, 6 November 1869, p. 10.

No Headline, The Times, 16 December 1882, p. 9.

No Headline, The Times, 19 April 1886, p. 4.

No Headline, The Times, 26 August 1856, p. 6.

Northern Star and Leeds General Advertiser, 23 March 1850, p.?

The Proceedings of the Old Bailey, 1674–1913: William Palmer, May 1856, ref. t18560514-490, accessed 04/02/2019.

The Proceedings of the Old Bailey, 1674–1913: Eliza Fenning, April 1815, ref. t18150405-18, accessed 04/02/2019.

The Proceedings of the Old Bailey, 1674–1913: Adelaide Bartlett, George Dyson, April 1886, ref. t18860405-466, accessed 04/02/2019.

The Proceedings of the Old Bailey, 1674–1913: Ann Merritt, March 1850, ref. t18500304-599, accessed 04/02/2019.

The Proceedings of the Old Bailey, 1674–1913: Jane Bowler, October 1842, ref. t18421024-3062, accessed 04/02/2019.

The Proceedings of the Old Bailey, 1674–1913: John Hutchings, 20 September 1847, ref. t18470920-2217, accessed 11/04/2021.

The Proceedings of the Old Bailey, 1674–1913: Benjamin Alison, 2 April 1838, ref. t18380402-1088, accessed 11/04/2021.

The Mysterious Poisoning Case at Liverpool, The Illustrated Police News, 8 June 1889, Front Cover.

Wrongs Without Redress, Lincolnshire Chronicle, 5 April 1850, p. 7.

Dugdale, G., A True Discourse of the practises of Elizabeth Caldwell (London, 1604).


Secondary Sources

Appignanesi, L., Trials of Passion: Crimes in the Name of Love and Madness (London, 2014).

Arnot, M., ‘The Murder of Thomas Sandles: Meanings of a Mid-Nineteenth-Century Infanticide’, in Mark Jackson (ed.), Infanticide: Historical Perspectives on Child Murder and Concealment, 1550-2000 (Aldershot, 2002), pp. 149–167.

D’Cruze, S., Everyday Violence in Britain, 1850-1950 (Essex, 2000).

Digby, A., ‘Victorian Values and Women in the Private Sphere’, Proceedings of the British Academy, 78 (1990), pp. 195–215.

Hartman, M., Victorian Murderesses: A True History of Thirteen Respectable French & English Women Accused of Unspeakable Crimes (London, 1977).

Higginbotham, A., ‘“Sin of the Age”: Infanticide and Illegitimacy in Victorian London’, (1989), pp. 319–337.

Knelman, J., Twisting in the Wind: The Murderess and the English Press (London, 1998).

Morgan, S., A Victorian Woman’s Place: Public Culture in the Nineteenth Century (London, 2007).

Robb, G., ‘“Circe in Crinoline”: Domestic Poisonings in Victorian England’, Journal of Family History, 22 (1997), pp. 176–190.

Stratman, L., The Secret Poisoner: A Century of Murder (Llandysul, 2016).

Walkowitz, J., City of Dreadful Delight: Narratives of Sexual Danger in Late-Victorian London (Chicago, 1992).

Wiener, M., Men of Blood: Violence, Manliness, and Criminal Justice in Victorian England (Cambridge, 2004).

Williams, L., Wayward Women: Female Offending in Victorian England (Barnsley, 2016).


[1] L. Appignanesi, Trials of Passion: Crimes in the Name of Love and Madness (London, 2014), p. 69.

[2] G. Robb, ‘“Circe in Crinoline”: Domestic Poisonings in Victorian England’, Journal of Family History, 22 (1997), p. 178.

[3] The Proceedings of the Old Bailey, 1674–1913: William Palmer, 14 May 1856, ref. t18560514-490, accessed 04/02/2019.

[4] Further examples of male poisoners can be found when searching through the trial reports of the Old Bailey Online, including Benjamin Alison, who murdered his wife with laudanum in 1838 (The Proceedings of the Old Bailey, 1674–1913: Benjamin Alison, 2 April 1838, ref. t18380402-1088, accessed 11/04/2021), and John Hutchings, who killed his wife with arsenic in 1847 (The Proceedings of the Old Bailey, 1674–1913: John Hutchings, 20 September 1847, ref. t18470920-2217, accessed 11/04/2021).

[5] J. Knelman, Twisting in the Wind: The Murderess and the English Press (London, 1998), pp. 108–109.

[6] M. Wiener, Men of Blood: Violence, Manliness, and Criminal Justice in Victorian England (Cambridge, 2004), p. 132.

[7] Robb, ‘Circe in Crinoline’, p. 180.

[8] The Proceedings of the Old Bailey, 1674–1913: Adelaide Bartlett, George Dyson, 5 April 1886, ref. t18860405-466, accessed 04/02/2019.

[9] A. Digby, ‘Victorian Values and Women in the Private Sphere’, Proceedings of the British Academy, 78 (1990), p. 198.

[10] S. Morgan, A Victorian Woman’s Place: Public Culture in the Nineteenth Century (London, 2007), pp. 1–2.

[12] The Proceedings of the Old Bailey, 1674–1913: Eliza Fenning, April 1815, ref. t18150405-18, accessed 16/03/2021.

[13] J. Marshall, Five Cases of Recovery of the Effects of Arsenic (London, 1815).

[14] Eliza Fenning, The Times, 27 September 1815, p. 4.

[15] The Proceedings of the Old Bailey, 1674–1913: Ann Merritt, March 1850, ref. t18500304-599, accessed 04/02/2019.

[16] Robb, ‘Circe in Crinoline’, p. 185.

[17] Robb, ‘Circe in Crinoline’, p. 185; Knelman, Twisting in the Wind, p. 86.

[18] Robb, ‘Circe in Crinoline’, p. 182.

[19] Appignanesi, Trials of Passion, p. 69.

[20] Old Bailey, Ann Merritt.

[21] The Times, 26 August 1856, p. 6.

[22] Northern Star and Leeds General Advertiser, 23 March 1850.

[23] L. Stratman, The Secret Poisoner: A Century of Murder (Llandysul, 2016), pp. 181–182.

[25] Old Bailey, Adelaide Bartlett.

[26] Robb, ‘Circe in Crinoline’, p. 179.

[27] Stratman, The Secret Poisoner, p. 181.

[28] Stratman, The Secret Poisoner, p. 180.

[29] London Daily News, 18 March 1850, p. 5.

[30] Old Bailey, Ann Merritt.

[31] London Daily News, 18 March 1850, p. 5.

[32] Hereford Times, 30 March 1850, p. 6.

[33] Hereford Times, 30 March 1850, p. 6.

[34] M. Wiener, Men of Blood: Violence, Manliness, and Criminal Justice in Victorian England (Cambridge, 2004), p. 133.

[35] Wiener, Men of Blood, p. 133.

[36] The Times, 19 April 1886, p. 4.

[37] The Times, 19 April 1886, p. 4.

[38] Stratman, The Secret Poisoner, p. 274.

[39] Robb, ‘Circe in Crinoline’, p. 185.

[40] Appignanesi, Trials of Passion, p. 25.

[41] The Times, 16 December 1882, p. 9.

[42] Robb, ‘Circe in Crinoline’, p. 182.

[43] ‘The Mysterious Poisoning Case at Liverpool’, The Illustrated Police News, 8 June 1889, Front Cover.

[44] This is the same James Maybrick, incidentally, who was the supposed writer of a faked diary, published in 1992, identifying him as Jack the Ripper.

[45] ‘The Mysterious Poisoning Case at Liverpool’, The Illustrated Police News, 8 June 1889, Front Cover.

[46] Florence Maybrick was released in 1904, after a review of her case showed that it was unsafe (her husband had been self-prescribing medicines), to significant public sympathy.

[47] ‘Joanna Preparing the Poison for Sir John Cleveland’, Reynolds’s Miscellany, 10 July 1868, Front Cover.

[48] ‘Joanna Preparing the Poison for Sir John Cleveland’, Reynolds’s Miscellany, 10 July 1868, Front Cover.

[49] L. Williams, Wayward Women (Barnsley, 2016), p. 29.

[50] Appignanesi, Trials of Passion, p. 25.

[51] G. Dugdale, A True Discourse of the practises of Elizabeth Caldwell (London, 1604).

[52]The Times, 19 April 1886, p. 4.

[53] Knelman, Twisting in the Wind, p. 86.

[54] Knelman, Twisting in the Wind, p. 86–87.

[55] Knelman, Twisting in the Wind, p. 86.

[56] Robb, ‘Circe in Crinoline’, p. 185.

[57] Hartman, Victorian Murderesses, p. 1.

[58] London Daily News, 18 March 1850, p. 5.

[59] Williams, Wayward Women, p. 29.

[60] Robb, ‘Circe in Crinoline’, p. 184.

[61] Hartman, Victorian Murderesses, p. 1.

[62] Robb, ‘Circe in Crinoline’, p. 182.

[63] Robb, ‘Circe in Crinoline’, p. 178.

[64] Williams, Wayward Women, p. 82.

[65] Hartman, Victorian Murderesses, p. 255.

[66] Robb, ‘Circe in Crinoline’, p. 183.

[67] Knelman, Twisting in the Wind, p. 93.

[68] Robb, ‘Circe in Crinoline’, p. 178.

[69] The Proceedings of the Old Bailey, 1674–1913: Jane Bowler, October 1842, ref. t18421024-3062, accessed 04/02/2019.

[70] Old Bailey, Ann Merritt.

[71] Old Bailey, Ann Merritt.

[72] Old Bailey, Adelaide Bartlett.

[73] Old Bailey, Adelaide Bartlett.

[74] Old Bailey, Adelaide Bartlett.

[75] Wiener, Men of Blood, p. 134.

[76] Knelman, Twisting in the Wind, p. 105.

[77] Appignanesi, Trials of Passion, p. 114.

[78] Appignanesi, Trials of Passion, p. 114.

[79] Robb, ‘Circe in Crinoline’, p. 184.

[80] For wider context on the sexual double standard, the recommended reading is Judith R. Walkowitz’s Prostitution and Victorian Society: Women, Class and the State.

[81] Old Bailey, Jane Bowler.

[82] Old Bailey, Adelaide Bartlett.

[83] Old Bailey, Adelaide Bartlett.

[84] Old Bailey, Adelaide Bartlett.

[85] J. Walkowitz, City of Dreadful Delight (Chicago, 1992), pp. 219–220.

[86] Robb, ‘Circe in Crinoline’, p. 187.

[87] Robb, ‘Circe in Crinoline’, p. 187.

[88] Robb, ‘Circe in Crinoline’, p. 187.

[89] Robb, ‘Circe in Crinoline’, p. 179.

[90] The Times, 26 August 1856, p. 6.

[91] Wiener, Men of Blood, pp. 131–132.

[92] Wiener, Men of Blood, p. 133.

[93] Wiener, Men of Blood, p. 131.

[94] Transportation was seen as a cost-effective and positive form of punishment; it removed convicted criminals from British society and from the country’s prisons or asylums, but in its own right it could be a death sentence.

[95] Old Bailey, Ann Merritt.

[96] ‘Wrongs Without Redress’, Lincolnshire Chronicle, 5 April 1850, p. 7.

[97] Wiener, Men of Blood, p. 131.

[98] Wiener, Men of Blood, p. 131.




The Problem with Prison – From an Academic Who’s Been There

The Problem with Prison – From an Academic Who’s Been There

Gary F. Fisher is an inter-disciplinary teacher and researcher in the liberal arts tradition. He received his doctorate in Classics from the University of Nottingham in 2020 and has published research on a variety of subjects, ranging from the history of education to twentieth-century travel literature. He is currently employed by Lincoln College.

The National Records of Scotland recently released their much-delayed statistics on the number of drug-related deaths that occurred during 2019. They revealed a continued rise to a record high, firmly cementing Scotland’s position as the drug-deaths capital of Europe. This, combined with the fact that Scotland also boasts the largest prison population per capita in Western Europe, has prompted a slew of articles proposing various types of reform. So far, so familiar. ‘Prisons in this country are a mess’ is one of the few sentiments shared on both sides of the political aisle in the UK. It is a sentiment that you can find in both the New Statesman and The Spectator. As the Oxford History of the Prison has noted, it is a sentiment that has persisted for over two centuries, at least since the prison reformer John Howard roundly criticised the condition of eighteenth-century Britain’s criminal justice system in his 1777 The State of the Prisons.

That, sadly, is where the unity ends. While Britons can collectively agree that they are not happy with the state of our prisons, we differ wildly in our proposed solutions. More than that, we cannot even agree on precisely what the problem is. On the one hand, there are those who believe prisons have become too soft, that they are limp-wristed holiday camps in which society’s foulest enter through a revolving door to be waited on hand and foot before being released all too quickly upon an unsuspecting public. On the other hand, there are those who view prisons as brutal and draconian institutions, in which cruel and oppressive restrictions on individuals’ human rights serve to entrench and reinforce criminal behaviour to no palpable social benefit. The former will question what punishment or deterrent is offered by giving potentially violent and dangerous criminals free access to entertainment resources and educational courses, luxuries that law-abiding citizens would only be able to access at personal expense. The latter will appeal to examples of ‘humane incarceration’, typically exhibited in Scandinavian countries, and cite the reduced rates of re-offending associated with a more rehabilitative approach. Representatives of these two broad camps regularly spar on the airwaves, yet these clashes rarely serve to advance the dialogue. One can watch two daytime television debates entitled ‘Are we too soft on prisoners?’, one from ten years ago and one from only last year, and see almost the exact same points and rebuttals being made, with no significant innovation in reasoning in the intervening decade. Neither side will engage with, let alone be convinced by, the arguments of their opponents. In fact, it’s questionable whether they even hear each other.

Courting one of these two sides has become something of a prerequisite for elected office in the UK. Upon his election in 2019, Boris Johnson immediately sought to placate the ‘tough on crime’ crowd by introducing reforms to expand sentence durations and prison numbers. On the other side of the debate, since her election as Scottish First Minister in 2014 Nicola Sturgeon has consistently cultivated favour amongst those who favour a progressive approach to criminal justice by introducing reforms to move the focus of Scottish criminal justice away from punishment and towards rehabilitation. This politicisation of debate has hardly helped matters and, frustrated by the fruitlessness of dialogue on the subject, one criminal defence lawyer recently penned a plea in Scottish Legal News. Iain Smith implored legislators to ‘move beyond tokenistic, meaningless terms like being “hard” or “soft” on crime’, and instead adopt a ‘smart’ approach that focuses on achieving the goal of reducing offending through solutions that occur outside the walls of a prison. As well intentioned as Smith’s plea may be, it seems unlikely to gain traction. 

Highly conscious of this fatalistic public attitude towards the prison system, I first passed through the gates of a category C men’s prison at the beginning of 2020. I should stress that I entered voluntarily, joining the staff as the manager of the prison’s library. It was not long into my career as the librarian that I found the same two broad camps that occupied public dialogue had carried through into the four walls of the prison. There were those staff who pined for the days of ‘proper prison’ and suggested that the modern officer ought to be renamed the ‘custody butler’ for the amount of waiting on their charges that they were increasingly expected to perform. Drawn against them were those who embraced the sector’s increased emphasis on supporting rather than simply securing their residents, and quietly derided those individuals they believed to be insufficiently ‘pro-prisoner’ in the execution of their duties. The incompatibility of these two schools was driven home to me when the library found itself in possession of several copies of a workout guide that could be completed in a cell with no equipment. One of my colleagues was excited by the prospect of distributing this to the men in their cells so that they could continue to exercise while the ongoing pandemic limited their yard time. The other asked ‘why would we want to help them get stronger?’ 

This was more than a simple difference in methods: it was a disagreement as to what the fundamental goal of prisons actually was. To some, their goal was to punish, and anything other than Dickensian horror would represent a betrayal of that goal. To others, their aim was to reform and support, and any interruption to their counselling sessions and wellbeing workshops constituted a frustration of that aim. No wonder these two sides are unreceptive to each other’s arguments: they are starting from different premises. They don’t merely disagree on what is wrong with prisons; they disagree on what a ‘correct’ prison would look like.

The reality is that the punitive and rehabilitative approaches, while not necessarily completely mutually exclusive, are at the very least highly counter-productive. It is somewhere between hard and impossible to adopt one without at the very least undermining the other. During my time in the prison library, I witnessed how these two approaches played against each other, with the prisoners caught in the middle. To illuminate this, consider a few examples. Upon being sentenced, a newly convicted offender will be transported to their holding facility in a secure transport vehicle, colloquially known as a ‘sweatbox’. Inside the sweatbox the offender is crammed into a tiny cubicle smaller than an aeroplane toilet. So small is this space that taller offenders are unable to sit down properly, instead having to half-stand, half-sit for the duration of their indeterminately long journey from court to prison. The transport’s blacked-out windows make the prisoner invisible to the world outside, and he is assigned his number and stripped of all personal effects. With this removal of identity completed, the prisoner is then assigned a counsellor and mental health support worker to help him explore the roots of his criminality and come to terms with the very personality that has just been stripped from him.

A prisoner is encouraged to explore his faith and has regular contact with a large team of professionally-trained chaplains from a variety of religious denominations. But if he indulges too deeply in this faith he will find himself earning the attention of counter-extremism professionals who will monitor him and his reading habits closely and reprimand him or extend his sentence if they deem it appropriate. 

He is given a wide variety of learning opportunities and encouraged to attain qualifications that will hold him in good stead upon his return to the outside world. But he is also barred from accessing the greatest source of learning and information ever created: The World Wide Web. He instead has to make do with the finite materials that the already overstretched library and education services are able to provide and risk a revocation of his learning privileges should a book be returned delayed, damaged or dog-eared.

Of course, those who believe their job is to punish and those who believe their job is to rehabilitate are frustrated with each other. They are not merely pursuing different objectives; they are actively cancelling out each other’s efforts. I am not the first to have noted the incompatibility of these goals. In 2019 James Bourke, the governor of H.M.P. Winchester, was reported in The Daily Telegraph as criticising the culture of ‘fantasy’ surrounding what prisons are able to achieve, and how the present attempt to achieve both punishment and rehabilitation has left both goals unfulfilled. Although conditions in prison may be a horrifying deterrent for the ‘nice, white middle-class’ people who legislate for them, Bourke claims their structure and security means they can act as a ‘place of refuge’ for those from more troubled backgrounds. Meanwhile, the idea that a five-week prison sentence is enough time to successfully rehabilitate a convicted criminal after years, if not decades, of accumulated suffering and habitual criminality is derided as a ‘fantasy’. In attempting both to deter criminal behaviour through harsh punishment and to rehabilitate convicted criminals through reformative support, prisons seem doomed to fail at both goals.

This leads back to the condition of public debate about criminal justice reform. Despite the great amounts of ink spilled and spittle launched debating the conditions of our prisons, we never seem to have actually tackled this most fundamental of issues: what do we actually want our prisons to achieve? Do we want to rehabilitate, or do we want to punish? Neither goal is palpably ridiculous, but we can’t have our cake and eat it too. As it stands, the same arguments will continue to be repeated, drug-related deaths will continue to rise, and the one thing that we’ll all be able to agree on will be that we’re not happy with the state of our prisons.

In the BBC’s recent political thriller Roadkill, the Prime Minister, played by Helen McCrory, informs Hugh Laurie’s Justice Minister that ‘We lock people up. We’re famous for it. We like locking people up. It’s in our character’. For the foreseeable future, that seems unlikely to change.

Download PDF