Katie Barclay, Caritas: Neighbourly Love and the Early Modern Self (2021)

Abstract

In this article, Lucy Morgan reviews Caritas: Neighbourly Love and the Early Modern Self by Katie Barclay, published in hardback and e-book in January 2021. This book explores the Christian concept of caritas as an expression of neighbourly love and how it was experienced by lower-order Scottish people from 1660 to 1830. Barclay uses legal depositions and correspondence to examine the emotional and bodily aspects of caritas, positing that in a loving community, marital relationships were the ideal upon which all other social relationships were based. The author goes on to discuss how children were raised into the beliefs of caritas, what happened to those who rejected caritas’s principles, and how itinerant individuals who lived outside of the normal boundaries of society still had a role within the loving community.

 Keywords: Emotion, early modern, history, community, Christianity

Biography: Lucy Morgan is a first-year PhD student at the University of Sheffield. She is interested in the relationship between manhood and paternity in early modern England.

Katie Barclay’s book, Caritas: Neighbourly Love and the Early Modern Self, is the most recent instalment in Oxford University Press’s ‘Emotions in History’ series. Barclay describes caritas as an aspirational form of Christian neighbourly love, practiced by Catholics and Protestants across early modern Europe, with the purpose of creating loving communities of neighbours who supported and relied on each other. This book focuses on the lives of ‘lower-order’ Scots from 1660 to 1830, examining 2,000 cases from the Scottish Justiciary Court as well as sampling some surviving correspondence, providing a wide temporal and geographic overview of caritas throughout this period.[1] Barclay aims to understand ‘how individuals enculturated [caritas], how they performed it, negotiated it, and occasionally rejected it’ within their everyday lives, through a study of ‘behaviour, gesture, material culture practices, and ritual’.[2] Caritas, literally translating into English as ‘charity’, was a force which went beyond the Ten Commandments’ dictates to not covet your neighbour’s house or wife. Barclay positions caritas as the opposite of lust; instead of selfish, individualistic, sinful love, caritas was a selfless, ethical, moral love which encouraged individuals to connect with others.[3]

Scotland provides an excellent setting for a study of this type; during this period, it was still dominated by the village and small-town structure where close-knit communities determined the social life and economic success of an area. Barclay also notes, however, that geographic mobility of Scottish people was rapidly increasing at this time, resulting in an itinerant population who do not necessarily fit into historians’ perceptions of community living. The legal sources, mostly depositions, used by Barclay are invaluable to a study of emotions—they provide a great deal of first-person information about how lower-order people felt about the behaviour of their neighbours. The lives of lower-order people at this time blurred the boundaries between household and community—they were far more likely to live in single rooms or as multiple families to a house in comparison to their higher-order counterparts—and as a result of that forced closeness, their experience of caritas was physical as well as emotional. Caritas was expressed through virtue and grace, and therefore encompassed neighbourly love through non-action—for example, not starting a fight or not reporting bad behaviour—as well as expressions of the traditionally Catholic concept of “good works” like feeding the poor. Where cynical Protestants might interpret expressions of “good works” as selfish ploys only done to get to heaven, the emphasis on charity within caritas made “good works” acceptable within the Kirk and therefore central to the loving community.[4]

Barclay draws on a rich historiography of neighbourliness and familial love, and is also influenced by more recent feminist philosophical works on the place of love in modern society, such as Adam Phillips and Barbara Taylor’s On Kindness. Her approach rejects models of family nuclearisation, suggesting instead that lower-order Scots retained extended horizontal and vertical familial-neighbourly networks throughout the period of her study. The book bridges a gap in the field of emotional history, providing a link between individual and communal emotional experiences in the past. She employs William Reddy’s concept of the ‘emotional regime’, where certain emotions can be studied as “dominant norms” within a society, alongside Monique Scheer’s concept of ‘emotional embodiment’, where emotions are “practiced” through expression and reciprocation between individuals.[5] This reinforces the idea that loving communities were sustained through both nature and culture. Barclay’s work uses what she describes as the ‘new history of emotions’ to prove that there was a “self” in the early modern period, arguing that caritas existed to relate “the self” to “the other” through individual and shared expressions of neighbourly love.[6]

The first two chapters of Barclay’s book deal with the induction into caritas and the education of children, both at home and through theological teaching. This provides an excellent entry point for readers who are also unfamiliar with the concept of caritas, although these chapters could have benefitted from a further explanation of the etymology of the word. Barclay explains the meaning and translation of caritas in the introduction, but how that specific word was chosen remains uncertain throughout the text. It is most noticeable in these chapters, where Kirk and legal books are quoted extensively but none explicitly mention caritas (although charity and neighbours are mentioned often). It may be that this is due to a translation of these books from Latin into English, but a clarification of whether this term was ever used in an early modern Scottish context would have been interesting, if not beneficial, to the rest of the work. The neighbourly love advocated for by caritas was bodily and intimate, reinforced through spoken language as well as physical closeness in eating, working, and living together. As such, Barclay centres marriage and its ideals of reciprocal love and support as the foundation of caritas. This is firstly evidenced through a study of the communal celebration of marriage post-ceremony, with Barclay noting that wedding rituals retained complexity even after the introduction of banns-reading simplified the marriage sermon. Barclay then moves into an analysis of depositions relating to marriage breakdown, showing how members of the wider community were invited to provide testimony on the state of their neighbours’ marriages, indicating that all members of a loving community had a strong understanding of the role of marriage, including the ‘increase of mankind’ and the prevention of ‘uncleanness’ (sexual immorality).[7] This in turn influenced all other relationships, such as parent-child, employer-employee, and neighbour-neighbour, encouraging both moral policing and the forgiveness of moral infringements in order to maintain a peaceful equilibrium within the loving community. Childhood is used to examine how the uneven distribution of caritas was accepted within society. Privileging certain groups over others was acceptable, as long as any disparities reflected the ordered social hierarchy, such as the prioritisation of the education of sons over daughters. While Barclay notes that Enlightenment ideals about an ‘expectation of love’ for all children, including affectionate treatment, were present in Scotland by the eighteenth century, this is only examined in the case of parents caring for versus neglecting their own children.[8] It would be interesting if further study reversed this perspective and expanded upon the child’s place within the loving community during marriage breakdown or parental death. Similarly, future work could explore whether caritas played a role when adults cared for children other than their own.

The third and fourth chapters examine the reception of immoral actions under caritas: their practice, discovery, and reformation. Barclay navigates the existing historiography on the top-down or bottom-up nature of early modern discipline, drawing on Lyndal Roper’s Holy Household and Martin Ingram’s Carnal Knowledge, pointing out that while the Calvinistic Kirk believed that all people were born with original sin, for many lower-order Scots, irregular marriage (marriage without an official Kirk ceremony, legal throughout the period) and premarital sex were ‘disorderly but not immoral’.[9] By pointing out that many lower-order Scots lived in a single room, likely with one or more shared walls, Barclay suggests that neighbours were probably constantly aware of each other’s actions. These homes were shared and porous, and Barclay implies that there was no conception of public and private space for the people in her study. Tolerance of others therefore became a crucial part of caritas. Intermittent bad behaviour was permissible, but not prolonged threats to the social order. As such, while the keeping and telling of secrets was technically immoral, it also became a ‘central mechanism’ of caritas ‘through which peace and harmony’ was enabled.[10] Evoking Amanda Vickery’s Behind Closed Doors, Barclay describes the social rituals of making or revealing secrets as seen in legal contexts. In cases of violent altercations or elopements, the identification of overheard voices was critically important. Wordless sounds of fighting or crying were equally legally pertinent—the victim and the aggressor could be determined by who was louder or seemed more upset. Similarly, material obstructions to sight or sound were crucial—the act of closing a door was almost always suggestive of wrongdoing. This approach is innovative and could be adopted in other studies of the early modern home and the family, where scholars are often hampered by a lack of evidence around certain practices or behaviours. At many points in this book, the absence of evidence is itself telling; for example, Barclay shows that irregular marriages were often not disclosed to the community until it became necessary for a woman to prove that she was married, usually because she was visibly pregnant.[11] The “husband” might then come forward and claim such a marriage had never happened, resulting in the situation ballooning into a legal case in which the caritas and the reputation of many members of the loving community would be affected.

The strongest chapter of Barclay’s book is the last, titled ‘Living Outside of Love’. Even by the end of the eighteenth century, Scotland remained mostly rural and non-industrial, meaning that vagrancy and itinerant work were still highly visible facets of communal living. Although these people did not live within the established boundaries of a local community, Barclay locates them within caritas, which gave them a place in the loving community, evidencing not only a ‘pragmatic’ approach to community but also a ‘comradery’ between those who lived on the road.[12] This chapter deftly unites the historian’s usually disparate understandings of work, vagrancy and communal living, showing how caritas encouraged communities to permit begging as a form of Christian charity, but also showing its limits and how it was possible to take advantage of the selflessness of caritas by accepting hospitality without giving back to the community. Barclay also discusses the ramifications of banishment as a punishment for extreme infringements of caritas, usually in cases of murder or infanticide, indicating that deliberate exclusion from a community had significant local and personal impact. Although banishment was not usually permanent, it nevertheless indicates that loneliness and social exclusion were seen as the obvious repercussions for communal non-conformity, leading Barclay to conclude that ‘attachment to land and place was critical to the imagining of the social order’.[13]

By giving caritas a position of centrality within early modern understandings of community, Barclay is able to show how it informed a wide range of individual, often contradictory, choices. Barclay’s approach to emotional ethics allows for a more nuanced approach to bodily and emotional experiences which are lost in more prescriptive studies of law or social hierarchy. The methodology of this book could be applied to any Christian sect across Europe in the early modern period as a comparative study, and if Barclay wishes to expand on her own work, an investigation of old age’s place in caritas would be much appreciated. As much of the book deals with education and sexual immorality, children and the unmarried necessarily take the forefront for much of the work. However, Barclay intriguingly mentions that in some parts of Scotland, community elders presided over formal Kirk disciplines alongside the minister. Although Barclay states that these courts became increasingly marginal and ineffective throughout the period, age as an indicator of wisdom and community leadership could enrich further studies of caritas.

Overall, this is a strong work which breathes life into its subject matter, allowing for an examination of complex social and personal issues including domestic abuse and violence, premarital and extramarital sexuality, and the material and immaterial boundaries of society and community. Crucially, Barclay’s finding that almost all immoral and even some criminal actions could be redeemed through caritas provides a new perspective for researchers interested in society, religion, and acceptable behaviour in the early modern period.


 

Bibliography

Barclay, K., Caritas: Neighbourly Love and the Early Modern Self (Oxford, 2021).

Ingram, M., Carnal Knowledge: Regulating Sex in England, 1470–1600 (Cambridge, 2017).

Phillips, A., and Taylor, B., On Kindness (London, 2009).

Reddy, W., The Navigation of Feeling: A Framework for the History of Emotions (Cambridge, 2001).

Roper, L., The Holy Household: Women and Morals in Reformation Augsburg (Oxford, 1989).

Scheer, M., ‘Are emotions a kind of practice (and is that what makes them have a History)? A Bourdieuian approach to understanding emotion’, History and Theory 51/2 (2012), pp. 193–220.

 

Notes

[1] K. Barclay, Caritas: Neighbourly Love and the Early Modern Self (Oxford, 2021), pp. 4, 19–21.

[2] Barclay, Caritas, pp. 14, 24.

[3] Barclay, Caritas, p. 3.

[4] Barclay, Caritas, p. 3.

[5] W. Reddy, The Navigation of Feeling: A Framework for the History of Emotions (Cambridge, 2001); M. Scheer, ‘Are emotions a kind of practice (and is that what makes them have a History)? A Bourdieuian approach to understanding emotion’, History and Theory 51/2 (2012), pp. 193–220.

[6] Barclay, Caritas, p. 3.

[7] Barclay, Caritas, p. 39.

[8] Barclay, Caritas, p. 64.

[9] Barclay, Caritas, p. 95.

[10] Barclay, Caritas, p. 118.

[11] Barclay, Caritas, pp. 95–97.

[12] Barclay, Caritas, p. 150.

[13] Barclay, Caritas, p. 166.

Milanovic, B., Global Inequality: A New Approach for the Age of Globalisation (London, 2016)

Abstract

Milanovic’s Global Inequality: A New Approach for the Age of Globalisation seeks to create a new model for explaining the patterns in the growth and decline of inequality in the world, remodelling the Kuznets hypothesis to take account of the rise of a “global plutocracy” and wage stagnation in the Western middle classes. In doing so, he has some pessimistic forecasts for the immediate future of the middle classes in the West and makes timely warnings against neglecting income and wealth inequality in favour of ‘existential inequality’.

Biography: Sam Tarran is an MA student in History at the University of Birmingham, focusing on political institutions in medieval Europe. His undergraduate degree is from Mansfield College, University of Oxford.

Global Inequality: A New Approach for the Age of Globalisation is not a new book. Despite the fact that it was published relatively recently, it still feels necessary to recount the context in which Milanovic wrote and released it. So much has happened since that it can feel like another world, already part of ‘history’. At the time, inequality both within and between countries felt important. The Occupy movement had ended but was still in the mind’s eye. The world was just beginning to creep out of the Great Recession. The public of many countries, after nearly seven years of economic decline, wage pressure and austerity policies, were feeling restless. Many, after Occupy, public sector cuts and the (initial) electoral success of Syriza in Greece, thought it would be the political left that would make hay. It is only with hindsight that many now claim that it was historically predictable that the populist right would capitalise.

Milanovic published his book in 2016. Since then, we have had Trump, Brexit, the ‘culture wars’ and Covid-19. The arguments over economic inequality now feel traditional and unfashionable. The book is still, therefore, underread, but despite falling foul of current Twitter trends, it somehow feels timely. Overall, Milanovic’s analysis and argument – well-written and engaging as they are – would benefit from a deeper survey of pre-modern historical periods and the world outside the West. He also discusses potential solutions that are politically and socially impossible. Where Milanovic is most interesting is in his warning against overly focusing on ‘existential inequality’, defined as the unequal “legal treatment of different groups” based on factors such as race, disability, sexual preference and gender (p. 226). This, he argues, is unhelpful as it feeds identity politics, splitting the public into communitarian interest groups who, once their own campaigns and needs are sated, will not aid others. This inhibits the collective action needed to create real change and blocks the discussion of the “harder” questions of how to solve wealth and income inequality both domestically and globally. Reducing income inequality, he points out, also helps with gender and race equality, although he does not offer any substantive examples. If Milanovic’s point was relevant then, his pleas are urgent now. We have, somehow, forgotten about income and wealth inequality.

Milanovic seeks to analyse inequality on a global level and ‘not as a national phenomenon only, as had been done for the past century’ (p. 2). In this, he alludes to the works of Karl Marx and Adam Smith, a previous generation of political economists who sought explanatory power on a planetary scale. There were, however, other works published post-2010 on this topic. Milanovic himself cites Thomas Piketty’s Capital in the Twenty-First Century, first published in French in 2013. There was also Jason Hickel’s The Divide: a Brief Guide to Global Inequality and Its Solutions, published in 2017. The Great Recession had also spurred new academic interest in the subject from a historical perspective. Histories of Global Inequality: New Perspectives is a collection of works (edited by Steven Jensen and Christian Christiansen) which seeks to ‘historicise Piketty’. Piketty’s work is reformist, seeking to save capitalism from within by recommending measures such as a global redistribution of wealth, effectively scaling up the tax-and-spend policies of the West. These suggestions were enthusiastically met by some at the time, but it is now difficult to envision their success. Witness, for example, the new ‘progressive’ President of the United States threatening sanctions on Britain and the EU for daring to implement new taxes on America’s digital giants. Milanovic is similarly bold in his recommendations, but understandably less optimistic about their success. Despite the many graphs, statistics and discussion of the Gini coefficient, the work remains very accessible. The language is largely in plain English and the analysis, even for a layman such as this reader, is relatively easy to follow.

From the outset, he sets out themes that, post-Brexit and post-Trump, we recognise instantly: the rise of a global middle class based primarily in Asia, the income stagnation of the middle to lower-middle classes in the West, falling social mobility, and the ‘emergence of a global plutocracy’ (p. 3). Milanovic re-posits the Kuznets hypothesis – that industrialisation leads to higher and later reduced income inequality – as “waves” to take account of our era of secular stagnation, where periods of intense technological innovation increase inequality before political pressure and educational attainment reduce it again. The technological changes post-1980 have facilitated the rise of finance and tech billionaires who, through the newfound portability of their skills and capital, have become disconnected from their national polities. Milanovic is cold in his analysis of this group until the final chapter, when he only just holds back from condemning the hypocrisy of a global top 10% that produces 50% of world carbon emissions while flying in private jets to preach about climate change.

Like many economists and economic historians, Milanovic is most comfortable when discussing his model in the context of periods and countries from which we have plenty of data. His analysis therefore skews modern and towards the US and UK. He is on less firm footing when he moves into the early modern, medieval and non-Western worlds. At one point, he freely uses the term ‘capitalist’ in relation to cities in the late Middle Ages. He would have done better to discuss income inequality and ‘surpluses’ in the High Middle Ages, when monasteries thrived, guilds were formed, trade increased, fens were drained, and a Latin aristocracy expanded its influence into the former European periphery.[1] Ironically, it was in the following period, in the aftermath of the Black Death, that political agitation among the middling sort began, just as the polities of Europe were starting to expand their bureaucratic reach and depth.[2] A discussion of these periods, and a reconciliation with his model of this apparent contradiction – one might expect agitation to come during the period of greater growth and inequality – would have been illuminating. Instead, Milanovic largely skips straight from the fall of Rome to the ‘commercial revolution’ of sixteenth-century Italy. Skipping the medieval period, like Eurocentrism, is a common problem in attempts at global history, particularly those focused on inequality. The sources, unfortunately, are inconvenient. Medieval records can be patchy and incomplete. Records from outside the West present problems of language and cultural interpretation. A collaborative approach is necessary to bridge these gaps, but pressures of time and resources can make this difficult.

From inequality within countries, Milanovic moves to inequality between countries. Here, he does not seek to explain why the West is richer than the East, which has been done elsewhere. Instead, he predicts that the fast-growing economies of the Global South, namely in Asia, will soon begin to move through the Kuznets waves in much the same way as the West. In doing so, they will face similar challenges to the West in the nineteenth and twentieth centuries as the middle classes seek enfranchisement. Further, the Citizenship Premium that we in the West currently enjoy for being born in wealthier countries is slowly being eroded as location becomes a less significant factor in one’s income potential.

So, what are Milanovic’s predictions for the future? He wisely cautions against specific forecasts in Chapter 4, pointing out that economists often extrapolate existing trends into the future without taking account of the unexpected, referencing Taleb’s ‘black swans’: large-scale, often unpredicted events with far-reaching consequences, such as tsunamis or political assassinations. Few writers of the 1970s, for example, predicted the Reagan-Thatcher boom and the collapse of communism. Indeed, some even predicted a convergence between the two, with the Soviet bloc liberalising slightly and the West becoming more statist. He therefore resists specific predictions but does venture some ideas. His belief that the Great Convergence will continue, no matter what happens to China’s economic growth, is hardly controversial. He shows, however, that the Convergence is largely an Asian phenomenon, which means global inequality is likely to continue to grow unless Africa also begins to converge.

He is also pessimistic about the future of Western middle classes, who he argues will continue to be squeezed by globalisation and automation; more debatable is his assertion that education has, more or less, hit its quantitative and qualitative limits. As capital and, increasingly, labour become more difficult to tax, his solution is more punitive inheritance taxes to reduce the power of “endowments” and tax policies that encourage share ownership and financial asset ownership among the lower and middle classes. He compares the systems of Taiwan and Canada to illustrate how this approach can be as effective as a traditional tax-and-spend system. In the next breath, however, he says this is pointless without increased access to quality education, contradicting his own earlier comments on education.

This, unfortunately, is the pattern of the book: some good economic analysis, followed by political assertions that are often less well supported by evidence, then proposed solutions that are politically unpalatable. Another example is his proposal to resolve the conflict between the high numbers of migrants and the typical internal resistance to immigration: implement discriminatory policies against migrants which make clear they do not share the privileges of ‘native’ citizens. As Milanovic himself intimates, this is unlikely to be accepted by anyone, except, ironically, the migrants themselves. In that sense, Milanovic falls into the classic economist (perhaps, even, academic) trap of proposing ‘evidence-based policy’ that ignores specific social and political contexts. His predictions for Asian societies would also have been helped by a consideration of the local and non-economic forces impacting on politics and demographics in the region. To some extent, we have the benefit of hindsight here. China has continued to grow apace, but its burgeoning middle class seems as compliant and pacified as ever thanks to its continued prosperity and Xi’s increasingly aggressive nationalism. This may prevent China moving up its own Kuznets wave any time soon.

His model and his analysis would have greater power and more nuance if they were supplemented by extended historical discussion of the non-Western and non-modern worlds and of the powerful forces that exist outside economics. Nevertheless, Milanovic’s work is comprehensible, sweeping, and at times gripping. Its structure is logical and helpful. He manages to explain complex economic phenomena for a general audience without being patronising. Further, its message – that inequality will continue to exist despite globalisation, and that we ignore it at our peril – feels incredibly relevant as it struggles to cut through the noise of contemporary political fashions.


 

Bibliography

Bartlett, R., The Making of Europe: Conquest, Colonisation and Cultural Change, 950-1350 (London, 1994)

Christiansen, C.O., & Jensen, S., Histories of Global Inequality: New Perspectives (London, 2019)

Hickel, J., The Divide: a Brief Guide to Global Inequality and Its Solutions (London, 2017)

Piketty, T., Capital in the Twenty-First Century (Paris, 2013)

Watts, J., The Making of Polities: Europe, 1300-1500 (Cambridge, 2009)

 

Notes

[1] See, for example, R. Bartlett, The Making of Europe: Conquest, Colonisation and Cultural Change, 950-1350 (London, 1994)

[2] J. Watts, The Making of Polities: Europe, 1300-1500 (Cambridge, 2009)

Jury Nullification: The Short History of a Little Understood Power

Richard Marshall is a PhD student in History at the University of Plymouth. His doctoral research explores the place of trial by jury in the politics, culture and society of late eighteenth-century English radicalism. He is supervised by Dr James Gregory and Dr Claire Fitzpatrick.

Keywords: Juries, Nullification, Legal History, Popular Justice, Repression, Liberties, Rights

Jury nullification (or Jury Equity) is perhaps the greatest safeguard against unjust laws or excessive punishment to exist in Britain. It is the practice whereby a jury delivers a verdict contrary to the evidence, the law and judicial directions by acquitting a defendant whom they believe, beyond reasonable doubt, to be guilty by the letter of the law, but who on grounds of conscience they think should not be punished.

This little understood power has existed since a 1670 ruling declaring that no juror could be punished for a ‘wrong’ verdict, which when coupled with the double jeopardy rule leaves the door ajar for juries to essentially override laws.

For centuries it was considered a critical element of the constitution, a check against tyrannical government and unfair laws, but above all a mechanism that permitted the people to directly influence, comment upon and force reform of laws they deemed morally suspect. What’s more, it was widely understood and discussed as a normal part of the legal process. The idea that a jury of ‘freeborn Englishmen’ could not deviate from a judge’s charge or the letter of the law was almost universally rejected. Indeed, it was not uncommon for jurors to be encouraged to ignore or reject judicial guidance and actively consider not just the evidence but the interests of their community, religion, nation, and constitution.[1] When an English juror entered the jury box in the eighteenth century, he was expected to act for his country, and discretion was the norm. Granted, not everyone accepted the idea, but none denied the existence of nullification or the rights and powers of jurors as the ultimate arbiters of justice. Most people today, though, will never have heard of this power. For too many in the modern legal fraternity nullification is a dirty word, with myriad objections raised against it.[2] Most notably, they argue it is ineffective at procuring meaningful change and encourages prejudice among jurors.

I was brought to think about the history of this murky power by the recently introduced Police, Crime, Sentencing and Courts Bill, a politically motivated attempt to attack the freedoms of protest and speech and the rights of the traveller community all at once. Regarding protest, it seeks to lower the legal test required for police to act against otherwise legitimate protest, with its most egregious element being the new offence of ‘public nuisance’. This will carry a maximum ten-year prison term and is defined as causing ‘serious harm’ to the public through ‘serious annoyance, serious inconvenience or serious loss of amenity’. Meanwhile, the Bill also proposes to criminalise trespass, an effort to attack the freedoms of the traveller community that is opposed by the majority of policing bodies, including both the National Police Chiefs’ Council and the Association of Police and Crime Commissioners.

The backlash has been immense. Politicians, lawyers, civil rights organisations, charities and many others have spoken out against the Bill, as have many thousands in mass demonstrations across the nation. It would not be unfair to suggest that a significant minority, perhaps more, oppose the provisions of this Bill as infringements on liberty. This was the very sort of legislation radicals of the past would have hoped and indeed encouraged jurors to nullify.

But does this mean we should today? For me, the answer is an unquestionable yes. Nullification exists for just these circumstances, to frustrate legislation passed or enforced spuriously or for political reasons. And the past provides plentiful justifications and precedents for its use.

In the first instance, our national history is littered with cases where juries, in spite of evidence and judicial direction, acquitted defendants to the benefit of society. Arguably the most well-known is that of Clive Ponting, the British civil servant who leaked documents relating to the sinking of the General Belgrano during the Falklands Conflict. His trial for allegedly breaching the Official Secrets Act was a cause célèbre and his unexpected acquittal, contrary to the judge’s direction, caused enough consternation for the Conservative government to remove the ‘Public Interest Defence’ Ponting relied on from Official Secrets legislation in 1989.

But the Ponting case is an outlier. For one, the acquittal had a basis in law thanks to the Public Interest Defence. A true act of nullification occurs where the law does not provide an adequate escape route and its provisions are thus arbitrary. The new Policing Bill promises to be such a law. In justifying the use of nullification against such an Act, one can point to myriad examples where jurors have acquitted in the face of such legislation. Take, for instance, the case which established the practice: the trial of two Quakers, William Penn and William Mead, in 1670.

The pair were charged under the Conventicle Act, which restricted non-Anglicans to meetings of no more than five people. Mead and Penn were arrested while preaching to several hundred.

Despite overwhelming evidence and threats from the judge, the jury refused to convict, believing the law to be morally wrong. The subsequent legal ruling that no juror could be punished merely for their decision still reverberates today and is even commemorated at the Old Bailey. This commemoration strikes me as almost a burlesque, given the staunch opposition of the courts and the Crown to jurors being made aware of this power and its ramifications.


Bibliography

Primary Sources

Hawles, J., The Englishman’s Right: A Dialogue between a Barrister at Law and a Juryman &c. (London, 1680).

 

Secondary Sources

Darbyshire, P., ‘The Lamp that Shows that Freedom Lives—Is It Worth the Candle?’ Criminal Law Review (1991), pp. 740–752.

Handler, P., ‘Forgery and the End of the “Bloody Code” in Early Nineteenth-Century England’, Historical Journal, 48/3 (2005), pp. 683–702.

Harling, P., ‘The Law of Libel and the Limits of Repression, 1790–1832’, Historical Journal, 44/1 (2001), pp. 107–134.

McGowen, R., ‘From Pillory to Gallows: The Punishment of Forgery in the Age of the Financial Revolution’, Past and Present, 165 (1999), pp. 107–140.

Wharam, A., The Treason Trials, 1794 (Leicester, 1992).

Notes

[1] Guides for jurors often encouraged them to remain independent of the judge and emphasised that they represented their fellow citizens in court. A powerful and frequently reprinted example was J. Hawles, The Englishman’s Right: A Dialogue between a Barrister at Law and a Juryman &c. (London, 1680).

[2] See P. Darbyshire, ‘The Lamp that Shows that Freedom Lives—Is It Worth the Candle?’ Criminal Law Review (1991), pp. 740–752.

The Northern Question: A History of a Divided Country by Tom Hazeldine

Abstract

In this review of Tom Hazeldine’s The Northern Question, David Civil explores how Britain’s geographic cultural constructions and regional inequalities have impacted on the nation’s politics from the Industrial Revolution to Brexit. The Northern Question: A History of a Divided Country, Tom Hazeldine, London, Verso Books, 2020, ISBN: 9781786634061; 290pp.; Price: £20.00.

Biography: David Civil is the MHR’s Spotlight Editor and completed his PhD in History on the concept of meritocracy in 2020. 

A Northern parliamentary seat flipping in a by-election from Labour to Conservative for the first time in its history may immediately bring to mind recent, post-Brexit political developments. We might instinctively reach for the language of ‘left-behind’ voters or highlight the significance of ‘Red Wall’ constituencies to explain this unprecedented electoral shift. And yet this by-election in Workington took place in 1976. It saw the return of 35-year-old Richard Page for a Conservative Party under the new leadership of Margaret Thatcher. In 1979 the seat returned to Labour, where it remained until the 2019 General Election, when it flipped to the Conservatives as part of their assault on the so-called ‘Red Wall’. The notion of ‘Workington Man’ appeared to capture the impact of Brexit in ripping apart old loyalties and traditional voting patterns. Yet as Page’s triumph in 1976 demonstrates, these developments have a long history. The Editor of the New Left Review, Tom Hazeldine, sets out to explore this history in his recent book, The Northern Question: A History of a Divided Country. Post-Brexit Northern England, Hazeldine claims, has propelled itself to the ‘foreground of national attention for the first time since the socio-economic crisis of the Thatcher years’ (p. xii). By exploring the interaction between nation-state, social class and geographical region, The Northern Question hopes to ‘let some light in through several windows’ to illuminate a debate whose importance is only going to grow in the next decade and beyond (p. xiv).

Hazeldine’s historical survey begins in Chapter 2, framing the North as the ‘Badlands’ which for the most part refused to adopt the ‘intense feudalism of the Midlands and the South’ (p. 31). If the burdens of serfdom were ‘generally lighter’ in the North, ‘rural benightedness – the absence of towns and literacy – was correspondingly deeper’ (p. 31). For Hazeldine, it is clear that evidence of Northern disadvantage vis-à-vis its Southern neighbour is visible even during the early modern period. Even under pre-modern conditions, ‘not even a member of the royal line could scrabble together quite enough strength in the North to reign in defiance of Establishment opinion’ (p. 33). In a theme that runs throughout The Northern Question, ‘the sine qua non of governing England was to have its southern heartland on side’ (p. 33). This whistle-stop tour ends with the historical event or process which gave the North its defining characteristics in the national imagination: the Industrial Revolution. Chapter 3 traces how the forces of industrialism and revolt intersected throughout the nineteenth century as the exponential growth of the factory system was accompanied by Luddites, Chartists, and democratic reformers. Despite these upheavals, ‘the pre-industrial mould of British politics remained unbroken, with fateful consequences for the North once its commercial fortunes began to slide’ (p. 70). Drawing on Elizabeth Gaskell’s 1854 novel North and South, Hazeldine claims that ‘even in the land of long chimneys’, business survival still hinged on the ‘attitude taken by the traditional landowning and monied interests of the South’ (p. 70). Chapter 4, focusing on the ‘capital-goods phase of the manufacturing revolution’, expands on these themes to demonstrate how despite the mirage of Northern prosperity, ‘Liverpool and Manchester wealth holders were second only to Londoners in their readiness to put money into foreign undertakings’. It is at this point, at the start of the twentieth century, that Hazeldine’s ‘declinist’ thesis begins to emerge in full force: ‘For the North it was a case of so far and no further: from now on, it would have to sink or swim with its nineteenth-century coal mines, textile mills, steelworks and shipyards’ (pp. 72–73).

The repercussions of the First World War loosened Lancashire’s grip on British India, the biggest outlet for its cotton goods. In Hazeldine’s words, ‘outpaced by late-start competitors in Europe, America and the Far East, dependent on imperial privileges approaching expiry, the world’s first industrial region was about to experience the ground giving way beneath it’ (pp. 89–90). Chapter 5, entitled ‘Dereliction’, gives an indication of Hazeldine’s assessment of the British state’s response. Despite the parliamentary breakthrough of the Labour Party, the two wings of the labour movement took turns to court disaster through the timidity of their leaders in the face of the General Strike and the Great Depression. By 1934 unemployment had fallen back to single digits in London and the South East but remained stuck above twenty percent in the North and Scotland (p. 93, pp. 105–06). It would take the full mobilisation of national resources over the course of the Second World War to reduce the concentration of economic activity in the South East and to achieve ‘the rationalisation of Outer Britain that peacetime politics at Westminster had singularly failed to deliver’ (p. 113). Chapter 6 analyses how this rationalisation was squandered by both Labour and Conservative governments determined to keep the pound strong and British imperialism intact. Regional policy, as proclaimed by post-war governments of both stripes, came to involve ‘merely a modest stimulus to private-sector investment and job creation, aimed at indemnifying the ruling parties against accusations of neglect, should slump conditions return to the industrial towns of northern England, Scotland and Wales’ (p. 116). Instead of overhauling the Northern manufacturing base ‘before the resumption of commercial competition from continental Europe and Japan’, Hazeldine claims, Attlee was ploughing ‘limitless amounts of money’ into a ‘clandestine nuclear-weapons programme’ (pp. 120, 115). This neglect, he argues, continued into the 1960s. Harold Wilson’s modernisation programme, which helped the Labour Party win the 1964 General Election, was sacrificed, Hazeldine claims, on the altar of ‘slavish monetary orthodoxy’ (p. 134). Wilson’s ‘prolonged exposure to the official mind acculturated him to the innermost impulses of the British state, instilling a reverence for the monarchy, for centralisation and for the pound sterling’ (p. 132). In Hazeldine’s terms, throughout the twentieth century the representatives of labour were just as ‘complacent as those of capital’ (p. 122).

In relative terms, then, the North did not ‘tread water’ during the ‘golden age of capitalism’, and during the 1970s and 1980s, Hazeldine claims, ‘it would be forced below the waterline, never to re-emerge’ (p. 136). This intensification of Northern decline is explored in Chapters 7 and 8. As growth slowed and the seemingly existential crisis of stagflation began to grip the British economy in the 1970s, mass redundancies developed on top of existing regional disparities. Unemployment provoked a profound response from the industrial working class, and the 1974 ‘Who Governs Britain?’ General Election, which brought down the Conservative government of Ted Heath, represented a ‘remarkable victory for the industrial ranks of Outer Britain’ (p. 144). This working-class momentum was undone in Hazeldine’s account, however, by conservative Labour governments led consecutively by Harold Wilson and Jim Callaghan. An alternative political economy advanced by Tony Benn, the Secretary of State for Industry between 1974 and 1975, and embodied in the Kirkby Manufacturing and Engineering cooperative, was ostracised in favour of ‘sound money and an open market economy’ (pp. 145–46). For Hazeldine, the Labour government’s cowardice in the face of the forces of capital reached its apogee with the IMF crisis of 1976. This represented the moment that the financial crisis of the post-war state was ‘resolved on City terms at the cost of a lingering recession in Outer Britain’ (p. 150). In these chapters Hazeldine deploys his central argument with the greatest persuasiveness. By highlighting how social democrats like Callaghan and Denis Healey laid the groundwork for neoliberalism, The Northern Question argues there was something inherent in the British state which meant that at regular historical intervals its default was to side with the forces of capital at the expense of the interests of labour. In this sense, by using the North as a lens, Hazeldine portrays Thatcherism more as an intensification or consolidation of pre-existing patterns of political economy than as a revolutionary ideology. In Hazeldine’s account Thatcherism governed from the south and ‘tightened the austerity introduced by Callaghan’s Labour’ (p. 159). Rather than portraying the period as a ‘marketplace of ideas’ where alternative visions of Britain’s political economy competed for ascendancy, Hazeldine characterises the 1970s as one long march to free-market neoliberalism.[1]

While this is a contestable interpretation of the ideological shifts of the 1970s and 1980s, the fact that neoliberalism consolidated its grip over British politics in the 1990s and 2000s is less open to question. The final two chapters of The Northern Question tell the story of this consolidation and bring the narrative up to the contemporary moment, where neoliberal ascendancy appears to be assailed on all sides. Hazeldine’s lens becomes a little blurred when exploring New Labour: on the one hand, he repeats familiar tropes that the Blairite Labour Party represented Thatcher’s ‘greatest achievement’ (p. 177); on the other, he highlights how deindustrialisation in the North was ‘buffered by the stimulants’ of higher public spending, which increased by over six percent a year in real terms between 1999 and 2006 (p. 180). These stimulants would quickly be withdrawn, however, following the 2008 financial crisis. The North East lost fourteen percent of its public sector workforce under the coalition while the South East shed less than three percent. Under David Cameron’s prime ministership median household wealth in London increased by fourteen percent while it fell by eight percent in Yorkshire and the Humber. For Hazeldine, Brexit and the 2017 General Election were consequences of these regional disparities. Three parallel voter insurgencies left their mark on the post-2008 distemper:

 

a Brexit revolt that originated in opposition to Maastricht among City mavericks and Tory voters in the market-town South, but then pivoted to attract northern working-class communities left behind by the New Labour boom and reeling from Conservative-Lib Dem austerity; the left-inclined 2014 Yes campaign for Scottish independence [. . .] and a Corbynist upsurge pitting a millennial precariat against a still largely Blairite Parliamentary Labour Party (p. 197).

 

While Corbyn managed to hold together an electoral coalition from amongst these various insurgencies in 2017—making the Labour Party’s first Commons gains in a General Election in twenty years—this was achieved by capturing major Northern cities and through obfuscation on Brexit. In the aftermath of the election the Brexit movement and the Corbynite Labour Party increasingly faced in opposite directions. The final chapter of The Northern Question, entitled ‘Taking a Stand’, explores the consequences of this divergence. Considering his damning indictment of Harold Wilson’s leadership of the Labour Party, it is surprising to see Hazeldine claim that Corbyn should have stuck to the ‘Wilson model’ of ‘personal neutrality’ over Brexit. In the end Corbyn and John McDonnell ‘pinned their economic programme to a Brexit stance indistinguishable from that of the London establishment against which a large part of the Leave vote had been directed’ (pp. 212–13). In terms of seats, the General Election of 2019 turned on a Labour collapse in the deindustrialised small towns and former pit villages of the Midlands and the North. In these ‘red wall’ regions, 750,000 Leave voters switched to the Conservatives while hundreds of thousands more stayed at home (p. 207). If it is straightforward to demonstrate that Brexit cut across traditional political loyalties and divisions, explaining why remains one of the most contested issues in contemporary Britain. Too many commentators rush to proclaim a loosely conceptualised cultural politics as the key dividing line, yet it is clearly more complicated than this. For Hazeldine, ‘Brexit handed a political weapon to a class and region that had been denied one by Labourist hegemony for so long’ (p. 220). Yet as Will Davies has recently argued, much of what is labelled ‘populism’ is ‘really a longing for some version of the state that predated neoliberal reforms’. The slogan ‘take back control’ appealed to older Brexit voters precisely because they could remember a time when the state was ‘in command of its own economy and able to deliver social security to its own citizens’, a product of the very ‘labourist hegemony’ that Hazeldine deplores.[2] Throughout The Northern Question post-war social democracy is attacked for its repeated capitulations to the forces of capital and characterised as an ideological formation that includes everyone from Denis Healey to, staggeringly, Dominic Cummings, who, Hazeldine claims, ‘wrapped the official Vote Leave campaign in social-democratic colours’ (p. 203).

While it is clearly possible to attack post-war social democracy for its failure to adequately subdue the forces of capital, the reduction of this changing, nuanced and electorally successful ideological formation to a handmaiden of capitalist power speaks to a broader problem with The Northern Question. Namely, that Hazeldine lacks a compelling explanation for why the North has been so neglected by successive governments or rulers over at least two centuries beyond the fact that the British state inherently sided with the forces of capital (a byword for ‘the South’) over those of labour (a byword for ‘the North’). Now there is of course a large amount of truth in this reasoning. British institutions proved uniquely adept at preserving landed and financial interests and co-opting dissenters to stifle unrest. While the interests of the North might have been consistently marginalised, however, they were marginalised for different reasons over the course of the nineteenth and twentieth centuries. At its worst Hazeldine’s approach simply lumps social democrats and neoliberals together into a large amalgam called ‘the establishment’. By adopting Hazeldine’s perspective we learn nothing about the consequences of the profound ideological and conceptual shifts of modern British history; about the impact of the welfare state or the rise of the free market. This largely stems from Hazeldine’s ‘declinism’.[3] The Northern Question is heavily indebted to the work of the political theorist Tom Nairn. Alongside fellow New Left intellectual Perry Anderson, Nairn advanced the thesis that the British polity was uniquely conditioned by the absence of a truly bourgeois revolutionary moment.[4] Instead the aristocracy absorbed the emergent bourgeoisie from the early nineteenth century, preserving the ancien régime and dooming any attempt at modernisation to failure.[5] There is no acknowledgement in The Northern Question, however, of the fact that this thesis has been powerfully challenged in the intervening decades.[6] Declinist critiques are bound up with a variety of cultural assumptions about the nature of work, Britain’s place in the world and its transition to a welfare state. The latter, which played an important role in the development of Britain’s service economy, is barely mentioned by Hazeldine. In many ways, as the likes of Jim Tomlinson have argued, New Left critiques like The Northern Question end up mirroring a Thatcherite narrative of modern British history, albeit for largely different reasons.[7]

At the start of his account, Hazeldine informs the reader that to understand the North we must ‘delve into the politics of Westminster and Whitehall, observing these proceedings from a northern perspective, to see what English history looks like when stood upon its head’ (p. 23). Later on, he is critical of recent ‘party-political musings’ which, while treating the North as a significant electoral player, tend to project southern assumptions onto the region. These musings, he claims, ‘are not the same as asking what the region itself wants’ (p. 214). Yet voices from the region are exactly what The Northern Question lacks.

The most engaging sections of the book are those where Hazeldine explores the cultural products of the region, from the ‘Angry Young Men’ of the late 1950s to the ‘feel-good musicals of the New Labour boom’ embodied in The Full Monty (pp. 220, 18). By the end of The Northern Question, it is difficult to know what the North is beyond a byword for industrialism and manufacturing or to identify what makes it distinct. This partly stems from Hazeldine’s materialist approach and partly from the reductionism of the North-South binary itself as a lens. A much more fruitful approach would be to explore how this division of Britain became ingrained in the national imagination, compressing other regional or local identities. The Midlands, for example, is rarely the subject of such historical scrutiny. Where the Midlands is cited, it is usually grafted onto the North or South and rarely treated as a region in its own right. Hazeldine falls prey to a similar trap. ‘On a statistical basis’, he argues, ‘much more of the Midlands belongs on the northern side of the regional divide’ (p. 13). Hazeldine acknowledges that cities like Birmingham followed their own ‘distinct trajectory’ and have a ‘different story to tell’ than that explored in The Northern Question (p. 14). Yet he is forced to acknowledge where significant movements and processes spill over from ill-defined, contested and largely imagined regional boundaries. While it is undoubtedly true to say that the North-South divide continues to dominate discussions of regional inequality in modern Britain, there is a pressing need to explore the Midlands as a space of identity formation and political upheaval. The recently launched ‘Midlands Identities Project’, a one-day interdisciplinary conference supported by both The MHR and the Institute of Historical Research, could therefore not be timelier.

While the North-South divide might be an unrepresentative cultural construction, it undoubtedly retains a considerable grip over the national political, economic and cultural imagination. Hazeldine concludes The Northern Question with a powerful fact: the contemporary North accounts for one-quarter of the UK’s population and parliamentary constituencies as well as one-fifth of its GDP. While these economic indicators may be trending downwards, they do so from a great height. In this sense, Hazeldine argues, ‘the problem of the North isn’t going away anytime soon’ (p. 222). The North may benefit ‘from open bidding between the major parties for support’ but it is far from clear that it will settle for what Hazeldine identifies as the ‘post-war norm’: the ‘dispensing of palliatives to take the edge off structural economic change’ (p. 214). If his book fails to grapple with the contradictions and contestations inherent in Britain’s image of the North today, as well as amongst those who live there and lay claim to a Northern identity, Hazeldine’s historical survey is a timely reminder that the contours and constraints of contemporary politics are always liable to change, often in unexpected and surprising ways.


 

Bibliography

Anderson, P., ‘Origins of the Present Crisis’, New Left Review, 23 (1963), pp. 26–53.

Blackburn, D., ‘Penguin Books and the Marketplace for Ideas’, in L. Black, H. Pemberton & P. Thane (Eds.), Reassessing 1970s Britain (Manchester, 2013), pp. 224–51.

Davies, W., This is Not Normal: The Collapse of Liberal Britain (London, 2020). 

Edgerton, D., Warfare State: Britain, 1920–1970 (Cambridge, 2006). 

English, R., & Kenny, M., ‘Public Intellectuals and the Question of British Decline’, British Journal of Politics and International Relations, 3/3 (2001), pp. 259–83.

Hall, P.A., ‘Social Learning and the State: The Case of Economic Policymaking in Britain’, Comparative Politics, 5/3, (1993), pp. 275–96. 

Nairn, T., ‘The British Political Elite’, New Left Review, 23 (1963), pp. 19–25.

Tomlinson, J., ‘Thrice Denied: “Declinism” as a Recurrent Theme in British History in the Long Twentieth Century’, Twentieth Century British History, 20/2 (2009), pp. 227–51.

Notes

[1] For the 1970s as a ‘marketplace of ideas’, see: P.A. Hall, ‘Social Learning and the State: The Case of Economic Policymaking in Britain’, Comparative Politics, 5/3, (1993), pp. 275–96; D. Blackburn, ‘Penguin Books and the Marketplace for Ideas’, in L. Black, H. Pemberton & P. Thane (Eds.), Reassessing 1970s Britain (Manchester, 2013), pp. 224–51.

[2] W. Davies, This is Not Normal: The Collapse of Liberal Britain (London, 2020), p. 16.

[3] For the notion of ‘declinism’ see: J. Tomlinson, ‘Thrice Denied: “Declinism” as a Recurrent Theme in British History in the Long Twentieth Century’, Twentieth Century British History, 20/2 (2009), pp. 227–51.

[4] See for example: T. Nairn, ‘The British Political Elite’, New Left Review, 23 (1963), pp. 19–25; P. Anderson, ‘Origins of the Present Crisis’, New Left Review, 23 (1963), pp. 26–53.

[5] For a good overview of this account, see: R. English & M. Kenny, ‘Public Intellectuals and the Question of British Decline’, British Journal of Politics and International Relations, 3/3 (2001), pp. 259–83.

[6] See for example: D. Edgerton, Warfare State: Britain, 1920–1970 (Cambridge, 2006).

[7] Tomlinson, ‘Thrice Denied’, p. 235.

 

Early English Books Online: Mass Digitization and the Archive


Abstract

This review examines the origins and contemporary usage of the online archive Early English Books Online (EEBO). Highlighting recent advancements in digital historiography, alongside considerations of inherent archival bias, this article demonstrates a variety of circumstances in which the scholar is encouraged to look beyond the digital archive itself. EEBO is proposed here as a resource capable of profound innovation, one born of preservationist historical necessity, and a logical extension of scholarship dating back to the early twentieth century and the Short-Title Catalogue. Yet EEBO is also a resource of human construction, and must therefore be approached with the same considerations one would bring to the physical archive, giving careful thought to the intersection of material and print culture and the ways in which they correlate.

Biography: Conner Wilson is a postgraduate student at the University of Birmingham studying Shakespeare, his contemporaries, and Early Modern theatre culture.

Over the course of the past two decades, the mass digitization of the archive has radically transformed the breadth of primary source material readily available to the modern scholar. Online archives such as Early English Books Online (EEBO), Eighteenth Century Collections Online (ECCO), Manuscript Pamphleteering in Early Stuart England (MPESE), and the Old Bailey Proceedings Online, along with numerous others, have become central to modern methodological approaches to historiography, with most, if not all, Masters and PhD programs now requiring a compulsory module on navigating these resources. On one hand, this “revolution”[1] of digitization, as Tim Hitchcock describes it, represents a turning point for historians, as researchers embrace the advantages of immediacy and accessibility in the information age; yet, in a field where visual, material, and print culture so often coincide, how do we determine how faithfully these online archival substitutions can reproduce the unique phenomenological experience associated with a tangible resource? Or, for instance, how do researchers overcome the implicit bias of search bar algorithms, compounded by imperfect and outdated Optical Character Recognition (OCR)? While much of what has been written about EEBO tends to exist in a binary dialectic of good vs. bad, helpful vs. unhelpful, accurate vs. inaccurate, this article will aim to move beyond such rigid categorizations, acknowledging the trepidations of scholars who fear misuse while embracing the growing computational literacy of historical fields. This reciprocal analysis, alongside a detailed historical account of the creation of the database, presents EEBO as not too dissimilar to the physical archive: proposing that, with both the digital and the material, it is ultimately the historian’s job to determine relevancy and overcome inherent bias.

 

EEBO’s Beginnings

The origins of EEBO can be traced all the way back to the early twentieth century. In 1918, on commission from the Bibliographical Society, the scholars A. W. Pollard and G. R. Redgrave began the monumental task of creating a unified catalogue covering all extant books printed between 1475 and 1640, drawing on holdings across Great Britain and North America. The project would take nearly eight years of research and require an immense amount of interlibrary cooperation; by 1926, however, Pollard and Redgrave’s work, A Short-Title Catalogue of Books Printed in England, Scotland, & Ireland and of English Books Printed Abroad, 1475–1640, was finally ready for publication.[2] This Short-Title Catalogue, or STC as it is frequently abbreviated, immediately proved to be an invaluable road map for scholars in the sourcing of rare and out-of-print books. The STC covered the holdings of a myriad of libraries, provided bibliographic information on nearly 26,000 extant texts, and managed a scope of information which was unprecedented. The scholars had successfully proved that the unification of resources and information across institutions was possible on a massive scale, so long as researchers were willing to acknowledge that it was “dangerous work for any one to handle lazily.”[3] This cautionary caveat would come to permeate historical research well into the information age. Considering the contemporary trepidations around EEBO, it is fitting here to include Pollard and Redgrave’s initial caution that “in so large a work based on such varied sources, probably every kind of error will be found represented.”[4] Perhaps historians have always been cautiously self-aware of the dangers of mass bibliographic consolidation and the seductive illusion of an entirely comprehensive historical archive. Yet, despite this, the pair’s work has unequivocally become one of the most influential and enduring enterprises in the sourcing of Early Modern texts.

Fourteen years later, with the danger of the Second World War fast approaching, and the advent of a new technology, microfilm, the American Council of Learned Societies felt that the processing and photographing of vulnerable early English texts was a project which could not be delayed, and that the Short-Title Catalogue should become the bedrock from which its selection committee would work. Six million pages were prioritized for microfilm reproduction, with the ultimate objective of storing the facsimiles securely in America, farther from the increasingly volatile Western Front.[5] This decision to integrate the microphotographic imaging process with the STC would serve as the basis for what is now EEBO, with many of the original images captured by this commission populating the contemporary database today. It is imperative to understand that while much work has been done since the original publication of the STC (notably Donald Wing’s subsequent, yet separate, catalogue expanding the breadth of titles from 1641 to 1700)[6] and on the microfilm reproductions themselves (with STC titles continuing to be photographed well into the 1990s), the visual make-up of EEBO began nearly forty years before the advent of the internet; suffice it to say, the microphotographic process was not designed with its ultimate digital transfer in mind. EEBO, as we know it now, would finally come into existence with the birth of the Text Creation Partnership (TCP) in 1999.
This interlibrary effort to “create texts to a common standard suitable for search, display, navigation, and reuse” is the process on which the second half of this article will focus more specifically, as it has come to define the contemporary successes and pitfalls of the database.[7]

 

OCR, Comprehensive Digital Archives, and Material Culture

Perhaps the most extraordinary feat of the EEBO-TCP partnership is its avoidance of common OCR problems by abandoning the technology altogether. The implementation of a “double-keyed”[8] transcription system, with human editors coding from the original microfilm images, boasts a “99.995%”[9] accuracy rating per text entered, thereby enabling the current level of sophistication offered in the simple and advanced search functions. While immensely expensive and labor intensive, this effort ensures a consistency of accuracy which has previously proven difficult for Early Modern typefaces; it does, however, simultaneously shatter the illusion of an entirely comprehensive archive. As Ian Gadd notes, “EEBO does not include every copy of every edition published prior to 1701… nor even does it include a copy of every surviving edition published prior to 1701.”[10] This is an important distinction, in that the textual variations between subsequent editions of Early Modern books can be drastic. One need look no further than Quarto 1 and Quarto 2 of Hamlet (both available on EEBO) to detect a noticeably different printing of the famous “To be, or not to be, that is the question” (Tragedy of Hamlet 23),[11] which instead reads, “To be, or not to be, I there’s the point.” (Tragicall Historie of Hamlet 15)[12] Fortunately for Shakespeare, the fame of his work secures an archival placeholder for the various editions of his plays; it is near impossible, however, to discern a similar degree of canonical completeness for the multitudes of lesser-known authors present on EEBO. If a scholar overlooks this fact, whether unknowingly or willfully, the dangers of misrepresentation, false negatives, and false positives are relatively high. Additionally, given that EEBO’s transcription is manual rather than automated, subsequent textual editions cannot be added quickly, and the notion that EEBO could ever be entirely comprehensive rapidly falls away when one considers the sheer labor intensity of the archive, which is tremendous. This is not to suggest that EEBO advertises itself as a comprehensive archive (neither did Pollard and Redgrave consider their work entirely comprehensive), but rather to caution scholars who may be using the resource for the first time. One would not assume a physical library could possibly contain every text on a single subject, and the same principle must be applied to the digital archive.
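
As a purely illustrative aside – and emphatically not a description of EEBO-TCP’s actual pipeline or search interface – the short Python sketch below shows the kind of false negative at stake: early modern printing treated u/v and i/j as interchangeable and used the long s (‘ſ’), so an exact keyword match over untreated transcriptions can silently miss genuine hits unless spellings are folded together or, as with the TCP, keyed to a common, searchable standard. The function names and the sample page here are hypothetical.

import re

# Hypothetical illustration only: a crude spelling 'fold' for matching purposes,
# showing why exact-match searching over early modern orthography misses hits.

def fold_early_modern(text: str) -> str:
    """Fold a few common early modern spelling variants, for matching only."""
    text = text.lower().replace("ſ", "s")   # long s -> round s
    text = text.replace("v", "u")           # u and v were interchangeable
    text = text.replace("j", "i")           # as were i and j
    return re.sub(r"\s+", " ", text)        # collapse whitespace

def contains(query: str, page: str) -> bool:
    """Match a modern-spelled query against an early modern transcription."""
    return fold_early_modern(query) in fold_early_modern(page)

if __name__ == "__main__":
    page = "To be, or not to be, I there's the point. Vnder the Diall."
    print("under the diall" in page.lower())   # False: the page prints 'Vnder'
    print(contains("under the diall", page))   # True once u and v are folded

Whether EEBO’s own search handles variant spellings in precisely this way is beside the point; the sketch simply makes concrete why accurate, consistent transcription matters for retrieval, and why false negatives remain a live danger for first-time users.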

Regarding the physical tangibility of primary source material, EEBO presents both clear advantages and disadvantages. Considering the preservationist origins of the online archive, the sheer volume of scholars who now have access to the texts without having to physically handle the pages is an immense victory for the longevity of Early Modern books. The reality that pages are turned less frequently, are less exposed to light, are kept at a consistent temperature, and are simply less prone to accidental human contamination will keep these resources accessible to those who need them for many years to come.[13] The shelf life of the Early Modern book was already precarious, and the digital archive aids in keeping these texts in the hands of those best qualified to ensure their longevity. On the other hand, EEBO all but abandons the material culture of the printed book, as the researcher is, of course, not actually manipulating the original artifact. Books on EEBO all appear to be roughly the same size and dimensions, which is simply not the case.[14] Furthermore, microfilm does little to capture the handwritten notes of previous owners, thereby potentially overlooking additional valuable historiographical evidence. Many of the digital reproductions of seminal works available online today, for example, do little to account for qualities such as the transportability or mobility of the original object. An Early Modern book capable of fitting in its owner’s pocket carries significantly different cultural weight than one which sits on the lectern of a library or lecture hall. Attending to such material features, the researcher may begin to unlock information such as an author’s contemporary popularity or their relationships with publishers. A book existing in multiple contemporaneous languages may reveal an author’s audience reach, their financial stability, or the sociological circle of which they were a member, all of which in turn can affect literary analysis.

While some of this information may be discerned from EEBO, the researcher must continue to be diligent and thorough with historiographical information beyond the text itself. This ultimately raises an important question at the intersection of material and print culture: can we consider the text of primary source material in a vacuum, or does removing the physical life of the object strip away vital information which in turn can affect textual analysis? The answer, of course, depends on the author, the text, the book itself, the contemporary and historical associations of the material, the previous owner(s), the type of research being conducted, and a litany of other potential factors; still, the contemporary historian must not be swayed into ignoring the material world which exists behind the digital reproduction, as there is certainly valuable information to be found there.

 

Conclusions

In 2001 John Jowett and Gabriel Egan authored one of the earliest reviews of EEBO, writing that “the potential for generating new research in early modern studies is considerable indeed… and electronic products such as EEBO… enable new forms of scholarly study which were not possible using paper and film technologies.” Two decades later, this observation still holds true. EEBO has provided massive amounts of information to scholars over the years, ushering in exciting new historical discoveries which might otherwise have gone unrealized, been overlooked, or been significantly delayed. Moreover, the access which current students now have to primary source material is unprecedented and is transforming the very fabric of how academic arguments are conducted. In the midst of this exciting growth, it is vital for the researcher to remember that they must not rely solely on what is convenient. All archives, whether digital or physical, are ultimately human constructions and therefore contain limitations and biases, both conscious and unconscious. The examples outlined above are merely a few of the considerations scholars should take into account when conducting online research. As always, the historian shoulders the burden of accuracy, thoroughness, and overcoming bias, but when the resource is used diligently, the potentialities of EEBO are immense.


Bibliography

De But, R., ‘Managing Risks: what are the agents of deterioration’, <https://artsandculture.google.com/exhibit/managing-riskswhat-are-the-agents-of-deterioration-trinity-college-dublin-library/PQKyBVnbqWmqLw?hl=en>, accessed 8.4.2021.

Gadd, I., ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), pp. 680–692.

Gavin, M., ‘How to Think about EEBO’, Textual Cultures, 11/1–2 (2017), pp. 70–102.

Heil, J. and Samuelson, T., ‘Book History in the Early Modern OCR Project or, Bringing Balance to the Force’, Journal for Early Modern Cultural Studies, 13/4 (2013), pp. 93–94.

Hitchcock, T., ‘Confronting the Digital’, Cultural and Social History, 10/1 (2013), pp. 9–23.

Jowett, J. and Egan, G., ‘Review of the Early English Books Online (EEBO)’, Interactive Early Modern Literary Studies (2001), pp. 1-13.

Nagle, B., ‘Introduction’, in Wing, D. (ed.), Short-title catalogue of books printed in England, Scotland, Ireland, Wales, and British America and of English books printed in other countries, 1641-1700 (New York, 1945), p. 10.

Shakespeare, W., The Tragedy of Hamlet Prince of Denmarke, Printed by George Eld for Iohn Smethwicke, and are to be sold at his shoppe in Saint Dunstons Church yeard in Fleetstreet. Vnder the Diall, (London, 1611).

Shakespeare, W., The Tragicall Historie of Hamlet, Prince of Denmarke, Printed [by Valentine Simmes] for N[icholas] L[ing] and Iohn Trundell, (London, 1603).

‘Text Creation Partnership’, <https://textcreationpartnership.org/>, accessed 31.3.2021.

‘The results of keying instead of OCR’, <https://textcreationpartnership.org/using-tcp-content/results-of-keying/>, accessed 31.3.2021.

Notes

[1] T. Hitchcock, ‘Confronting the Digital’, Cultural and Social History, 10/ 1 (2013), p. 9.

[2] M. Gavin, ‘How to Think about EEBO’, Textual Cultures, 11/1–2 (2017), pp. 70-102.

[3] B. Nagle, ‘Introduction’, in D. Wing (ed.), Short-title catalogue of books printed in England, Scotland, Ireland, Wales, and British America and of English books printed in other countries, 1641-1700 (New York, 1945), p. 10.

[4] Nagle, ‘Introduction’, p. 10.

[5] Gavin, ‘How to Think about EEBO’, pp. 70-102.

[6] I. Gadd, ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), p. 683.

[7] ‘Text Creation Partnership’, <https://textcreationpartnership.org/>, accessed 31.3.2021.

[8] J. Heil and T. Samuelson, ‘Book History in the Early Modern OCR Project or, Bringing Balance to the Force’, Journal for Early Modern Cultural Studies, 13/4 (2013), pp. 93-94.

[9] ‘The results of keying instead of OCR’, <https://textcreationpartnership.org/using-tcp-content/results-of-keying/>, accessed 31.3.2021.

[10] I. Gadd, ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), p. 686. (Italics mine).

[11] W. Shakespeare, The Tragedy of Hamlet Prince of Denmarke, Printed by George Eld for Iohn Smethwicke, and are to be sold at his shoppe in Saint Dunstons Church yeard in Fleetstreet. Vnder the Diall, (London, 1611), p. 23.

[12] W. Shakespeare, The Tragicall Historie of Hamlet, Prince of Denmarke, Printed [by Valentine Simmes] for N[icholas] L[ing] and Iohn Trundell, (London, 1603), p. 15.

[13] R. de But, ‘Managing Risks: what are the agents of deterioration’, <https://artsandculture.google.com/exhibit/managing-riskswhat-are-the-agents-of-deterioration-trinity-college-dublin-library/PQKyBVnbqWmqLw?hl=en>, accessed 8.4.2021.

[14] I. Gadd, ‘The Use and Misuse of Early English Books Online’, Literature Compass, 6/3 (2009), p. 682.

Steven Fielding, Bill Schwarz and Richard Toye, The Churchill Myths (Oxford University Press, 2020).


Abstract

This article reviews The Churchill Myths, co-authored by Steven Fielding, Professor of Political History at the University of Nottingham, Bill Schwarz, Professor of English at Queen Mary University of London, and Richard Toye, Professor of History at the University of Exeter. The book follows the trajectory of Winston Churchill’s uses in the popular memory of the post-war period, suggesting that the legends of 1940 have remained a central element throughout, while also tracking the changing nature of the elements around these stories, such as a greater attention to his personal character. Although the authors make a convincing case in many respects, the book leaves some significant aspects of competing ‘Churchill myths’ and of change over time underexplored.

Keywords: American politics, Brexit, British politics, Conservative Party, Cold War, film, India, memory, Second World War, Wales, Winston Churchill.

Biography: Alex Riggs is a University of Nottingham PhD History student, funded by Midlands4Cities. His research focuses on the 1970s and 1980s American left, especially their efforts to forge coalitions through electoral and grassroots politics.

A spectre is haunting Britain: the spectre of Winston Churchill. That is the argument of Steven Fielding, Bill Schwarz and Richard Toye in The Churchill Myths. Certainly, any follower of British politics will need little convincing of Churchill’s continued relevance. As the authors point out in the book’s key rationale, Churchill has been deployed constantly in the debates around Brexit. Most prominently, Brexiteers have invoked him as an advocate for a ‘global Britain’ that confronted a German-dominated Europe, but Remainers too have cast him as a pro-European concerned with the implications of Britain cutting adrift from the continent.[1] Indeed, the period since the book’s finalisation has propelled Churchill even further into the centre of political discourse, with the 75th anniversary of the end of the Second World War in Europe, the COVID-19 pandemic and global Black Lives Matter protests all prompting further discussion of his legacy.[2] Fielding, Schwarz and Toye therefore bring necessary scholarly attention to the subject.

As they stress, the book’s subject is not Winston Churchill. It is about memory of him,

‘the many, contrary manifestations of the various Churchill legends and the common, invariant properties which make the range of individual stories recognisably instalments in a common process of codification, resulting in Churchill as myth’ (Emphasis original).[3]

This history is uncovered across three chapters. The first, ‘Brexit May 1940’, tackles Churchill’s political uses, highlighting his utility for a range of political actors in various conflicts and crises, from the Cold War to Brexit. The next, ‘The Churchill Syndrome’, goes over similar ground, again exploring the evolution of his British and American political deployments. It also introduces interesting theoretical concepts, particularly ‘reputational entrepreneurship’, whereby self-interested custodians use particular memories to build a sense of solidarity within communities by defining what they are for and against. This is linked with the concept of a ‘resonant core’ of memory that is always present, while its surroundings constantly change over time.[4] This represents an important overlap with the ideas of the political theorist Michael Freeden, whose morphological approach to ideology similarly uncovers core concepts but suggests significant fluctuations over time in the peripheral concepts that surround the ideological core.[5] Finally, ‘Persistence and Change in Churchill’s Mythic Memory’ is largely focused on dramatic depictions, highlighting the battles that Hollywood producers had with Churchill and his aides to bring a biographical treatment to the big screen, and his numerous depictions ever since.[6]

The authors convincingly show how the Churchill myths’ adaptability has kept them relevant throughout the post-war era. They highlight how the myth of an unshakable Churchill arose from early Cold War fears about Soviet expansionism. This was embraced by a diverse range of figures, including the liberal philosopher Isaiah Berlin, who praised his rhetorical ability ‘to give shape and character, colour and direction and coherence to the stream of events’, something essential for democracy to survive in this context.[7] Then, as fears of national decline grew in the 1960s, Churchill became a symbol of Britain’s lost might. In the context of anxieties around the decline of Britain’s Empire, economy and culture, conservative commentators like the historian John Lukacs portrayed his death in 1965 as the simultaneous passing of the hierarchical, traditional society that had sustained its great power status.[8] Most recently, The Churchill Factor, written by perhaps Churchill’s most enthusiastic ‘reputational entrepreneur’, Boris Johnson, depicted him as an anti-establishment figure, taking a meek, out-of-touch political elite to task for their belief in the limits of national power and using his rhetorical might to restore national esteem.[9] The authors suggest that in the age of Brexit’s political deadlocks, Johnson’s narrative of a ‘man of destiny’ smashing through national malaise has come to dominate popular views of Britain’s wartime leader.[10]

In making these arguments, the authors deploy an impressive range of material. Given the abundance of Churchill films, this medium is particularly prominent, especially 2018’s Darkest Hour. These dramatic depictions are effective in evidencing the centrality of 1940 to Churchillian memory, a consistent element throughout: even 1972’s Young Winston, based on Churchill’s first autobiography, My Early Life, treats events decades before the war as foreshadowing his future greatness.[11] The medium also functions effectively in showing the changes that have occurred around this myth. Richard Burton’s 1974 portrayal presented a stoic individual, but more recent biopics celebrate Churchill’s eccentricities, presenting his tempestuousness and bouts of depression as marks of his humanity, in contrast to the distant, stuffy Neville Chamberlain and Lord Halifax.[12] Political opinion is also drawn upon, including interventions from figures as diverse as John F. Kennedy, Margaret Thatcher and Nigel Farage, providing a good range of popular and elite discourses and showing his constant utility in the post-war period.[13]

The Churchill Myths also reveals some of the key silences in these depictions. In pinpointing the 1940 fixation of these sources, the authors highlight a contradiction in conventional attitudes: on the one hand, Churchill is the man of destiny, singlehandedly steering Britain away from defeat in 1940. Yet on the other, he appears to have been incapable of deploying these talents at any other moment, with his decades of ministerial office before and after the war rarely invoked, and his role treated as practically irrelevant once the United States and Soviet Union joined the conflict in 1941.[14] This means that a series of events that challenge his status as a unifying national hero, including, but not limited to, his sending of troops to quell striking miners in South Wales as Home Secretary, his staunch support for imperialism, and his 1945 election defeat, are either glossed over or left unexplained.[15] Even when such episodes are mentioned, as with his pro-Empire intervention in the debate on Indian home rule in the 2002 film The Gathering Storm, they represent his honesty compared to the devious Conservative leadership, not his unrelenting imperialism.[16]

However, this point also reveals one of the book’s flaws. Although the authors are correct in highlighting 1940 as the central Churchillian myth, these silences are by no means universal. Directors may not be rushing to dramatize Churchill’s role in the Tonypandy riots, but his actions still have an important impact on the memory of him in South Wales.[17] On a global scale too, the 1943 Bengal Famine has shaped Indian memory of Churchill, with the exacerbation of mass starvation by his wartime policies fuelling a more critical history, and crucially one not centring on 1940.[18] These critiques make passing visits to the narrative, mentioned in criticisms from John McDonnell and Richard Burton, but a more detailed sense of how they fit within an analysis of the Churchill myths, or why they have sprung up in particular contexts, is not included.[19] Had it been, The Churchill Myths would have offered a more comprehensive analysis, one that highlights the plurality of memories implied by its title.

A closer analysis of these silences could have also brought further insights into the moments when Churchill is more obscure in political discourse. For instance, it is mentioned that John Major made little use of Churchill during his premiership and that no Churchill films were made between 1982 and 2002 and again between 2004 and 2017, yet no explanations are offered for why this was the case.[20] A more nuanced analysis of Churchill’s American deployments would have also been possible through this approach. Though the authors correctly highlight his use by all post-war American presidents, a search of Presidential public papers reveals that this has been far from equal over time. For instance, Jimmy Carter invoked Churchill thirteen times, but his successor Ronald Reagan found 125 occasions to quote the former Prime Minister.[21] Through closer examination of these deployments, a clearer understanding of why Churchill was especially useful for certain actors and less so for others could be achieved, with Reagan’s confrontational policy towards the Soviet Union making Churchill’s warnings over appeasement and post-war Soviet ambitions convenient.[22] Similarly, the defection of many Southern Democrats to the Republican Party also meant Churchillian wisdom about the merits of switching parties was applicable.[23]

Patrick Finney’s Remembering the Road to World War Two provides a useful example of this approach. Focusing on the historiography of appeasement, Finney situates historians’ work within the various contexts in which they were writing, contrasting the critical outlook of the immediate post-war years of high confidence in Britain’s great power status with the more sympathetic view of the declinist 1960s and ‘70s, when the limits on policymakers’ freedom of action were stressed.[24] Such an analysis brings important insight, and a similar approach more grounded in particular contexts would have given the book a more detailed sense of changes in this mythology over time. Moreover, given that the appeasers are ever present in stories told about Churchill, and that narratives which rehabilitate them by implication challenge the significance and necessity of Churchill’s intervention, addressing the content of Finney’s study would have added considerable context.

On occasion in The Churchill Myths, such a structure is used to good effect. For instance, the book takes Andrew Roberts’ biography of Churchill as an illustration of the context of internal struggles in the 1990s Conservative Party. The authors underline how Roberts highlighted Churchill’s racism not as a criticism of these attitudes, but as a critique of their inconsistency with the overly welcoming immigration policy of his 1950s premiership, a symptom of the Tory ‘wetness’ that concerned the party’s right in the context of John Major’s acceptance of European integration.[25] This is a perceptive analysis, and its more frequent application would have added much to The Churchill Myths. It could serve as a means to ask some of the overarching questions about modern Britain: to assess whether the persistence of 1940 suggests a stagnant period, or whether its constant reinterpretation implies a more dynamic era. It would also have provided an opportunity to include the historiographical discussion that the book lacks, and thus to put it into dialogue with that scholarship, as well as to historicise these works, especially their 2000s proliferation.[26]

In summary then, The Churchill Myths provides a timely and readable intervention in contemporary debates about Churchill. In a relatively short book, the authors reveal important historical insights, showing the adaptability of Churchill over time to suit a variety of political purposes, but also the staying power of his heroic leadership in 1940 as the defining Churchill myth. Yet that brevity also leaves the reader wanting more, with the issue of competing Churchill myths, and their significance for broader questions in both British history and particular contexts, underexplored. Fielding, Schwarz and Toye demonstrate the variety of issues that can be viewed through the lens of Churchill’s memory, but leave historians with plenty of angles still to discover.


 

Bibliography

 

Primary Bibliography 

Carter, N., ‘This is the moment to learn the wartime generation’s lesson’, The Times, May 8, 2020, p.5.

Limaye, Y., ‘Churchill’s legacy leaves Indians questioning his hero status’, https://www.bbc.co.uk/news/world-asia-india-53405121, accessed 01/04/2021.

Walker, P., ‘Boris Johnson says removing statues is to ‘lie about our history’’, https://www.theguardian.com/politics/2020/jun/12/boris-johnson-says-removing-statues-is-to-lie-about-our-history-george-floyd, accessed 30/03/2021.

‘Advanced Search’, https://www.presidency.ucsb.edu/advanced-search?field-keywords=Churchill&field-keywords2=&field-keywords3=&from%5Bdate%5D=01-01-1961&to%5Bdate%5D=04-05-2021&person2=200295&items_per_page=25, accessed 06/04/2021. 

‘Advanced Search’, https://www.presidency.ucsb.edu/advanced-search?field-keywords=Churchill&field-keywords2=&field-keywords3=&from%5Bdate%5D=01-01-1961&to%5Bdate%5D=04-05-2021&person2=200296&items_per_page=25, accessed 06/04/2021.

‘Has the town of Tonypandy forgiven Winston Churchill? | ITV News’, https://www.youtube.com/watch?v=AhYQ13z4dYc, accessed 01/04/2021. 

‘Remarks at a Campaign Rally for Senator Don Nickles in Norman, Oklahoma’, https://www.presidency.ucsb.edu/documents/remarks-campaign-rally-for-senator-don-nickles-norman-oklahoma, accessed 06/04/2021.

‘Statement on United States Defence Policy’, https://www.presidency.ucsb.edu/documents/statement-united-states-defense-policy, accessed 06/04/2021.

 

Secondary Bibliography

Connelly, M., We Can Take It! Britain and the Memory of the Second World War (Abingdon, 2004). 

Fielding, S., Schwarz, B., Toye, R., The Churchill Myths (Oxford, 2020).

Finney, P., Remembering the Road to World War Two: International History, National Identity, Collective Memory (Oxford, 2011).

Freeden, M., ‘The Morphological Analysis of Ideology’, in M. Freeden, L. Tower Sargent and M. Stears (eds.), The Oxford Handbook of Political Ideologies (Oxford, 2013), pp.115-137. 

Noakes, L. and Pattinson, J. (eds.), British Cultural Memory and the Second World War (London, 2013). 

Smith, M., Britain and 1940: History, Myth and Popular Memory (London, 2000). 

Notes

[1] S. Fielding, B. Schwarz and R. Toye, The Churchill Myths (Oxford, 2020), pp.8-9.

[2] N. Carter, ‘This is the moment to learn the wartime generation’s lesson’, The Times, May 8, 2020, p.5; P. Walker, ‘Boris Johnson says removing statues is to ‘lie about our history’’, https://www.theguardian.com/politics/2020/jun/12/boris-johnson-says-removing-statues-is-to-lie-about-our-history-george-floyd, accessed 30/03/2021.

[3] Fielding, Schwarz and Toye, The Churchill Myths, p.11.

[4] Fielding, Schwarz and Toye, The Churchill Myths, pp.72-73.

[5] M. Freeden, ‘The Morphological Analysis of Ideology’, in M. Freeden, L. Tower Sargent and M. Stears (eds.), The Oxford Handbook of Political Ideologies (Oxford, 2013), pp.124-25.

[6] Fielding, Schwarz and Toye, The Churchill Myths, pp.143-44.

[7] Fielding, Schwarz and Toye, The Churchill Myths, pp.40-41.

[8] Fielding, Schwarz and Toye, The Churchill Myths, pp.35-36.

[9] Fielding, Schwarz and Toye, The Churchill Myths, p.23.

[10] Fielding, Schwarz and Toye, The Churchill Myths, pp.27-28.

[11] Fielding, Schwarz and Toye, The Churchill Myths, pp.125,138.

[12] Fielding, Schwarz and Toye, The Churchill Myths, pp.139-40.

[13] Fielding, Schwarz and Toye, The Churchill Myths, pp.9-10,79-80,84-85.

[14] Fielding, Schwarz and Toye, The Churchill Myths, pp.111-12.

[15] Fielding, Schwarz and Toye, The Churchill Myths, pp.122-24.

[16] Fielding, Schwarz and Toye, The Churchill Myths, pp.133-34.

[17] ‘Has the town of Tonypandy forgiven Winston Churchill? | ITV News’, https://www.youtube.com/watch?v=AhYQ13z4dYc, accessed 01/04/2021.

[18] Y. Limaye, ‘Churchill’s legacy leaves Indians questioning his hero status’, https://www.bbc.co.uk/news/world-asia-india-53405121, accessed 01/04/2021.

[19] Fielding, Schwarz and Toye, The Churchill Myths, pp.69-70,108-09.

[20] Fielding, Schwarz and Toye, The Churchill Myths, pp.86,138.

[21] ‘Advanced Search’, https://www.presidency.ucsb.edu/advanced-search?field-keywords=Churchill&field-keywords2=&field-keywords3=&from%5Bdate%5D=01-01-1961&to%5Bdate%5D=04-05-2021&person2=200295&items_per_page=25 accessed 06/04/2021; ‘Advanced Search’, https://www.presidency.ucsb.edu/advanced-search?field-keywords=Churchill&field-keywords2=&field-keywords3=&from%5Bdate%5D=01-01-1961&to%5Bdate%5D=04-05-2021&person2=200296&items_per_page=25, accessed 06/04/2021.

[22] ‘Statement on United States Defence Policy’, https://www.presidency.ucsb.edu/documents/statement-united-states-defense-policy, accessed 06/04/2021.

[23] ‘Remarks at a Campaign Rally for Senator Don Nickles in Norman, Oklahoma’, https://www.presidency.ucsb.edu/documents/remarks-campaign-rally-for-senator-don-nickles-norman-oklahoma, accessed 06/04/2021.

[24] P. Finney, Remembering the Road to World War Two: International History, National Identity, Collective Memory (London, 2011), pp.192-94,200-02.

[25] Fielding, Schwarz and Toye, The Churchill Myths, pp.88-89.

[26] M. Connelly, We Can Take It! Britain and the Memory of the Second World War (Abingdon, 2004); M. Smith, Britain and 1940: History, Myth and Popular Memory (London, 2000); L. Noakes and J. Pattinson (eds.), British Cultural Memory and the Second World War (London, 2013).

F. Houghton, The Veterans’ Tale: British Military Memoirs of the Second World War (Cambridge, 2018).


Biography: William Noble is a PhD student at the University of Nottingham, funded by Midlands4Cities. His research examines the relationships between popular discourses of ‘race’ and immigration, and the concept of ‘decline’ in the post-war Midlands.

What can veterans’ memoirs of the Second World War tell us about combatants’ experiences both of conflict and of post-war life? This is the question that Frances Houghton seeks to answer in The Veterans’ Tale: British Military Memoirs of the Second World War. In response, Houghton convincingly demonstrates that veterans’ memoirs are valuable to historians because of their capacity to reveal ex-combatants’ retrospective memory and understanding of battle, despite which they have previously gone unheard on a collective level within the scholarship on war, memory, and personal narratives.[1] Houghton argues that what attention they have received has come from the discipline of literary criticism, for example in Paul Fussell’s The Great War and Modern Memory and Samuel Hynes’ The Soldiers’ Tale.[2] Given the voluminous literature on the war’s impact on British society, politics and culture – which has taken account of war films, public memorials and commemorations, and many other types of sources besides – in writing the first book-length historical study of Second World War veterans’ memoirs Houghton begins to correct this surprising omission.[3]

Drawing influence from various disciplines, including memory studies, auto/biographical studies, and histories of the emotions, Houghton investigates how veterans reinterpreted their wartime experiences in the post-war years by examining their relationships to four main themes: landscape; weaponry; the enemy; and comradeship. Houghton’s detailed introduction establishes the book’s theoretical and methodological underpinnings.[4] In his famous defence of oral history, Alessandro Portelli wrote that the value of oral testimony lies not in its ‘adherence to fact’, but in ‘its departure from it, as imagination, symbolism, and desire emerge’.[5] Houghton views veterans’ memoirs similarly, arguing that the embellishments, discrepancies, and conflicts in an individual’s memories are what make them such a rich source of evidence, both about how war was experienced at the time and about how it is remembered over the subsequent years and decades.[6] In the following eight chapters, which can be roughly divided into three sections, Houghton surveys a wide range of veteran memoirs from the Army, Navy, and RAF, and from the European, North African and Atlantic theatres – comparing and contrasting the experiences of combatants in each.

Chapters 1 and 2 (the first section) survey the provenance of Second World War veteran memoirs, employing the archives of major publishing houses to examine veterans’ motives for writing, and the process of writing, publishing and publicly presenting their war experiences in book form in post-war Britain. In Chapter 1, ‘Motive and the Veteran-Memoirist’, Houghton demonstrates how, for some veterans, factors such as post-traumatic stress disorders, post-war disillusionment, and an inability to build lasting relationships saw the war elevated to an ‘apex of memory’. Constructing memoirs could enable veterans to process their combat experiences in order to better cope with the present. These discussions of veterans’ post-war struggles are rather brief, however, and could form the basis of a fascinating study in their own right, though Houghton’s introduction makes it clear that her focus is specifically on combat experiences.[7] However, memoirs were not only constructed for an ‘Audience of the Self’.[8] Memoirists also wrote for ‘Audiences of the Future’ – for their children, for future generations of servicemen, and/or as a general warning to the public to ‘not let it happen again’.[9] Finally, they wrote for ‘Audiences of Comrades’, both living and dead.[10]

Chapter 2 examines the process of ‘Penning and Publishing the Veterans’ Tale’, with prominent themes including: memoirists’ insistence on the reliability of their memories, despite which many went to great efforts to establish the veracity of their accounts with support from a variety of additional sources (letters, diaries, regimental records, etc.); disputes between authors and publishers, as veterans’ desires for the most authentic representation of their war experiences could clash with the publishers’ commercial incentives; and the trend for memoirists to become more candid in their accounts as censorship was eased (both of information with the 1967 amendment of the Public Records Act, and of language with the reforms of the Obscene Publications Act), and as a more ‘liberal climate’ developed from the 1960s.[11]

Chapters 3-6 (the second section) analyse the ‘narrative content’ of the memoirs, and particularly their literary representations of the front line, focusing respectively on the themes of landscape, weaponry, the enemy, and comradeship.[12] Chapter 3, ‘Landscape, Nature, and Battlefields’, investigates the landscape’s role in shaping experiences of combat, and the meanings veterans projected onto these landscapes, through comparison and contrast of their experiences with the Army in North Africa, in aerial combat, and at sea.[13] In all cases, Houghton finds that landscapes were invested with distinct symbolism which allowed veterans to make sense of their battle spaces.[14] Chapter 4 similarly examines veterans’ relationships with weaponry, looking at the Royal Navy in the Battle of the Atlantic, Battle of Britain fighter pilots, and tank crews in the 1944 Normandy Campaign, again comparing and contrasting these varied experiences.[15] For example, while the differences between the latter two are particularly striking, these accounts all reinforced the centrality of the human experience of war, rather than the story sometimes told of the Second World War as one of machines ‘dominating’ to the ‘exclusion of the human combatant’.[16]

Chapter 5, ‘“Distance”, Killing, and the Enemy’, similarly complicates accounts of warfare which suggest that technology depersonalised killing. Houghton finds that technology could not make the dead completely anonymous, though Navy and RAF veteran-memoirists were perhaps better able than those who served in the Army to employ the ‘grey machinery of murder’ as a psychological barrier to avoid confronting any moral qualms over their actions.[17] Some of the most poignant sections of the book are those in this chapter which deal with veterans’ contacts with the enemy; their memoirs suggest that preconceived ideas of German troops as the inhuman ‘Hun’ could not be sustained after their first personal contact with them.[18] It would be interesting to contrast these veterans’ experiences with those of veterans who fought against Japan, but Houghton chooses not to consult memoirs by those who fought in Asia, as the deeply racialised conceptions of the Japanese held by the British render veterans’ depictions of combat on that front very different from those of veterans who fought in Europe.[19] Chapter 6, ‘Comradeship, Leadership, and Martial Fraternity’, investigates the claim by psychiatrists and others that the ‘small group in combat’ was the main motivation for fighting and was key to preventing psychological breakdown in battle.[20] Houghton finds that for memoirists, the personal relationships within their small group were indeed the ‘ultimate spur’ in battle, whether that group was a platoon, the company of a ‘little ship’ or submarine, or a seven-man Lancaster bomber crew, though again there were important differences between the various branches.[21]

Chapters 7 and 8 (the third section) explore how memoirists put their historical records to both private and public use, employing their memoirs for self-fashioning and for claiming agency over their wartime experiences.[22] Chapter 7, ‘Selfhood and Coming of Age’, charts how memoirists wrote of their wartime experience as a journey from ‘Youthful Innocence’ to ‘Manhood’. Houghton compares these memoirs to the Bildungsroman, interrogating how veteran-memoirists understood and reconstructed their ideas of masculinity, maturity, and selfhood in relation to their war experiences.[23] The chapter investigates memoirists’ motivations for joining the conflict, finding that, despite being aware of the horrors of the First World War, many memoirists, influenced by the popular culture of the period, saw warfare as an enticing ‘adventure’ and the soldier as a ‘quintessential figure of heroic masculinity’.[24] However, the chapter also shows that memoirists were quickly disabused of these naïve beliefs.[25] Memoirists still saw the war as crucial to their ‘growing up’, but the model of adult masculinity they subscribed to was not so much the ‘soldier-hero’ identified by Graham Dawson as the ‘understated, good-humoured, kindly, and self-deprecating courage of the “little man”’ identified by Sonya Rose.[26] Crucially, this latter model of masculinity, unlike the ‘soldier-hero’, could include civilians on the home front, offering reassurance that veterans would be able to readapt to a civilian masculinity.[27] Whilst Houghton in her introduction acknowledges the impossibility of ignoring women’s impact on masculine identities and experiences – even in such seemingly closed-off, all-male institutions as the British military – she nonetheless does not investigate how women were represented in veterans’ memoirs, as The Veterans’ Tale is essentially concerned with memoirists’ representations of frontline combat, from which women were excluded.[28]

While all eight chapters are fascinating, it is in Chapter 8, ‘History, Cultural Memory, and the Veteran-Memoirist’, that the wider importance of military memoirs to the historical and cultural memory of the war becomes clear. Here, Houghton examines how three memoirists – Alex Bowlby, Miles Tripp, and Jack Broome – used their memoirs to challenge what were, in their opinion, unsatisfactory official, academic, and cultural representations of the war. For example, Miles Tripp’s 1969 memoir of his service as a bomb-aimer, The Eighth Passenger, was intended to rehabilitate RAF Bomber Command’s post-war reputation, which many former aircrew felt presented them as war criminals.[29] However, it was also used by historians in debates on the morality of Bomber Command’s actions, particularly the February 1945 Dresden raid, with Tripp’s claim that he attempted to drop his bombs outside the city interpreted by many as an implicit attack on Bomber Command. The chapter examines Tripp’s vehement denials of such claims, and his opposition to his memoir being used by the disgraced historian David Irving (most notorious as a Holocaust denier) to support his account of the raid. Irving’s account was later discredited, as he had vastly inflated the number of deaths, by over 100,000, as part of his attempt to castigate the RAF and establish a moral equivalence between the Nazi regime’s crimes and the killing of German civilians.[30]

Finally, the short conclusion summarises the book’s main themes, namely memoirists’ desire to depict their personal reactions to war, and the human factors comprising the experience of battle, in contrast to military historians’ grander and more dehumanised narratives. As Houghton puts it, war memoirs ‘capture the man inside the uniform, his own understanding of his physical and psychological performance in the field, and his emotional responses’ to combat. Significantly, they offer a window into the ‘experience of battle as it endures in a veteran’s mind throughout his lifetime’. Memoirists wrote for public audiences, to educate, entertain, and warn them of the folly of war, and in an attempt to claim ownership of scholarly, official, and cultural remembrance of the conflict, but they also wrote for themselves, to ‘reconstruct shattered notions of masculine self into a coherent and meaningful image’ in the post-war years.[31]

This is an impressive and important book, but some small points of criticism can be made in addition to those already raised. Houghton follows a thematic structure in her chapters, in turn dividing each chapter into an examination of the distinct experiences of combatants in each branch of the armed forces. This can make it somewhat difficult to trace the experiences of any particular memoirist, or of any branch of the armed forces, though the index is helpful in this regard. Moreover, while Houghton is up-front about why some memoirs were excluded from the scope of her study, the exclusion of accounts written during the war itself is perhaps unfortunate, as contrasting accounts written then with those written in the post-war years and decades might have helped elucidate her arguments about memoirists’ changing perspectives over time.[32]

No one study can be expected to cover all aspects of this vast and fascinating subject, however, and it is clear that an enormous amount of research went into this book. Houghton’s bibliography lists over ninety individual memoirists, some with multiple titles and/or editions to their name, such that the total number of memoirs consulted exceeds a hundred.[33] This is in addition to a huge range of other primary and secondary sources, and every point made is well substantiated with multiple examples from veterans’ memoirs. The Veterans’ Tale succeeds in substantiating its central claim for the importance of veterans’ memoirs as historical sources, and in providing fresh insights into veterans’ experiences of combat as they were lived and remembered throughout veterans’ lifetimes.[34] It is now left for other historians to build on Houghton’s work in furthering our understanding of British cultural memories of the Second World War. More thorough examination of the post-war lives of ‘veteran-memoirists’, and studies of the types of memoirs Houghton excludes from her account, would be particularly fascinating.


 

Bibliography

Bourke, J., An Intimate History of Killing: Face-to-Face Killing in Twentieth-Century Warfare (London, 1999).

Connelly, M., We Can Take It! Britain and the Memory of the Second World War (Harlow, 2004).

Dawson, G., Soldier Heroes: British Adventure, Empire, and the Imagining of Masculinities (London, 1994).

Fussell, P., The Great War and Modern Memory (London, 1975).

Houghton, F., The Veterans’ Tale: British Military Memoirs of the Second World War (Cambridge, 2018).

Hynes, S., The Soldiers’ Tale: Bearing Witness to Modern War (London, 1998).

Portelli, A., ‘What Makes Oral History Different’, in R. Perks and A. Thomson (eds.), The Oral History Reader (London, 1998), pp. 63-74.

Rose, S. O., ‘Temperate Heroes: Concepts of Masculinity in Second World War Britain’, in S. Dudink, K. Hagemann and J. Tosh (eds.), Masculinities in Politics and War: Gendering Modern History (Manchester, 2004), pp. 177-95.

Wessely, S., ‘Twentieth-Century Theories on Combat Motivation and Breakdown’, Journal of Contemporary History, 41/2 (2006), pp. 269-86.

 

Notes

[1] F. Houghton, The Veterans’ Tale: British Military Memoirs of the Second World War (Cambridge, 2018), pp. 4-5.

[2] P. Fussell, The Great War and Modern Memory (London, 1975); S. Hynes, The Soldiers’ Tale: Bearing Witness to Modern War (London, 1998).

[3] See, for one example among many, M. Connelly, We Can Take It! Britain and the Memory of the Second World War (Harlow, 2004).

[4] Houghton, The Veterans’ Tale, pp. 4-6.

[5] A. Portelli, ‘What Makes Oral History Different’, in R. Perks and A. Thomson (eds.), The Oral History Reader (London, 1998), pp. 68-9.

[6] Houghton, The Veterans’ Tale, p. 2.

[7] Houghton, The Veterans’ Tale, pp. 22-6.

[8] Houghton, The Veterans’ Tale, pp. 29-39.

[9] Houghton, The Veterans’ Tale, pp. 40-45.

[10] Houghton, The Veterans’ Tale, pp. 46-52.

[11] Houghton, The Veterans’ Tale, pp. 59-65.

[12] Houghton, The Veterans’ Tale, pp. 25-6.

[13] Houghton, The Veterans’ Tale, pp. 72-82, 83-92, and 93-101 respectively.

[14] Houghton, The Veterans’ Tale, pp. 101-2.

[15] Houghton, The Veterans’ Tale, pp. 105-12, 112-21, and 121-35 respectively.

[16] Houghton, The Veterans’ Tale, pp. 135-6.

[17] J. Bourke, An Intimate History of Killing: Face-to-Face Killing in Twentieth-Century Warfare (London, 1999), p. 6.

[18] Houghton, The Veterans’ Tale, pp. 161, 166-7.

[19] Houghton, The Veterans’ Tale, p. 22.

[20] S. Wessely, ‘Twentieth-Century Theories on Combat Motivation and Breakdown’, Journal of Contemporary History, 41/2 (2006), p. 269.

[21] Houghton, The Veterans’ Tale, p. 170.

[22] Houghton, The Veterans’ Tale, pp. 25-6.

[23] Houghton, The Veterans’ Tale, p. 207.

[24] Houghton, The Veterans’ Tale, pp. 208-21.

[25] Houghton, The Veterans’ Tale, pp. 221-42.

[26] G. Dawson, Soldier Heroes: British Adventure, Empire, and the Imagining of Masculinities (London, 1994); S. O. Rose, ‘Temperate Heroes: Concepts of Masculinity in Second World War Britain’, in S. Dudink, K. Hagemann and J. Tosh (eds.), Masculinities in Politics and War: Gendering Modern History (Manchester, 2004), pp. 177-95.

[27] Houghton, The Veterans’ Tale, pp. 242-3.

[28] Houghton, The Veterans’ Tale, pp. 22-3.

[29] The Lancaster bombers which Tripp served in had a seven-man crew; the ‘eighth passenger’ is an allusion to fear, specifically the way Tripp saw it as invariably accompanying the bomber crews on every mission.

[30] Houghton, The Veterans’ Tale, pp. 254-62.

[31] Houghton, The Veterans’ Tale, pp. 272-8.

[32] Houghton, The Veterans’ Tale, pp. 22-5.

[33] Houghton, The Veterans’ Tale, pp. 279-90.

[34] Houghton, The Veterans’ Tale, p. 2.

Valkyrie: The Women of the Viking World, Jóhanna Katrín Friðriksdóttir (London, 2020)


Biography: Sian Webb recently submitted her PhD thesis, ‘A Land of Five Languages: Material Culture, Communities and Identity in Northumbria, 600-867’, which was jointly supervised by Chris Loveluck in Archaeology and Peter Darby in History. She focuses on early medieval cultural history, material culture and medieval studies.

The year is 1014 and the Battle of Clontarf rages in Dublin. It is a setting which immediately sparks images of men fighting and dying for their male lords, kings, plunder and the glory of battle. As this is happening, a man in Caithness, far away on the north-eastern coast of Scotland, spies twelve figures entering a weaving shed. These women are the Valkyries. As the Irish and Norse fight in Dublin, they weave a tapestry with thread of human entrails and loom-weights of skulls. This is how Friðriksdóttir opens her exploration of women living in the Viking world. Whilst warfare, and medieval battle in particular, is often envisioned as a strictly masculine affair in modern popular consciousness, this vignette from the thirteenth-century Njáls saga reintroduces a feminine aspect.

Valkyrie: The Women of the Viking World is a rich cultural history of the lives of Viking Age women, constituting a worthy addition to the existing scholarship on this topic. In this endeavour, the opening setting is apt. Women, from goddesses such as Freyja and Valkyries like Sigrún to extraordinary, albeit human, women accounted for in the Icelandic sagas, such as the Viking Guðriðr Þorbjarnardóttir, are shown to be deeply complex individuals. They are all shown to have a rich mixture of virtues and vices, with the capacity to display the full range of emotions.[1] They could be honourable, brave and wise whilst also able to make mistakes and be ruthless in their anger. These women appear neither as static images nor as moral lessons set in black and white. As Friðriksdóttir brings together the threads of her study, it is evident, as she states in her introduction, that these are reflections of real people, emerging from a society wherein women were essential for their work and the wisdom they possessed.

Friðriksdóttir’s monograph emerges from a shift in scholarly appreciation of the complexity of the culture and identities of the people, both men and women, of the Viking Age (ca. 790-1066 CE). Her study is largely confined to Viking settlements in Northern Europe and the British Isles, though she does at times bring in evidence from the Baltic region. In this monograph, she provides an overview of the evidence at hand, drawing together the shared attributes that shaped the lives and identities of women throughout the regions of Viking settlement. This widening appreciation of Viking culture and society began in the 1970s with Peter Foote and David Wilson’s seminal The Viking Achievement: The Society and Culture of Early Medieval Scandinavia, in which the authors devoted very little space to the discussion of raiding and violence, delving instead into the rich culture and artistically constructive activities of the period.[2] Two decades later, Judith Jesch opened the discussion of the place of women in Viking society with Women in the Viking Age.[3] Her monograph brought together a wide variety of material sources and toponymy to introduce gender studies into a scholarly tradition that had long ignored the presence and involvement of women in all aspects of life. The study of Viking women and their potential for active involvement in raiding and warfare culture was reinvigorated in 2017 by Charlotte Hedenstierna-Jonson et al. with their paper ‘A Female Viking Warrior Confirmed by Genomics’. The team used genome-wide sequence data to confirm the biological sex of a Viking individual in a well-furnished grave and to disprove previous assumptions that the individual must have been male, assumptions based on the weaponry and material culture with which they were buried. The team concluded by cautioning against basing our understanding of the past, and of the potential of the people who lived then, on generalised assumptions of cultural norms and stereotypes.

As Friðriksdóttir states, the term ‘Viking’ does not only refer to those who were chiefly interested in violence, pillaging and raiding, though such encounters did occur, particularly in the earlier period of their activity. ‘Viking identities’, according to her, also encompass the mobility of the people in question. These were groups actively involved in trade routes that stretched from Asia to the western fringes of Europe, in travel, and in the settlement of short- and long-term colonies.[4] In these endeavours, women were more than able to take an important role, whether or not they were actively involved in raiding and warfare. As seen in the opening vignette, women were inextricably linked with spinning and weaving. Textile production opened a considerable path to social mobility, since Viking mobility relied on ships and their sails. These objects, along with the production of clothing suitable for long sea voyages in arctic waters, required years of effort that would largely be the work of professional women.[5] This mobility, and the opportunity for women to engage in high-level political activity, is reflected both in the sagas and in the presence of high-status female graves at the sites of important Viking Age power centres.[6]

In order to provide an accurate and fully developed image of Viking women, Friðriksdóttir brings together a wide variety of sources, both material and textual, including runic inscriptions, the text and imagery carved on runestones, toponymy, picture stones, Viking Age art and archaeological sources, alongside the textual evidence provided by the sagas and by annals and other historical texts produced in Ireland and elsewhere in Europe. This approach requires a synthesis of materials from a wide variety of academic fields spanning Viking and medieval studies, and it draws on the growing appreciation of material culture in history that began in the 1990s with scholars such as Judith Jesch and continued to gain strength through the early 2000s, with works such as the collected volumes Land, Sea and Home: Proceedings of a Conference on Viking Period Settlement (Cardiff, July 2001) and Cædmon’s Hymn and Material Culture in the World of Bede: Six Essays.[7] This approach continues to prove fruitful, attracting a growing number of scholars focusing on a range of topics on which textual sources are less forthcoming.[8]

Each individual type of source, from material culture and archaeology to textual vignettes, offers its own window into the study of the cultures and communities of the past. Without a wide source base, information from different fields of knowledge loses some key contextual elements, skewing our understanding of past societies and the identities that grew within them. By bringing a wider base of sources together to form an integrated approach, it is possible to work towards an understanding of how all the pieces fit together. Insights gleaned from different sources can provide context and balance out the weaknesses and biases present in each individual type of source. This balance of sources is adeptly managed by Friðriksdóttir, leading to the wonderfully complex and richly layered depiction of the lives and opportunities of Viking women shown in Valkyrie.

Friðriksdóttir draws the reader in with the textual vignettes provided by the sagas. These texts offer a portrayal of the world as seen by the saga authors and their audience, bringing a vibrancy of colour and life to the discussion of the past, and they are balanced by the evidence provided by the wide variety of complementary sources discussed above. A wide range of saga women help to provide evidence for the shape of life for women from varied backgrounds, from wealthy and influential figures who controlled the lives of their families to young women trapped in awful cycles of poverty and abuse. The depiction of Valkyries and deities in the sagas provides further insight into cultural understandings of the nature of women and their ability to be brave and strong or cowardly and deceitful. Whilst Friðriksdóttir shows a careful and studied handling of the sagas, the book could benefit from a deeper discussion, for the reader’s benefit, of the difficulties these sources present. These include the mixing of Christian and traditional belief systems in the saga authors’ outlook and how this may tinge the text, as well as the chronological distance of the sources: the saga authors often wrote about events that happened centuries before their own time, and as ideologies and cultures evolved and adapted to the problems and opportunities of different eras, there was further potential for the incidental misrepresentation of what was, by then, a culture quite different from the saga author’s own contemporary reality.

The book is structured around the life cycle, delicately following the trails of women’s lives in the lands touched by Viking culture from birth through to death. Along the way, Friðriksdóttir examines how age, marital status and social rank affected women’s identities, drawing on material culture, burial and osteological archaeology, and the sagas. She considers infancy and childhood for female offspring, discussing infanticide and the argument that female infants may have been more likely to be left to die from exposure in times of hardship, and offering a balanced view of all sides of this debate. In this, she sets runic evidence that suggests a population balance skewed in favour of men alongside burial evidence and law codes that indicate the depth of love that Viking families could show for their daughters and the protections granted to children and pregnant women regardless of the infant’s sex.[9]

Chapter 2 focuses on the social world of teenage girls, discussing the cultural beliefs and family honour that shaped their lives and potentially restrained their opportunities. Yet this period of youth and young adulthood also brought with it the potential for women to take a role in craft and trade work, to act as poets (skáldmær), or to take a more physically active role in violence and warfare.[10] The following chapter turns to adulthood, the lives and status of married women, and female agency and divorce. In it, the author discusses personal adornment, women’s involvement in craft-working and trade, and opportunities for travel and leadership roles. Women were valued for their intelligence and abilities, personal attributes that could prove to be attractive qualities in a partner. Even after marriage, however, women were able to initiate a divorce if marital relations broke down.

Much of the adult life of fertile women would be spent in a cycle of pregnancy and the nursing of young children, along with miscarriages, healing from childbirth and the potential of death associated with childbirth. These concerns and the importance of motherhood form the core of Chapter 4. The chapter brings these topics to life through evidence from sagas showing the bravery of pregnant women, as well as mothers who could be wise leaders of their communities, loving supporters of their families and ruthless in their attempts to secure the future wealth and social rank of their sons and daughters.[11]

If women survived their childbearing years, they were statistically more likely to outlive their partners. Chapter 5 focuses on this shift in the life cycle with an examination of widowhood in Viking culture. In this endeavour, Friðriksdóttir looks at the actions of widows in sagas alongside the evidence of influential women available from runic inscriptions and grave goods.  Widows were able to consolidate considerable wealth as businesswomen and remain active in their communities as commissioners for the construction and upkeep of bridges and roads.[12] The transition between older belief systems and the new Christian religion brought additional opportunities for widows who displayed their new faith by commissioning stone crosses blending Christian and traditional imagery with runic inscriptions.[13]

The monograph concludes with the experience of elderly Viking Age women and their treatment in death. Viking Age burials indicate that communities held older women in positions of considerable influence and dignity.[14]  Evidence found in sagas, graves containing staffs, amulets and medicinal plants, and the iconography of women holding staffs and branches found on the Kirk Michael cross slab (123) on the Isle of Man suggest that older women could be valued as professional seeresses (völva or seiðkona) and for their role in traditional magic (seiðr).[15]

Overall, Friðriksdóttir builds a vivid image of the complex realities of life for women in Viking settlements. Women could be constrained by societal expectations, yet Viking Age culture allowed opportunities for both physical and social mobility.  Women took positions of importance in their families and communities from their youth to the end of their lives. They could be ruthless and vengeful or wise and honourable, characterised by a mixture of virtues and vices. The balance of sources provides a detailed consideration of the realities of life for Viking Age women, and the textual vignettes drawn from sagas make the work endlessly engaging for both academic readers and non-specialists interested in Viking history.


 

Bibliography

Brink, S. and Price, N. S. (eds), The Viking World (New York, 2008).

Cambridge, E. and Hawkes, J. (eds), Crossing Boundaries: Interdisciplinary Approaches to the Art, Material Culture, Language and Literature of the Early Medieval World (Philadelphia, 2017).

Fleming, R., Britain After Rome: The Fall and Rise, 400-1070 (London, 2011).

Foote, P. and Wilson, D., The Viking Achievement: The Society and Culture of Early Medieval Scandinavia (London, 1970).

Frantzen, A. J. and Hines, J. (eds), Cædmon’s Hymn and Material Culture in the World of Bede: Six Essays (Morgantown, 2007).

Friðriksdóttir, J. K., Valkyrie: The Women of the Viking World (London, 2020).

Gilchrist, R., Gender and Material Culture: The Archaeology of Religious Women (New York, 1994).

Gilchrist, R., Medieval Life: Archaeology and the Life Course (Woodbridge, 2012).

Hayeur Smith, M., The Valkyries’ Loom: The Archaeology of Cloth Production and Female Power in the North Atlantic (Florida, 2020).

Hedenstierna-Jonson, C., Kjellström, A., Zachrisson, T., Krzewińska, M., Sobrado, V., Price, N., Günther, T., Jakobsson, M., Götherström, A. and Storå, J., ‘A Female Viking Warrior Confirmed by Genomics’, American Journal of Physical Anthropology, 164/4 (2017), pp. 853-860.

Hines, J., Lane, A. and Redknap, M. (eds), Land, Sea and Home: Proceedings of a Conference on Viking Period Settlement, at Cardiff, July 2001 (Abingdon, 2004).

Hyer, M. C. and Owen-Crocker, G. R., The Material Culture of Daily Living in the Anglo-Saxon World (Exeter, 2011).

Jesch, J., The Viking Diaspora (Abingdon, 2015).

Jesch, J., Women in the Viking Age (Woodbridge, 1991).

Jones, S., The Archaeology of Ethnicity: Constructing Identities in the Past and Present (London, 1997).

Sørensen, M. L. S., ‘Gender, Material Culture, and Identity in the Viking Diaspora’, Viking and Medieval Scandinavia, 5 (2009), pp. 253-269.

Wicker, N. L., ‘Christianization, Female Infanticide, and the Abundance of Female Burials at Viking Age Birka in Sweden’, Journal of the History of Sexuality, 21/2 (2012), pp. 245-262.

Notes

[1] J. K. Friðriksdóttir, Valkyrie: The Women of the Viking World (London, 2020), pp. 10-11.

[2] P. Foote and D. Wilson, The Viking Achievement: The Society and Culture of Early Medieval Scandinavia (London, 1970).

[3] J. Jesch, Women in the Viking Age (Woodbridge, 1991).

[4] Friðriksdóttir, Valkyrie, p. 12.

[5] Friðriksdóttir, Valkyrie, p. 85.

[6] Friðriksdóttir, Valkyrie, pp. 107-108.

[7] J. Hines, A. Lane and M. Redknap (eds), Land, Sea and Home: Proceedings of a Conference on Viking Period Settlement, at Cardiff, July 2001 (Abingdon, 2004); A. J. Frantzen and J. Hines (eds), Cædmon’s Hymn and Material Culture in the World of Bede: Six Essays (Morgantown, 2007).

[8] For Viking studies, material culture is invaluable. See: N. L. Wicker, ‘Christianization, Female Infanticide, and the Abundance of Female Burials at Viking Age Birka in Sweden’, Journal of the History of Sexuality, 21/2 (2012), pp. 245-262; M. Hayeur Smith, The Valkyries’ Loom: The Archaeology of Cloth Production and Female Power in the North Atlantic (Florida, 2020); M. L. S. Sørensen, ‘Gender, Material Culture, and Identity in the Viking Diaspora’, Viking and Medieval Scandinavia, 5 (2009), pp. 253-269. This process can be seen throughout medieval studies. See: E. Cambridge and J. Hawkes (eds), Crossing Boundaries: Interdisciplinary Approaches to the Art, Material Culture, Language and Literature of the Early Medieval World (Philadelphia, 2017); S. Jones, The Archaeology of Ethnicity: Constructing Identities in the Past and Present (London, 1997); M. C. Hyer and G. R. Owen-Crocker, The Material Culture of Daily Living in the Anglo-Saxon World (Exeter, 2011); R. Gilchrist, Gender and Material Culture: The Archaeology of Religious Women (New York, 1994); R. Gilchrist, Medieval Life: Archaeology and the Life Course (Woodbridge, 2012).

[9] Friðriksdóttir, Valkyrie, pp. 24-25, 35.

[10] Friðriksdóttir, Valkyrie, pp. 52, 54-55, 58-59.

[11] Friðriksdóttir, Valkyrie, pp. 118, 128-129, 142.

[12] Friðriksdóttir, Valkyrie, p. 162.

[13] Friðriksdóttir, Valkyrie, pp. 162-163.

[14] Friðriksdóttir, Valkyrie, p. 173.

[15] Friðriksdóttir, Valkyrie, pp. 179-180, 183.

Nordic Studies in 2021: When Vikings Raid Real Life, Our Good Intentions Get Pillaged

Nordic Studies in 2021: When Vikings Raid Real Life, Our Good Intentions Get Pillaged

Beth Rogers is a PhD student at the University of Iceland in Reykjavík, Iceland, studying topics of food history and medieval Icelandic culture for her thesis, “On with the Butter: The Cultural Significance of Dairy Products in Medieval Iceland.” The project is hosted by the Institute of History at the Centre for Research in the Humanities.

Beth has written more than 30 popular and academic articles, including two book chapters, on such varied topics as Viking dairy culture, salt in the Viking Age and medieval period, food as a motif in the Russian Primary Chronicle, and the literary structure of Völsunga saga. Her other research interests include medieval literature (especially sagas), military history, emotions in literature, Old Norse mythology and folklore, and cultural memory.

The impact and degree of white supremacist appropriation of Nordic culture has been the cause of recent public interest and scholarly debate within Scandinavian Studies. Viking and medieval imagery was displayed, for example, by participants in the Unite the Right rally in Charlottesville, Virginia, in 2017. But the most prominent display came during the violent attack on the United States Congress at the U.S. Capitol on January 6, 2021, when these images re-emerged, most visibly in the form of the tattoos and clothing worn by Jake Angeli (real name Jacob Chansley), the self-described ‘QAnon Shaman’. Angeli was instantly recognisable, wearing furs and horns and a face painted in red, white, and blue, while his bare chest blazed with black lines of Yggdrasil, Mjölnir, and the Valknut. News outlets leaped to provide context to this oddball stand-out among the mob of Americans angry at the outcome of the presidential election of November 2020, explaining the meaning of his tattoos for readers who had not seen them before or did not know of their associations with Norse mythology. The media attempted to clarify that Angeli was not part of Antifa or the Black Lives Matter movement, but of QAnon, a political and social conspiracy group which has gained prominence in recent months since its appearance on internet message boards in October 2017. Neither Antifa nor the BLM movement is known for using any Nordic cultural symbols, yet in the immediate aftermath of the attack on the US Capitol, confusing claims that Angeli was actually part of the BLM movement spread quickly on social media.

Social media image circulated heavily in the days following the attack on the US Capitol. Origin unknown; a Google reverse image search returned no information.

Angeli’s interviews with BrieAnna J. Frank, a reporter for The Arizona Republic, leave little doubt as to his right-wing ideological leanings. His frequent appearances at events carrying a sign stating ‘Q sent me!’ further confused the issue. According to the BBC, QAnon is ‘a wide-ranging, completely unfounded theory that says that President Trump is waging a secret war against elite Satan-worshipping paedophiles in government, business and the media.’

Angeli himself, currently awaiting trial in federal prison (where he has experienced problems with the lack of organic food), has expressed regret over his actions. In an interview with the US news programme 60 Minutes, Angeli spoke from jail – an unsanctioned interview which resulted in a ‘scolding’ for his lawyer from a judge – and insisted: ‘I regret entering that building. I regret entering that building with every fibre of my being’ (0:43 – 0:47). His actions ‘were not an attack on [the United States]’, he maintained. Instead,

I sang a song, and that’s a part of Shamanism. It’s about creating positive vibrations in a sacred chamber. I also stopped people from stealing and vandalising that sacred space – the Senate. I also said a prayer in that sacred chamber because it was my intention to bring divinity and to bring God back into the Senate. […] That is the one very serious regret that I have, was [sic] believing that when we were waved in by police officers, that it was acceptable. (0:39 – 0:46)

Angeli awaits trial on six counts of misconduct, including violent entry and disorderly conduct in a Capitol building, as well as demonstrating in a Capitol building.

Angeli’s appropriation of Nordic symbols is of course part of a broader Viking cultural renaissance, yet don’t let all this take away from your enjoyment of the current Viking-themed pop culture extravaganza. Vikings are fun! It’s not all white supremacy. It’s wonderful to see people deeply interested and invested in the thrills, chills, twists and turns of these larger-than-life characters on our screens, set against a backdrop of Nordic culture and history that is sometimes richly coloured and always sketched in familiar lines: struggle, sacrifice, and hope. The image of the Viking in pop culture today is so unquestioned – hairy, violent, marauding – that The Guardian can’t suffer through so much as a paragraph of historical context without getting distracted by their coolness. Dr Simon Trafford, Lecturer in Medieval History and Director of Studies at the University of London, explains the Viking attraction:

The parallels with what we look for in our rock stars are just too obvious. The Vikings were uproarious and anti-authoritarian, but with a warrior code that values honour and loyalty. Those are evergreen themes, promising human experiences greater than what Monday morning in the office can provide.

If you caught American Gods in either its series form or its original novel (published 2001; series premiere on the American cable network Starz in 2017), you know that Vikings are dull-witted, filthy murderers who would slaughter their own friends and family members in frenzied sacrifice to Óðinn and then wait for the wind to return to their sails, leaving very few left alive to row the longship home. If you’re a fan of Vikings (2013-2020, with a planned spin-off series titled – what else? – Vikings: Valhalla), you know that Vikings are the rock stars of history, wearing copious amounts of leather and guyliner artfully smudged around their piercing eyes as they gaze out to sea, bursting with manly intensity. You know. But do you really?

More problematic is when these tropes, images and signifiers belong to darker, more nebulous and disturbing parts of history, and when that history is forgotten, covered up, manipulated or even wilfully ignored in the current moment. The tropes, images and signifiers of a culture which are chosen and carried forward in time take on a life of their own, often changing their meaning drastically.

Historians and armchair enthusiasts, pagans and reenactors, artists and others around the world who enjoy learning about Nordic culture and Scandinavian history groaned in unison after the Capitol invasion, aware that the United States was bringing us another fight so soon after the dust had settled over the last one. In Charlottesville, Virginia, on August 11-12, 2017, far-right groups, including self-identified members of the alt-right, white nationalists, neo-Nazis, Klansmen, and various right-wing militias, gathered to present a unified and radical right-wing front as well as to protest the proposed removal of the statue of General Robert E. Lee from Charlottesville’s former Lee Park. Symbols like the óðal rune, the cross of the Knights Templar and the black eagle of St. Maurice, among others, were splashed across the screens of horrified viewers. After the initial shock over the cultural clash in Virginia, which like the attack on the Capitol brought about death and injury, those who spend their lives plumbing the mysteries of history were left to pick up the pieces and decide what to do to avoid being painted with the same swastika.

What has been observed in the social media dissemination and discussion of Nordic cultural symbols illustrates that the general public has, at best, an incomplete understanding of the use of Viking symbology in connection with the German nationalist movements of the eighteenth and nineteenth centuries, which culminated in two destructive World Wars. Markers of Nordic culture have a tendency to recur throughout history, from their origins in the Viking Age through to the twentieth century and the present day: specifically, Valhalla, Vínland, the Valknut, Mjölnir (Thor’s Hammer), Yggdrasil (the World Tree), and runic inscriptions, including rune-like magical staves such as Vegvísir and the Ægishjálmur. Such iconography has been deployed almost randomly (and therefore meaninglessly) to create a connection between Viking culture and an ideology of whiteness, masculinity, and power.

Recently, a scandal erupted in the hallowed halls of the Academy over the correct next steps to take: how to continue to do what we love as researchers and teachers, while also speaking to a wider community and to political developments causing direct harm? Differences in opinion led to social media chaos, accusations of doxxing, threats, and scathing blog posts by the two front-runners in the debate. Dorothy Kim, an Assistant Professor of English at Vassar College, and Rachel Fulton Brown, an Associate Professor at the University of Chicago, squared off in a nerdy, gladiatorial smackdown. As Inside Higher Ed noted, Fulton Brown agreed that white supremacists often use medieval imagery to invoke a mythical, purely white medieval Europe. However, she disagreed with Kim’s assertion that white professors needed to explicitly state anti-white-supremacist positions in the classroom. For Fulton Brown, the teaching of history by itself, through immersion in its concepts and an understanding of change over time, will stem the tide of white supremacist misuse and misunderstanding. Medievalists unhappy with some institutions’ handling of the issues boycotted conferences.

As the debate raged on, white supremacy continued its dark work. A mass shooter in Christchurch, New Zealand, posted ‘See you in Valhalla’ before killing 51 people at two mosques and injuring dozens more. Educational institutions have not, and still do not, appear to be doing enough. The Southern Poverty Law Center, which monitors hate groups throughout the US, tracked the growth and movement of 838 hate groups across the country in 2020, a figure shaped in part by a 55% surge in the number of US hate groups since 2017 (though down from the all-time high recorded in 2018). Political elections have put an increasing number of populist, nationalist, and right-wing figures in office throughout Europe, spurred by rising anti-immigration sentiment, frustration with the political status quo, concerns about globalisation, and fears over the loss of national identity. The issue has become so muddled that some educational material must make clear that although a given Nordic cultural symbol, such as Thor’s hammer (Mjölnir), may be used as a hate symbol, it is also commonly used by non-racist neo-pagans and others, and should therefore be judged carefully within its context before the viewer assumes that the person wearing it belongs to a hate group. Nothing is black and white. Everything is uncertain.

Instructors and teachers have recommitted to doing better, echoing statements like that of Natalie Van Deusen, Associate Professor at the University of Alberta. In her own classes, Van Deusen makes a deliberate effort to highlight the flourishing ethnic and cultural exchange among Nordic peoples during the periods she covers, mainly the late eighth to early eleventh centuries. She includes in her teaching lesser-seen and lesser-heard viewpoints, such as Norse relationships with the Sámi, the indigenous peoples of northern Scandinavia, and trade with the East:

I strive to teach in a way that doesn’t solely focus on Norse-speaking peoples, who were by no means the only ones to occupy the Nordic region during this period, nor were they without influence from surrounding cultures. 

We have to do more, go further, explore deeper, and keep talking about this until there is no question where we stand, as individual scholars, or as people within our communities who care about accuracy (as far as it can be established), diversity (as much as the evidence supports), and education. Always education. 

Dr Van Deusen remains more committed than ever to keeping the conversation going, saying in a recent interview, ‘I think it’s a willingness to talk when people want to talk to us about these things, and a willingness (as scholars and educators of this period) to acknowledge that this is a real issue.’ For Van Deusen, at this point

[W]e can’t not address it, and the last thing I would want is for someone to be in my class for the wrong reasons and twist my words because I didn’t explicitly say “I’m not here for validating these interpretations” – which I do now, at the beginning of each term.

This is why, through the pages of The MHR, you’ll hear more from me soon. Drawing on a range of evidence, from modern news reporting to textual and archaeological sources, my colleagues and I will examine the ways in which Viking culture has been, and is being, manipulated, used, and misrepresented by those who seek to create an underlying continuity, real or imagined, stretching directly back to the people of the past known as the Vikings.

So, hold on to your butts! Like the blurry outline of a longship on the horizon, I shall return!


Elaine Farrell, Women, Crime and Punishment in Ireland: Life in the Nineteenth-Century Convict Prison, (Cambridge, 2020).

Elaine Farrell, Women, Crime and Punishment in Ireland: Life in the Nineteenth-Century Convict Prison, (Cambridge, 2020) Review

Women, Crime and Punishment in Ireland is a detailed resource which expands upon the existing scholarship on prison life and brings the administration of punishment in an Irish female convict prison into particular focus. Scholars have recently begun to reflect on how the implementation of nineteenth-century laws (such as the English Poor Laws) affected the wider lives of those inside an institution, often through an analysis of the agency that imprisoned women showed in the face of increasingly punitive legislation.

Biography: Megan Yates is an ESRC PhD student in the School of History, Politics and International Relations at the University of Leicester. Her project is collaborative, working closely with the University of Nottingham and the National Archives. In her doctoral research, she focuses on the daily experiences of vagrants within nineteenth-century workhouses across the Midlands.

In this heavily researched book, Elaine Farrell effectively synthesises studies around life in nineteenth-century institutions with new understandings of the selfhood, agency and life cycles that her female convicts displayed.[1] Through the lives of Irish female convicts, the author conducts a thorough examination of Irish prison life, families, friendships, relationships, and wider network acquisition. It is in these places, Grangegorman, Mountjoy Female Penitentiary, and the Cork Female Convict Depot, that Farrell explores the experiences of incarcerated women and their interwoven identities inside and outside of the prison walls.

Farrell’s case studies detailing the convictions, treatment and opinions of these women will be a great resource for multiple branches of history, both for those interested in the institutional question of how life in an Irish convict prison worked for these women, and for those studying the detailed experiences and life cycles of people in nineteenth-century institutions.

Farrell’s focus also contributes more broadly to discourse surrounding crime and punishment, particularly the evolution of western punitive practices, as she explores in her introduction the administrative pathway from Grangegorman prison to the Cork Female Convict Depot. Her exploration of these prison institutions to some extent accords with Foucault’s examination of the prison in Discipline and Punish and his suggestion that imprisonment was part of a much bigger carceral system, one that, Farrell goes on to argue, infiltrated every aspect of the lives researched in her case studies.[2]

The incorporation of Irish women is a novel and impressive approach to the study of lived experience, situating her complicated case studies within the political, social and cultural context of Ireland. The source base for this work combines traditional and non-traditional resources, utilising the official record of court transcripts, prison ledgers, annual reports and legislation alongside personal testimony and inmate letters. Farrell uses close reading to unpick the rhetoric of these sources and interprets them through her vast knowledge of the Irish penal system in the mid-to-late Victorian era. She combines her analyses with other historical work on both Irish and British institutions, marrying her study with the historiography of the Poor Laws, including broad ideas about imprisonment, settlement and resistance.[3]

Farrell employs a case-study approach to her sources through sub-chapters at the beginning of each main chapter, which bring the human element to the front of her discussion. This structure takes the reader on a journey, an effective approach which envelops the somewhat disjointed and difficult excerpts of narrative within the social circumstances of these women’s lives. Given the volume of material in each of her themes, it makes sense to break this down here. The book is structured into five chapters to correspond with its five case studies. Farrell has themed her case studies around wider societal issues that she interleaves with the stories of multiple women. This is important because otherwise there is a danger of losing the human aspect of these cases and reducing them to broad-brush arguments. As in her previous works, Farrell demonstrates that these women were not exceptions, and in doing so she gives voice to ordinary women who might otherwise have been forgotten.

In the first half of Women, Crime and Punishment, Farrell tells us about the everyday lived experiences of women in the convict prisons of Ireland. Farrell discusses sanitation, rebellion, work, schooling, length of incarceration and uniform. The female uniforms stand out here because, as Farrell notes, there were four different types, varying in style and colour to reflect the different statuses of the convicts. Farrell explores the ways in which women expressed individuality by altering their uniform: removing collars, tearing hems and wearing neckerchiefs. Although the convict uniform was implemented in English prisons by the mid-Victorian era, the differentiation of uniform based on a points system was not a practice replicated in the British convict system.[4] Uniform was intended to de-individualise convicts and control their appearance. Farrell suggests that in the women’s prisons she studied, the uniform was a bargaining chip for good behaviour, although, as she argues, this was hardly a successful penal model.[5] Farrell’s exploration of attitudes to the uniform from the perspective of the convicts wearing it is one small example of the many new aspects of convict life she shares throughout her chapters.

In the second sub-chapter, Farrell introduces us to the Carroll family. This complicated family of petty criminals frames the rest of the chapter, which explores familial bonds maintained or dissolved during detention. The chapter has disheartening elements, as we learn of women who were abandoned by their families once they became institutionalised. Farrell goes through a particularly poignant case of a mother who, after five years of imprisonment, was unsuccessful in regaining custody of her child. Farrell’s dip into this story is particularly captivating as she expands upon the feelings these women would have experienced, not simply at being incarcerated but, for some, at recognising a lost relationship and maternal bond.

Farrell uses thousands of prison files, including letters sent back to the prison after a convict’s release. These sources are comprehensive. The high number of exemplary cases in these chapters often means the narrative switches from one convict to another very quickly; however, this is representative of the short snippets of stories she finds in the archives. Although women such as Catherine Lavelle (depicted on the cover image) were well known and left exceptionally thorough sources, others had no more than one or two mentions in the archive. Intermingling these stories shows that Farrell is not trying to follow any one convict’s prison journey; instead, she brings to life the thorny and intricate lives of many women. Farrell’s methodology and data collection are thus made abundantly clear. She repeats the sample size often in this chapter, and she is evidently confident in stating how representative this evidence is of Irish female convicts’ nineteenth-century prison experience.

Women were also involved in the prison system as employees, and Farrell discusses in later chapters the bonds and relationships that could be formed between prison matrons and female convicts. In contrast, Farrell also describes the stigma and criticism female prison employees faced from their male counterparts for their ‘weakness’ in trusting and perhaps liking certain inmates. In further chapters Farrell pulls at the threads of convicts’ relationships: friendships, marital and extra-marital relations, as well as enmities and the fights and conflicts that occurred amongst inmates. She argues that her female convicts, and convicts in general, are a lens through which to recover the voices and relationships of lower-class people who were not in a workhouse, an asylum or an industrial school but who often shared traits such as poverty or mental illness. Through this book, the voices of such people take centre stage for the first time. Farrell uses sources from the prison staff to form a picture of what happened to these women, but also writes their stories from their own perspectives in order to emphasise that they were real people with real lives and real stories to tell.

Farrell’s convict testimony and multi-dimensional sources demonstrate that the prison system was a pseudo-society, a community of its own. It had its own set of rules, visible through work and dietary regulations, the regulation of correspondence, and a points-based reward system. The women in these prisons were deprived of many things but were nevertheless able to create friendships and maintain kinship bonds, even across institutions. This, Farrell argues, was a very personal experience that depended completely upon individual circumstances. Farrell is therefore right to argue in her conclusion that this book is heavily ‘saturated with further evidence of women’s agency’ in penal institutions, and that her study has been possible because such significant identity documents exist in the form of letters, petitions and diaries.[6] Farrell argues that convicted women are valuable to examine because their personalities come through in the documents. Sophisticated record-linkage work in archives has allowed historians such as Farrell to recover multiple perspectives on prison life. As Farrell says, ‘these were ordinary lives captured on paper because of an extraordinary sentence’, and this concept will contribute greatly to a number of further studies into the identities of people long forgotten.[7]


 

Notes

[1] For example, Rebellious Writing: Contesting Marginalisation in Edwardian Britain, ed. by Lauren Alex O’Hagan, Writing and Culture in the Long Nineteenth Century, 10 (New York: Peter Lang, 2020).

[2] Michel Foucault, Discipline and Punish: The Birth of the Prison, repr. (Harmondsworth: Penguin Books, 1982).

[3] K. D. M. Snell, Parish and Belonging: Community, Identity, and Welfare in England and Wales, 1700-1950 (Cambridge: Cambridge University Press, 2009), pp. 1-245; David Moon, The Russian Peasantry 1600-1930: The World the Peasants Made (Hoboken: Taylor and Francis, 2014), pp. 38-39.

[4] David Englander, Poverty and Poor Law Reform in Britain: From Chadwick to Booth, 1834-1914, Seminar Studies in History (London and New York: Addison Wesley Longman, 1998), pp. 38-39.

[5] Paul Carter, Jeff James and Steve King, ‘Punishing Paupers? Control, Discipline and Mental Health in the Southwell Workhouse (1836-71)’, Rural History, 30.2 (2019), p. 164.

[6] Elaine Farrell, Women, Crime and Punishment in Ireland: Life in the Nineteenth-Century Convict Prison (New York: Cambridge University Press, 2020), p. 257.

[7] Farrell, Women, Crime and Punishment in Ireland, p. 260.

The Problem with Prison – From an Academic Who’s Been There

The Problem with Prison – From an Academic Who’s Been There

Gary F. Fisher is an inter-disciplinary teacher and researcher in the liberal arts tradition. He received his doctorate in Classics from the University of Nottingham in 2020 and has published research on a variety of subjects, ranging from the history of education to twentieth-century travel literature. He is currently employed by Lincoln College.

The National Records of Scotland recently released their much-delayed statistics on the number of drug-related deaths that occurred during 2019. They revealed a continued rise to a record high, firmly cementing Scotland’s position as the drug-deaths capital of Europe. This, combined with the fact that Scotland also boasts the largest prison population per capita in Western Europe, has prompted a slew of articles proposing various types of reform. So far, so familiar. ‘Prisons in this country are a mess’ is one of the few sentiments shared on both sides of the political aisle in the UK. It is a sentiment that you can find in both the New Statesman and The Spectator. As the Oxford History of the Prison has noted, it is a sentiment that has persisted for over two centuries, at least since the prison reformer John Howard roundly criticised the condition of eighteenth-century Britain’s criminal justice system in his 1777 work The State of the Prisons.

That, sadly, is where the unity ends. While Britons can collectively agree that they are not happy with the state of our prisons, we differ wildly in our proposed solutions. More than that, we cannot even agree on precisely what the problem is. On the one hand, there are those who believe prisons have become too soft, that they are limp-wristed holiday camps in which society’s foulest enter through a revolving door to be waited on hand and foot before being released all too quickly upon an unsuspecting public. On the other hand, there are those who view prisons as brutal and draconian institutions, in which cruel and oppressive restrictions on individuals’ human rights serve to entrench and reinforce criminal behaviour to no palpable social benefit. The former will question what punishment or deterrent is offered by giving potentially violent and dangerous criminals free access to entertainment resources and educational courses, luxuries that law-abiding citizens could only access at personal expense. The latter will appeal to examples of ‘humane incarceration’, typically exhibited in Scandinavian countries, and cite the reduced rates of re-offending associated with a more rehabilitative approach. Representatives of these two broad camps regularly spar on the airwaves, yet these clashes rarely serve to advance the dialogue. One can watch two daytime television debates entitled ‘Are we too soft on prisoners?’, one from ten years ago and one from only last year, and see almost exactly the same points and rebuttals being made, with no significant innovation in reasoning in the intervening decade. Neither side will engage with, let alone be convinced by, the arguments of their opponents. In fact, it’s questionable whether they even hear each other.

Courting one of these two sides has become something of a prerequisite for elected office in the UK. Upon his election in 2019, Boris Johnson immediately sought to placate the ‘tough on crime’ crowd by introducing reforms to expand sentence durations and prison numbers. On the other side of the debate, since her election as Scottish First Minister in 2014, Nicola Sturgeon has consistently cultivated favour amongst those who favour a progressive approach to criminal justice by introducing reforms to move the focus of Scottish criminal justice away from punishment and towards rehabilitation. This politicisation of the debate has hardly helped matters and, frustrated by the fruitlessness of dialogue on the subject, one criminal defence lawyer, Iain Smith, recently penned a plea in Scottish Legal News. Smith implored legislators to ‘move beyond tokenistic, meaningless terms like being “hard” or “soft” on crime’, and instead to adopt a ‘smart’ approach focused on reducing offending through solutions that lie outside the walls of a prison. As well-intentioned as Smith’s plea may be, it seems unlikely to gain traction.

Highly conscious of this fatalistic public attitude towards the prison system, I first passed through the gates of a category C men’s prison at the beginning of 2020. I should stress that I entered voluntarily, joining the staff as the manager of the prison’s library. It was not long into my career as the librarian that I found that the same two broad camps that occupied public dialogue had carried through into the four walls of the prison. There were those staff who pined for the days of ‘proper prison’ and suggested that the modern officer ought to be renamed the ‘custody butler’ for the amount of waiting on their charges that they were increasingly expected to perform. Drawn against them were those who embraced the sector’s increased emphasis on supporting rather than simply securing their residents, and who quietly derided those individuals they believed to be insufficiently ‘pro-prisoner’ in the execution of their duties. The incompatibility of these two schools was driven home to me when the library found itself in possession of several copies of a workout guide that could be completed in a cell with no equipment. One of my colleagues was excited by the prospect of distributing this to the men in their cells so that they could continue to exercise while the ongoing pandemic limited their yard time. The other asked, ‘Why would we want to help them get stronger?’

This was more than a simple difference in methods: it was a disagreement as to what the fundamental goal of prisons actually was. To some, their goal was to punish, and anything other than Dickensian horror would represent a betrayal of that goal. To others, their aim was to reform and support, and any interruption to their counselling sessions and wellbeing workshops constituted a frustration of that aim. No wonder these two sides are unreceptive to each other’s arguments: they’re arguing from different pages. They don’t merely disagree on what is wrong with prisons, they disagree on what a ‘correct’ prison would look like.

The reality is that the punitive and rehabilitative approaches, while not necessarily completely mutually exclusive, are at the very least mutually counter-productive. It is somewhere between hard and impossible to adopt one without undermining the other. During my time in the prison library, I witnessed how these two approaches played against each other, with the prisoners caught in the middle. To illuminate this, consider a few examples. Upon being sentenced, a newly convicted offender will be transported to their holding facility in a secure transport vehicle, colloquially known as a ‘sweatbox’. Inside the sweatbox the offender is crammed into a tiny cubicle smaller than an aeroplane bathroom. So small is this space that taller offenders are unable to sit down properly within it, instead having to half-stand, half-sit for the duration of their indeterminately long journey from court to prison. The transport’s blacked-out windows make the prisoner invisible to the world outside; he is assigned his number and stripped of all personal effects. With this removal of identity completed, the prisoner is then assigned a counsellor and mental health support worker to help him explore the roots of his criminality and come to terms with the very personality that has just been stripped from him.

A prisoner is encouraged to explore his faith and has regular contact with a large team of professionally-trained chaplains from a variety of religious denominations. But if he indulges too deeply in this faith he will find himself earning the attention of counter-extremism professionals who will monitor him and his reading habits closely and reprimand him or extend his sentence if they deem it appropriate. 

He is given a wide variety of learning opportunities and encouraged to attain qualifications that will hold him in good stead upon his return to the outside world. But he is also barred from accessing the greatest source of learning and information ever created: the World Wide Web. He instead has to make do with the finite materials that the already overstretched library and education services are able to provide, and risks a revocation of his learning privileges should a book be returned late, damaged or dog-eared.

Of course, those who believe their job is to punish and those who believe their job is to rehabilitate are frustrated with each other. They are not merely pursuing different objectives; they are actively cancelling out each other’s efforts. I am not the first to have noted the incompatibility of these goals. In 2019 James Bourke, the governor of H.M.P. Winchester, was reported in The Daily Telegraph criticising the culture of ‘fantasy’ surrounding what prisons are able to achieve, and how the present attempt to achieve both punishment and rehabilitation has left both goals unfulfilled. Although conditions in prison may be a horrifying deterrent for the ‘nice, white middle-class’ people who legislate for them, Bourke claims their structure and security mean they can act as a ‘place of refuge’ for those from more troubled backgrounds. Meanwhile, the idea that a five-week prison sentence is enough time to successfully rehabilitate a convicted criminal after years, if not decades, of accumulated suffering and habitual criminality is derided as a ‘fantasy’. In attempting both to deter criminal behaviour through harsh punishment and to rehabilitate convicted criminals through reformative support, prisons seem doomed to fail at both goals.

This leads back to the condition of public debate about criminal justice reform. Despite the great amounts of ink spilled and spittle launched debating the conditions of our prisons, we never seem to have actually tackled this most fundamental of issues: what do we actually want our prisons to achieve? Do we want to rehabilitate, or do we want to punish? Neither goal is palpably ridiculous, but we can’t have our cake and eat it too. As it stands, the same arguments will continue to be repeated, drug-related deaths will continue to rise, and the one thing that we’ll all be able to agree on will be that we’re not happy with the state of our prisons.

In the BBC’s recent political thriller Roadkill, the Prime Minister, played by Helen McCrory, informs Hugh Laurie’s Justice Minister that ‘We lock people up. We’re famous for it. We like locking people up. It’s in our character’. For the foreseeable future, that seems unlikely to change.


British High-Seas’ Sovereignty: A ‘Fisherman’s Tale’

British High-Seas’ Sovereignty: A ‘Fisherman’s Tale’[1]

Dr David Robinson is the Editor-in-Chief of The MHR and an Honorary Post-Doctoral Fellow of the University of Nottingham. In this Spotlight article, he discusses Britain’s shifting (shifty?) presentation of history over fishing rights…

For the past five years, sovereignty has been the dominant feature of British public discourse: in particular, its gradual erosion since the UK joined the European Community in 1973. Nowhere was the perceived decline in Britain’s ability to maximise its advantages and strategic resources more apparent than in the fishing industry. The nation’s once-proud and dominant trawler fleet, so the argument was advanced, had been dealt an iniquitous losing hand from the bottom of the deck, by faceless sleight-of-hand Eurocrat croupiers, in the guise of the Common Fisheries Policy.

Tensions arose recently when French fishermen attempted a blockade of Jersey, angry at what they saw as the late imposition of licensing requirements by Jersey under the new UK-EU Trade and Cooperation Agreement (TCA). When British Prime Minister Boris Johnson dispatched the Royal Navy, the popular British tabloid newspaper The Sun responded in fine form with the headline, ‘Take Sprat! Jersey fishing: Royal Navy warships see off blockade and send 56-strong fleet of French boats packing!’

For Brexiteers, the decimation of the British fishing industry has long been a direct result of the restrictions placed on British vessels fishing their own territorial waters. At the same time, foreign incursions were encouraged, and EU quotas allowed foreign boats to land far bigger catches.

Despite representing just 0.12% of the British economy, fishing was presented as a paradigmatic symbol of the European yoke. And Britain was not to be yoked, particularly by a continent that owed its freedom to the sacrifices made, twice, by the flower of British youth. Arch-Brexiteers frequently suggested parallels between the two world wars and Britain’s exit from the EU. Nigel Farage, for example, was pictured in the right-wing press standing next to posters advertising Christopher Nolan’s latest movie and exhorting Britain’s Remain-supporting youth ‘to go out and watch #Dunkirk’!

This was a powerful argument, as it offered an example of why Britain should leave the EU that went beyond simple economics. Broad Remain counter-arguments that focused on the loss of a few percentage points of GDP over a couple of decades were a brittle defence of EU membership, especially when British seadogs were left muzzled and tied to the pier stanchions of Grimsby and Hull, whilst Spanish and French armadas ruled the North and Irish Seas as well as our Channel waters. When the referendum cards went down, symbolic beat shambolic; ‘shambolic’ being the only way the Remain campaign can really be summarised. Besides, it was argued, leaving the EU would mean an economic resurgence for the British fishing industry. EU boats would be banished for good from UK territorial waters, and Britain would reassert its traditional sovereignty and fish ‘a sea of opportunity’.

As it has transpired, however, negotiations with the EU have not concluded as well as British fishermen and Brexit voters had hoped. For a variety of reasons, which could be grouped under the heading ‘reality’, the British fishing industry’s hopes have foundered on the rocks of political and economic expediency, and they have taken to the media, post-Brexit, crying ‘betrayal!’.

That, however, is not the point of this article.

The point, rather, is the selective version of history successfully deployed to persuade British voters that their future lay in the past; or, more precisely, in a return to a previous state of ‘sovereignty’ that never really existed. The broader lesson is that sovereign and economic interests are best served not by looking backwards to an imagined past, but by a realistic appreciation of a nation’s current tactical and geo-political strengths.

The historical irony is that the British government’s claims over sovereignty and territorial waters are the very opposite of the traditional case made by British fishermen and governments defending their interests since the nineteenth century. Central to this evolving story has been Britain’s fishing rivalry with Iceland.

In the late 1890s, the accepted limit of territorial coastal sovereignty was three miles. For reasons that went beyond fishing rights, Britain’s official position, defending its dominance of the high seas, was that the three-mile limit was ‘a principle on which we might be prepared to go to war with the strongest power in the world.’[2] So when Iceland got uppity around 1890 and banned foreign trawlers from fishing within four miles of its coast, bays, and fjords to protect dwindling fish stocks, a Royal Naval display of ‘gunboat diplomacy’ put them back in their place. This display of ‘might is right’ held until the early 1950s, by which time the Americans were mightier. When the United States asserted its right to defend its fisheries well beyond the three-mile limit, Iceland followed suit and, despite strenuous British diplomatic and legal efforts, Britain was forced to accept Iceland’s enforcement of a four-mile exclusion zone.[3]

Iceland, however, had played the smart game. Unable to depend on naval power, it leveraged its strategically important geographical position, its membership of NATO, coupled with an oil-for-fish agreement with the Soviet Union, to persuade the broader Western powers that, excuse the pun, there were bigger fish to fry. Concerned about a potential Icelandic pivot eastward, Britain was persuaded to acquiesce.[4]

Some turned to history to vent their frustration at Britain’s inability to enforce its ‘rights’ through military means. In 1955, the senior Foreign Office civil servant Jack Ward lamented a government reluctance to sink the Icelandic coastguard as ‘sadly lacking in the Nelson touch.’[5] The British also reminded Iceland that ‘they had been fishing in Icelandic waters since the early fifteenth century and therefore had traditional rights to fish there.’[6] Sixty years later, the same argument from history failed to impress British negotiators when Olivier Leprêtre, the president of the northern France fisheries committee, noted that ‘fishermen have always followed the fish. At the start of the last century, my great grandfather fished in the Thames estuary.’

The 1950s skirmish was the prelude to further conflict between the two nations, dubbed the Cod Wars. In 1958, Iceland declared a twelve-mile territorial limit, from which British trawlers were to be excluded. After two years of hostilities, which saw the Icelandic coastguard, British trawlers and Royal Navy warships exchange boarding parties, the British backed down once again.

Emboldened by its successes, Iceland continued to extend its territorial claims: to fifty miles in 1972, and to 200 miles in 1975. This time, the threat to life was real, with several deliberate and accidental collisions between trawlers, British warships and the Icelandic coastguard. Britain even deployed the Royal Air Force in an intimidatory capacity. Good sense prevailed when attempts by Halifax aircraft to use trailing cables (communication aerials) to rip the aerials from Icelandic trawlers, which were passing the positions of their British counterparts to the Icelandic coastguard, were aborted due to the serious threat to the lives of Icelandic seamen.[7]

Once again Iceland understood its tactical strengths, leveraging broader support by threatening to leave NATO and to expel the US military from its key strategic base at Keflavik. These second and third Cod Wars were, again, concluded on terms favourable to Iceland. Britain grudgingly accepted the same 200-mile territorial limit that it has, more recently, vociferously defended, citing the red line of ‘taking back control’ of its ‘historically’ sovereign waters.[8]

Interestingly, as in Britain recently, internal politics played a significant part in Iceland’s approach to negotiations. Whilst many in its government were minded to be less belligerent, the popular Icelandic Communist Party forced their hand by whipping up populist support for strongly opposing the British.[9] Here is one lesson: all external conflict is, to some degree, interconnected with internal politics.

In a recent ironic twist which has slipped beneath the radar, the Royal Air Force has just named one of its new Poseidon MRA1 maritime patrol aircraft Spirit of Reykjavik in honour of the role played by the Icelandic capital and its people in enabling the Allied victory during the Battle of the Atlantic.[10] You couldn’t make it up.

Perhaps we might dub recent negotiations some kind of fourth Cod War, albeit less dramatic. Who won the latest round? In terms of internal politics, the powerful bloc of ‘Vote Leave’ politicians that now controls the British government, hands down. Their recourse to history, however fallacious and disingenuous (fishy?), was strongly persuasive. Of course, there were many different motivations for leaving the EU. As a good friend of mine pointed out, ‘were I to know that the UK would, economically, sink into the sea, I’d still have voted Brexit!’ Repeating that to other Leave-supporting friends has resulted in enthusiastic agreement. One has to wonder if the future of the fishing industry was really their top priority.

The problem with this interpretation is that the long-standing, structural issues facing Britain’s fishing industry lie not in EU unfairness or belligerence, but in decades of underinvestment and neglect by the British government. European fishers long ago bought up British trawler quotas as long-term investments, whereas British owners, unsupported by their government, were happy to sell for a short-term killing and redeploy their boats in the North Sea oil industry. The British ‘fishing industry’ is as much an international trade in quotas, controlled by foreign interests and a few wealthy British families, as it is concerned with actually catching fish. This was no basis for a strong negotiating position with EU officials, who were never going to be influenced by arguments about Britain’s ‘historic sovereignty’ of the seas, arguments which have reversed over the course of a century, however persuasive they were to English ‘patriots’.

So, the trawlermen of Hull, Grimsby and Peterhead feel betrayed by a mostly as-you-were agreement on fishing with the EU. As John Lichfield puts it, ‘conclusion…You can win a political argument with lies and myths. Governing or negotiating with them is as useful as fishing without nets.’

Still, all may not be lost. An internal DEFRA email recently noted that, to protect its fishing waters, Britain has just ‘12 vessels that need to monitor a space three times the size of the surface area of the UK.’ Iceland was successful with fewer. In 1958, it ‘had no navy and the Icelandic coast guard had only seven small ships with one gun each.’[11] The difference is that Iceland had a better appreciation of where its strengths lay and a genuine commitment to its fishing industry. And, despite internal political disagreements, Icelanders were able to unite around their nation’s practical interests. Britain’s perceived aggression and intractability in the Cod Wars ‘united the Icelandic nation, from Left to Right’, despite their internal political fractures.[12] It seems this is a lesson Britain has failed to learn, as a similar belligerence has united a traditionally divided EU.

Domestic political victory is somewhat pyrrhic when it fails to achieve its broader economic aims and leaves many of its supporters feeling betrayed. In 1951, the Icelandic press noted that Britain appeared to think it was still ‘living in the seventeenth or eighteenth century.’[13] In many ways, it seems it still does. In any reasonable assessment, Britain stands in the first rank of nations. It is a danger to itself and a loss to the broader international community that it continually insists on looking backwards to its past to find its future.


Bibliography

Primary Sources

The National Archives of the U.K.: Public Record Office, Ministry for Agriculture and Fisheries, 41/674, Grey to Findlay, Oslo, 26 June 1911

The National Archives of the U.K.: Public Record Office, Foreign Office, 371/116445/NL1351/186, F.O. draft submission, Sept. 1955.

Vísir (Icelandic Daily), 5 June 1958.

 

Secondary Sources

Gudmundsson, G. J., ‘The Cod and the Cold War,’ Scandinavian Journal of History, 31 (2006), pp. 97-118.

Johannesson, G. T., ‘How “cod war” came: the origins of the Anglo-Icelandic fisheries dispute, 1958–61’, Historical Research, 77 (2004), pp. 544-74.

Kurlansky, M., Cod: A Biography of the Fish that Changed the World (New York, 1997).

Notes

[1] A boastful or exaggerated claim

[2] The National Archives of the U.K.: Public Record Office, MAF 41/674, Grey to Findlay, Oslo, 26 June 1911

[3] G. T. Johannesson, ‘How “cod war” came: the origins of the Anglo-Icelandic fisheries dispute, 1958–61’, Historical Research, 77 (2004), pp. 544-74, at pp. 545-9.

[4] Johannesson, ‘How “cod war” came’, pp. 548-9.

[5] T.N.A.: P.R.O., FO 371/116445/NL1351/186, F.O. draft submission, Sept. 1955.

[6] G. J. Gudmundsson, ‘The Cod and the Cold War,’ Scandinavian Journal of History, 31 (2006), pp. 97-118, p. 100.

[7] My thanks to Sqn. Ldr. R. P. Robinson (retd.) for his remembrances.

[8] Gudmundsson, ‘The Cod and the Cold War’, pp. 108-10.

[9] Gudmundsson, ‘The Cod and the Cold War’, p. 102.

[10] Once again, I am indebted to Sqn. Ldr. R. P. Robinson for drawing my attention to this point.

[11] M. Kurlansky, Cod: A Biography of the Fish that Changed the World (New York, 1997), p. 162.

[12] Johannesson, ‘How “cod war” came’, p. 564.

[13] Vísir (Icelandic Daily), 5 June 1958.

Digital Archive Review: The Internet Archive

This review is the first of a new series, intended as a learning resource and aimed primarily at undergraduates about to embark on individual research projects and dissertations, but it will also be relevant to anyone interested in the rich potential of digital archives for accessing primary sources. Here, Robert Frost discusses using the Internet Archive to access out-of-copyright books from the early nineteenth century. The Internet Archive is an indispensable resource for all those interested in modern British history, cultural history, and beyond, and one now more important than ever due to lockdown restrictions this past year.

 

Biography: Robert Frost is an AHRC-funded doctoral student with joint Geography and History department supervision. He is interested in Georgian and early Victorian travel, exploration and field studies in the Eastern Mediterranean.

Throughout my PhD on the work of the Egyptologist and antiquary Sir Gardner Wilkinson (1797–1875), I have found the Internet Archive (IA) invaluable.[1] In this piece, I give a short introduction to the IA, recount how I have used it in my research, and cover a few of the problems that using such a digital resource can bring.

The IA started off in the mid-1990s and is now an organisation with a number of branches: most famously, it runs the ‘Wayback Machine’, an online archive of billions of website pages, as well as an ‘Open Library’ which ‘loans’ new books for a limited time period. The IA also holds film and audio media. My focus here, though, is on a more specific part: its massive collection of out-of-copyright books from the eighteenth, nineteenth, and twentieth centuries (and a few from even further back), sourced in large part from major public and university libraries in the United States.

Being open access, the IA is simple to get started with: no passwords are required, although there is an option to create an account to access additional features, including the lending library of more recent books (Fig. 1). The interface is also easy to use: if you know what book you want, then you only need to type the title into the search bar at the top right-hand corner of the page. If the IA holds it, then it will come up (Fig. 2). Also worthy of note is the ‘Advanced Search’ functionality: it is possible to narrow the results down to individual years or keywords. Unlike some other online archives (including at least one subscription archive which I know of), the IA allows users to export entire out-of-copyright books as PDFs, rather than simply view them or download a limited page range. The option is available in a panel when scrolling down the page. I have used this feature to download copies of all of Gardner Wilkinson’s books, including multi-volume works such as his Manners and Customs of the Ancient Egyptians (1837), Modern Egypt and Thebes (1843) and Dalmatia and Montenegro (1848), amongst much else (Fig. 3).[2]
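
For readers comfortable with a little scripting, the same search-and-download workflow can also be automated rather than carried out through the website. The short sketch below is offered purely as an illustration: it assumes the third-party internetarchive Python package (installed with pip install internetarchive), and the query string, the PDF-only filter and the destination folder are all hypothetical choices rather than anything prescribed by the IA itself.

```python
# A minimal sketch, assuming the third-party 'internetarchive' package
# (pip install internetarchive). Query and options are illustrative only.
from internetarchive import search_items, download

# Search the Archive's metadata for digitised copies of a title,
# restricting results to texts rather than film or audio.
query = 'title:("Dalmatia and Montenegro") AND mediatype:texts'

for result in search_items(query):
    identifier = result["identifier"]  # each IA item has a unique identifier
    print("Found:", identifier)

    # Fetch only the PDF files for this item into ./downloads/<identifier>/
    download(identifier, glob_pattern="*.pdf", destdir="downloads", verbose=True)
    break  # stop after the first match; this is only a demonstration
```

Whether automating the process is worthwhile depends on the scale of the project; for a handful of volumes, the search bar described above is quicker.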

Had this resource not existed (excluding other online archives from consideration for a moment), then I would have had to buy reprinted copies of Wilkinson’s books from Cambridge University Press (at a price of £30 per volume) where available, and take photographs of books in archives for his more obscure printed works. As Wilkinson’s published books run to over 6,000 pages, by no means all of which are available as reprints, that would be a sizeable number of photographs, which I would then have had to spend even more time organising. While a great deal of extremely good research was conducted in the pre-digital age, it is worth noting what else becomes easier when you have digital copies of books immediately to hand (which you can annotate, and which are not at risk of being recalled by other library users). More cross-referencing is one possibility. So is spending more time on the corresponding Wilkinson manuscript collection.

Not consulting the original copies comes at a cost, however. The most significant drawbacks to using the IA to read nineteenth- and early twentieth-century books are those associated with materiality. At one level, reading a book published in the Victorian era on a screen is quite different to how it was originally read, an argument forwarded by John Berger in relation to paintings.[3] Whether this makes a significant difference or not is debatable. The closest that I would come to making any sort of complaint about a book being abstracted by the IA is that you can all too easily lose any conception of its size: was a book octavo-sized and therefore read by a large audience? Or was it an ‘elephant folio’ which would hardly ever have escaped a scholarly library? These questions need to be asked (and answered) in any attempt to put a book in its context, yet they are ones I only started to think about seriously after seeing several enormous antiquarian volumes, which I had previously been familiar with only through the IA, in ‘hard’ copy in a research library.

What is of far more practical significance, at any rate to my own research, is that the paratextual material (maps and images) frequently fares less well in the copying process than the text. This issue has affected my own research on several occasions, in relation to one of Gardner Wilkinson’s books, Dalmatia and Montenegro (1848), a travel and regional history book on the southernmost regions of the Habsburg Empire, with forays into the contested borderland with the Ottoman Empire. It was only after studying this text for a year that I realised that the original edition included a fold-out map of the eastern Adriatic and Balkan coast. This feature had not been reproduced in the copy that I had downloaded from the IA, and I had to rely on another website to see it. Finding out that the original book had a map was not a surprise, since the publisher, John Murray, often included them in regional and travel books, but at the same time my thoughts at various points during the past year were more along the lines of ‘It’s so frustrating having to use Google Earth—why doesn’t this book have a map?’. The question that I should have been asking was, of course, ‘I wonder if the original had a map?’.

The story with images is better, but has some of the same problems. One is that some images are in landscape orientation, and it takes more effort to rotate them (they need to be downloaded first) than it would for a physical book: a case where John Berger’s critique really does matter, as it is all too easy to skip over them, or simply glance at them and move on. This is a problem that can be easily solved, given a few moments. But there are also more serious issues: some images in scans are blurred or otherwise distorted (and on a few occasions, I have even found that whole pages are missing). Usually this has not been too much of a problem, but there have been times when it has been an issue: I still recall one supervision in which my supervisors and I disagreed as to whether a particular image of a track on the hills above the plain of Tzetinie and Lake Scutari (in Montenegro) showed a man on a donkey or not. It was hardly a major issue, but it was one which would not have arisen had I been using a hard copy rather than an inadequately copied image.

Whilst I have dwelt on some of the drawbacks of using the IA, in part because these are not often discussed, they should not suggest that the IA is a resource with major problems. Even when it comes to research on materiality, there are advantages. Unlike the digitisation produced by some scanning technology, such as ECCO, the images of book pages are usually of a high standard: annotations and marginalia, now an area of major interest amongst scholars, come out well, allowing signs of reader engagement and contemporary responses to be studied.[4] On several occasions I have even come across a reader noting the full name of an author where only initials were printed, a potentially invaluable piece of help, although one which of course depends on the copy being scanned: if copiers choose the books in the best condition, with clean pages, then historians interested in this angle may not even be aware of a lost opportunity. Especially during the last year, the choice has been, far more often than not, either to use a digital copy of a book (especially a nineteenth-century one), or to make do without it altogether. Other options exist: the HathiTrust covers a similar time period, and Early English Books Online features books from the late medieval and early modern periods. For me, though, the IA has quite literally been the most important website on the internet.

Figure 1: The Internet Archive homepage.

Figure 2: Results for Manners and Customs of the Ancient Egyptians (several volumes and editions).

Figure 3: Wilkinson’s Manners and Customs of the Ancient Egyptians—a navigable book, which is also available to download.


Bibliography

Berger, J., Ways of Seeing (London, 1972)

Jackson, H. J., Marginalia: Readers Writing in Books (New Haven CT, 2001)

Jackson, H. J., Romantic Readers: The Evidence of Marginalia (New Haven CT, 2005)

Internet Archive. <https://archive.org/>, accessed 21.3.2021.

Wilkinson, J. G., Manners and Customs of the Ancient Egyptians (London, 1837)

Wilkinson, J. G., Modern Egypt and Thebes (London, 1843)

Wilkinson, J. G., Dalmatia and Montenegro (London, 1848)

 

Notes

[1] Internet Archive. <https://archive.org/>, accessed 21.3.2021.

[2] J. G. Wilkinson, Manners and Customs of the Ancient Egyptians (London, 1837); J. G. Wilkinson, Modern Egypt and Thebes (London, 1843); J. G. Wilkinson, Dalmatia and Montenegro (London, 1848).

[3] J. Berger, Ways of Seeing (London, 1972).

[4] H. J. Jackson, Marginalia: Readers Writing in Books (New Haven CT, 2001); H. J. Jackson, Romantic Readers: The Evidence of Marginalia (New Haven CT, 2005).

Contemporary Scottish Diplomacy: Some Recent and Distant Parallels

The Scottish government under the SNP has frequently employed diplomacy to help secure its strategic goals. Given this prevalence, this article explores how contemporary Scottish diplomacy compares to three other related forms of diplomacy, drawn from both the recent and the distant past, in the hope that these comparisons might pave the way for increased understanding of the Scottish government’s actions.

Keywords: Diplomacy, Paradiplomacy, International Relations, Scotland, Medieval, Modern, Brexit, Scottish Independence.

Biography: Jamie Smith is a fourth-year PhD History student at the University of Nottingham. His research explores Anglo-Scottish and Anglo-Welsh diplomacy between 927 and 1154, using comparative and interdisciplinary methods.

Scots went to the polls this May for the Scottish parliament election, re-electing the pro-independence Scottish National Party (SNP), which first came to power in 2007. Within the SNP manifesto was a section on ‘Scotland in the World’, outlining its foreign policy plans: increased funding for international development, the adoption of a feminist foreign policy, and closer ties to European countries. Diplomacy is nothing new for the Scottish government. On its website you will find a section on international relations, outlining ongoing policies. These include engagement strategies with numerous countries, ministerial visits, a strategic review of Scoto-Irish relations, the promotion of Arctic cooperation, running offices abroad, and a commitment to a UN agreement. Yet this enthusiasm for diplomacy seemingly runs contrary to the Scottish parliament’s actual powers. When this devolved parliament was established in 1998, international relations were reserved, remaining under Westminster’s remit. The existence of contemporary Scottish diplomacy has drawn both academic and non-academic comment. Writing for the website UnHerd, for example, Henry Hill has highlighted the tension between Scottish diplomacy and Westminster’s powers. Regardless of whether it is right or wrong, Scottish diplomacy is happening, and therefore we should look to debate its objectives, methods and outcomes, not just its existence. Far from being unusual, the SNP’s diplomatic actions share common features with other case studies from both the recent and distant past. Here I highlight three such parallels, exploring diplomacy conducted by other sub-state governments, governments in exile, and medieval figures. These comparisons present innovative ways of interpreting Scottish foreign policy, which future scholars can utilise and build on, paving the way for increased understanding of present-day Scottish politics.

Diplomacy conducted by other sub-state governments and entities is one obvious, relevant comparison. The Scottish government is certainly not unique in having foreign policies. Scholars have taken an interest in the diplomacy of towns, regions and other sub-state nations, such as Wales, Macao, US states and Japanese islands, to name but a few examples.[1] Whilst there are many potential routes to explore, the theme of differentiation might prove particularly fruitful. This is where a sub-state administration pursues diplomatic actions that contrast with or counter the foreign policy of the state within which it sits. For example, during the 1960s the Quebecois government was concerned that the Canadian foreign ministry was unrepresentative of French-speaking people and was overlooking the Francophone world. Consequently, the government of Quebec addressed this deficit, forging its own ties with French-speaking countries, notably signing an agreement on education with France which it hoped would help overhaul its own education system.[2] Alternatively, in the 1980s the US government intervened in Nicaragua, supporting forces that were attempting to oust the ruling Sandinista government. To demonstrate opposition to their own government’s foreign policy and to show solidarity with Nicaragua, US cities and towns began twinning with Nicaraguan ones.[3] Today, 32 Nicaraguan municipalities are still twinned with a US counterpart.

The Scottish government under the SNP has pursued a similar policy in relation to the European Union (EU), following the Brexit referendum in 2016, which saw the UK collectively vote to leave the EU, but the majority of Scots vote to remain. Whilst the UK government spent the past few years negotiating the state’s exit from the supranational institution, the Scottish government has sought to strengthen its ties with the EU. For instance, the Scottish 2020-21 budget declares that the government’s goals include ‘ensuring Scotland remains a valued and well-connected nation, despite the UK’s decision to leave the EU’ and to ‘demonstrate our ambition for independent membership of the European Union.’ Several types of diplomatic practice complement these goals, such as meetings between members of the Scottish government and European figures. In 2018 and 2019, Scottish ministers made 80 diplomatic trips to European capitals, with Nicola Sturgeon, the current Scottish first minister, meeting with the EU’s Brexit negotiator, Michel Barnier. They have also used the Scottish government’s international office in Brussels to network with European officials. One of the last events hosted there prior to the COVID-19 pandemic was a Burns Night VIP supper, attended by representatives from Croatia and Germany, ‘as well as MEPs and other high level EU contacts.’ Given the focus on the EU, it is perhaps no coincidence that, with a budget of £2,079,000 a year, the Brussels office receives more funding than any of Scotland’s other international offices. Reaching out to the EU, both through rhetoric and diplomatic actions, is intended to smooth an independent Scotland’s accession into the bloc. Whilst the EU was cool towards Scottish membership during the referendum on independence in 2014, the SNP hopes that its overtures will encourage the EU to take a more positive stance on the issue. Regardless, this diplomacy can be seen as another example of sub-state differentiation, which seeks to counter the UK government’s Brexit policy and to ensure that Scotland can resume its membership of the EU in the near future.

Another comparison can be made with governments in exile, and the common desire for legitimacy. As noted above, establishing international offices and meeting foreign dignitaries are two strands of Scottish diplomacy. Including the aforementioned Brussels office, there are eight of these based around the world, often in strategically important locations such as Beijing and Washington. They operate as pseudo-embassies, advocating Scottish interests and strengthening ties with host countries. As for the summits and conferences that Scottish officials have taken part in, these have not only been with representatives of the EU and its member states. A joint heritage initiative involving the Scottish government and five global heritage sites provided Alex Salmond, at the time first minister of Scotland, with the opportunity to meet key Chinese officials and the then Indian prime minister, Manmohan Singh.[4] Likewise, during a trip to North America in February 2019, Nicola Sturgeon met with UN officials as well as the governor of New Jersey.

These tactics echo those employed by governments in exile. One such example of this is the Tibetan Government in Exile, set up in northern India by the Dalai Lama in 1960, with the goal of restoring independence to Tibet following its annexation by China. Like the SNP, it has made diplomacy a central part of its policy plan, establishing pseudo-embassies in eleven cities and organising pseudo-summits between the Dalai Lama and foreign leaders.[5]

Whilst outwardly different, the SNP-controlled Scottish government and the Tibetan Government in Exile have similar goals. Both groups seek to assert themselves as the legitimate government of a nation that is currently controlled by another government (although even the most hard-line SNP member would struggle to make the case that Scotland’s position within the UK is exactly comparable to the Chinese occupation of Tibet). Thus, these groups appropriate the symbols of statehood. Since diplomacy is traditionally seen as under the remit of state governments, these groups increase their legitimacy by practicing it.[6] The significance of their diplomatic acts is perhaps best demonstrated by their opponents’ responses. After Prime Minister David Cameron met the Dalai Lama in 2012, the Chinese government condemned the meeting, stating: ‘We ask the British side to take the Chinese side’s solemn stance seriously, stop indulging and supporting “Tibet independence” anti-China forces [sic].’ Similarly, the UK government has attempted to curtail or limit Scottish diplomacy. For example, the then foreign secretary Jeremy Hunt stopped the Foreign Office from providing consular support for Sturgeon’s trips abroad on the grounds that she was using them to drum up support for independence. The overlap, then, between these groups, their goals, and the responses to their diplomatic actions suggests that comparisons between Scottish diplomacy and governments in exile could provide important insight.

Finally, modern Scottish diplomacy warrants comparison with examples of medieval diplomacy. Contrary to more traditional views, scholars argue that diplomacy in the modern world is no longer the sole preserve of the state. Rather, globalisation has caused diplomacy to fragment, with numerous non-state entities forging their own foreign policies. This includes the sub-state governments and governments in exile mentioned above, as well as supranational institutions, such as the EU, and both legitimate and illicit non-state organisations, such as multinational corporations and terrorist networks. In line with this, scholars such as John Watkins and Jakub Grygiel have called for comparisons between the increasingly “post-state” diplomacy of the modern world and interactions in the pre-state period.[7] Medieval diplomacy was similarly multifaceted, not solely restricted to kings, but involving popes, bishops, magnates, heirs, claimants and exiles, amongst many others.

Whilst there are many possible comparisons, medieval earls are perhaps the most relevant to the modern Scottish government, and particularly the strategy of counterbalancing. Earls were the medieval equivalent of sub-state governments, ruling over territory but subordinate to another sovereign power, in their case a king rather than a state. One such example, who is discussed in my thesis, is Earl Ælfgar of Mercia, a prominent magnate during the reign of King Edward the Confessor of England (1042-66). This was a period of instability within England, involving both intra-noble division and disputes between the magnates and the king. In 1055, Ælfgar, who at the time was earl of East Anglia, was outlawed by the English court. In response, he went to King Gruffydd ap Llywelyn of Wales, and together they led a joint attack on Hereford. The English court caved in to this military pressure, and reinstated Ælfgar as an earl.[8] K. L. Maund suggests Ælfgar had already reached out to Gruffydd prior to 1055.[9] Whether the alliance had been prearranged or not, Ælfgar certainly built on it, marrying his daughter to Gruffydd, an event usually dated to c. 1057.[10] This seemingly put him in a good position. When Ælfgar, now earl of Mercia, was outlawed once more in 1058, we are told he was later reinstated, thanks again to Gruffydd’s military backing.[11]

During a period of intra-English disunity, Earl Ælfgar tied himself to a neighbouring king, securing a military ally and safe haven, to counterbalance the domestic division he faced. The Scottish approach to the EU follows a similar rationale. Independence would politically and economically divide Scotland from the rest of the UK, with the potential for a decline in trade being a major cause for concern. The SNP’s solution is EU membership. Fiona Hyslop, the Scottish Cabinet Secretary for Economy, Fair Work and Culture, responds to concerns about an independent Scotland’s trade by pointing to the Republic of Ireland: ‘Through membership of the EU, independent Ireland has dramatically reduced its trade dependence on the UK, diversifying into Europe and in the process its national income per head has overtaken the UK.’ The aforementioned Scottish diplomacy with EU leaders aims to secure Scotland’s future membership of the EU, and thus another market that will counterbalance losing access to the UK one. The main difference between this and Ælfgar’s diplomacy is that he was forced to seek foreign help due to intra-English disputes, whilst the SNP wants a self-imposed division from the UK, which it will then need to counterbalance.

Evidently, whether we like it or not, Scottish diplomacy is a prominent feature of the contemporary international landscape. Consequently, I have highlighted here three relevant comparisons which could improve our understanding of it: the diplomacy of other sub-state governments, governments in exile, and medieval individuals, such as earls. Other modern sub-state governments, such as the Quebecois government, are perhaps the most applicable to the Scottish government given their similar constitutional positions, although all three approaches could prove useful for scholars of politics and international relations over the next few years. Having been re-elected, Nicola Sturgeon plans to call another independence referendum once the COVID-19 pandemic has passed, meaning that Scottish diplomacy will likely remain an important subject of analysis, both during a future referendum campaign and in the early days of an independent Scotland.


 

Bibliography

Primary Sources

‘Anglo-Saxon Chronicle’, in English Historical Documents: c. 1042-1189, ed. by D.C. Douglas and G.W. Greenaway (London, 1953), pp. 107-203.

Orderic Vitalis, The Ecclesiastical History, ed. and trans. by M. Chibnall, 6 vols. (Oxford, 1969-80).

Secondary Sources

Clarke, A., ‘Digital Heritage Diplomacy and the Scottish Ten Initiative’, Future Anterior: Journal of Historic Preservation, History, Theory and Criticism, 13 (2016), pp. 51-64.

Criekemans, D., ‘Regional Sub-State Diplomacy from a Comparative Perspective: Quebec, Scotland, Bavaria, Catalonia, Wallonia and Flanders,’ The Hague Journal of Diplomacy, 5 (2010), pp. 37-54.

Grygiel, J., ‘The Primacy of Premodern History’, Security Studies, 22 (2013), pp. 1-32.

Jakubec, P., ‘Together and Alone in Allied London: Czechoslovak, Norwegian and Polish Governments-in-Exile, 1940-1945’, The International History Review, 42 (2020), pp. 465-84.

Maund, K. L., ‘The Welsh Alliances of Earl Ælfgar of Mercia and his Family in the Mid-Eleventh Century’, in R. A. Brown (ed.), Anglo-Norman Studies XI (Woodbridge, 1988), pp. 181-90.

McConnell, F., Moreau, T., & Dittmer, J., ‘Mimicking State Diplomacy: The Legitimizing Strategy of Unofficial Diplomacies,’ Geoforum, 43 (2012), pp. 804-14.

Paquin, S., ‘Identity Paradiplomacy in Québec,’ Québec Studies, 66 (2018), pp. 3-26.

Tavares, R., Paradiplomacy: Cities and States as Global Players (Oxford, 2016).

Watkins, J., ‘Toward a New Diplomatic History of Medieval and Early Modern Europe’, Journal of Medieval and Early Modern Studies, 38 (2008), pp. 1-14.

Williams, A., ‘Ælfgar, earl of Mercia’, Oxford Dictionary of National Biography, 23 September 2004.

 

Notes

[1] D. Criekemans, ‘Regional Sub-State Diplomacy from a Comparative Perspective: Quebec, Scotland, Bavaria, Catalonia, Wallonia and Flanders’, The Hague Journal of Diplomacy, 5 (2010), pp. 37-54.

[2] S. Paquin, ‘Identity Paradiplomacy in Québec’, Québec Studies, 66 (2018), pp. 19-21.

[3] R. Tavares, Paradiplomacy: Cities and States as Global Players (Oxford, 2016), pp. 234-35.

[4] A. Clarke, ‘Digital Heritage Diplomacy and the Scottish Ten Initiative’, Future Anterior: Journal of Historic Preservation, History, Theory and Criticism, 13 (2016), p. 56.

[5] F. McConnell, T. Moreau and J. Dittmer, ‘Mimicking State Diplomacy: The Legitimizing Strategy of Unofficial Diplomacies’, Geoforum, 43 (2012), pp. 806-08.

[6] P. Jakubec, ‘Together and Alone in Allied London: Czechoslovak, Norwegian and Polish Governments-in-Exile, 1940-1945’, The International History Review, 42 (2020), p. 468.

[7] J. Grygiel, ‘The Primacy of Premodern History’, Security Studies, 22 (2013), p. 2; J. Watkins, ‘Toward a New Diplomatic History of Medieval and Early Modern Europe’, Journal of Medieval and Early Modern Studies, 38 (2008), p. 5.

[8] ‘Anglo-Saxon Chronicle’ (Henceforth ‘ASC’), in English Historical Documents: c. 1042-1189, ed. by D. C. Douglas and G. W. Greenaway (London, 1953) (Henceforth EHD II), pp. 132-34.

[9] K. L. Maund, ‘The Welsh Alliances of Earl Ælfgar of Mercia and his Family in the Mid-Eleventh Century’, in R. Allen Brown (ed.) Anglo-Norman Studies XI (Woodbridge, 1988), p. 185.

[10] Orderic Vitalis, The Ecclesiastical History, ed. and trans. by M. Chibnall, 6 vols. (Oxford, 1969-80), 2, pp. 138-39; A. Williams, ‘Ælfgar, earl of Mercia’, Oxford Dictionary of National Biography, 23 September 2004: https://doi-org.ezproxy.nottingham.ac.uk/10.1093/ref:odnb/178. Accessed 13 April 2021.

[11] ‘ASC’, EHD II, p. 136.

Election Day and Britain’s Existential Crisis

In this Spotlight article for Election Day in the UK, David Robinson discusses how the past forty years of Britain’s economic and social history have led to a divided nation and a present-day existential crisis, which no main political party seems willing to discuss.

Across the UK, local elections are being held today, May 6th. The electorate could be forgiven for not having noticed. There has been little in the way of campaigning. This was, perhaps, to be expected. The opposition parties may have felt somewhat encumbered by a perceived duty not to interrupt the ongoing vaccine roll-out, whilst the lack of oppositional challenge suits the incumbent Conservatives, so why say anything when saying nothing may well preserve the status quo?

Yet, there is so much that remains unresolved from the politics of the past few years. The rise of European populism, the election of Donald Trump, and Brexit are understood to have been more than electoral choices in the normal sense. They were also expressions of anger and frustration by swathes of people who feel a lack of dignity in insecure and unfulfilling work that leaves them economically disadvantaged, left out of mainstream society, and looked down upon by university-educated, home-owning ‘elites’ with emotionally satisfying and financially rewarding careers. As I will argue, even had they campaigned hard, there is little that any of the main parties would have said that would have gone any way to addressing these problems.

Was it not ever thus? The ‘haves’ and ‘have nots’? Well, yes, but the less advantaged have, traditionally, at least had some political representation. Today, many feel that it is not they who have deserted the parties they have traditionally supported, but the latter who have deserted them in moving towards a form of identity politics and attempting to appeal more to the ‘liberal elite’ than the traditional ‘working class’.

Alongside this, the workplace has fundamentally changed. Over forty years, workers have had to exchange occupations that might not have paid as well as others but were secure, and which came with a justified sense of dignity and of contributing meaningfully to society, for menial, insecure and poorly paid work in warehouses and call centres. In contemporary Britain, there is a sense in which half the population are the working poor, engaged in administering, collating, and delivering consumer goods to the other half. The result is the anger and vitriol that has split the UK and other democracies and has come as a shock and surprise to those who had not even realised the extent of this disaffection.

The Brexit vote and Trump’s Presidential campaign fully understood and leveraged these feelings. Rather than the conventional political strategy of trying to unite the electorate behind a set of ideas, the cracks in society were deliberately and crudely crowbarred open more widely, encouraging the view that the nation was divided into two, diametrically opposed halves. This strategy, combined with the anger and hopelessness of so many, is nothing short of an existential threat to society and democracy. While the existential nature of this threat may have been subsumed beneath the emergency of the pandemic, it remains ready to re-emerge in full force as restrictions are lifted and the gap between the haves and the have-nots becomes visible once more.

But how did we get to this position of disaffection and division? What are the political forces and ideological concepts which brought us here?

In the late 1970s, there was a fundamental shift in ideas about how economies ought to function. More than a decade of stagnant growth, labour conflicts, and high unemployment opened the door to a set of ideas, long-held by some but rejected up to that point by governments of all stripes. Since the inter-war years, economists such as Friedrich Hayek and, later, Milton Friedman, had argued for less central decision-making and planning by governments, in favour of ‘the market’. The mechanisms of supply and demand, and competition between profit-motivated companies would, they argued, provide what people wanted far more efficiently than traditional government planning and control.

Furthermore, the idea of governments and civil servants working for the ‘public good’ was rejected. Instead, it was maintained, such public offices were used by their occupants to build their own careers and operate in their own self-interest.

All this would be swept away. Liberalised money markets would give companies and entrepreneurs easier access to capital with which to supply a wider range of goods and services, from which society would be free to choose.

The role of governments and central banks was now…to do nothing! The market was a more efficient arbiter of what society wanted than government bureaucracy and would provide higher quality at a lower cost.

This idea has been, over the past forty years or so, extended from consumer goods and services to public services traditionally provided by government through taxation. Whether it be the products of the steel industry or care for the elderly, the market would now provide, not government.

Such radicalism was initially implemented by the centre-right governments of Margaret Thatcher and Ronald Reagan, but its full flowering, arguably, came under the centre-left governments of Tony Blair, Bill Clinton, and Gerhard Schroeder. Importantly, the challenges we face as a result of such ideologies are not simple left-right debates. They have become ubiquitous across the political spectrum.

Have such policies worked? They have not. It is impossible to argue that there is now more equal access to high quality public services, health care, education, or care for the vulnerable. Access to high quality services such as these is largely dependent on income. Even organisations such as the IMF, at the heart of driving such policies for the last five decades, agree that these ideas have been ‘oversold’.

The recently retired Governor of the Bank of England, Mark Carney, has pointed out a more subtle problem. There has been a separation of ‘moral sentiment’ from economics. We no longer debate what is ‘right’: there is no political discussion of what we think ought to be provided in society regardless of increased costs, or of the wider ‘value’ of something beyond its economic value; the value of everything has become the price of everything. If the market has not provided something, then it cannot be provided.

Once again, this is not a left-right debate. Prior to this turn to the market in the late 1970s, debating ethical questions as part of economic decision-making was seen as a fundamental part of political action across the ideological divide, going back to the eighteenth century. Over the last few decades, however, governments of different political persuasions have justified inequalities, and their own lack of action in remedying them, by resorting to the argument that the market is the arbiter and expression of democratic choice. That nothing can be done in the face of the free market.

More generally, the turn towards market outcomes over the past four decades has determined that ‘success’, one’s status and reward, is mainly measured by economic outputs. It is argued that those who earn the highest incomes are those who have demonstrated the talent and hard work to best satisfy the demands of the market, and that it is right, therefore, that larger and larger economic rewards and prestige are concentrated in their hands. The corollary is that those who have less have proven themselves less able and less hard working in satisfying the same demands, and are thus equally deserving of their lower economic and social status.

It has become ubiquitous for political leaders to claim they have implemented, or are at least striving to implement, a ‘meritocracy’, a society in which one’s position, what might broadly be termed one’s ‘success’, is due to a combination of one’s hard work and ability; in short, one’s merit.[1]

Whilst leaders acknowledge that differences of class, gender, birth, race and so forth have presented, and continue to present, barriers to a level playing field on which all can compete, and ‘compete’ is relevant here, there is a general acceptance of a vision of the future in which such barriers continue to be dismantled. Once accomplished, it is argued, a ‘just’ society would finally be realised, in which one’s place in society is what one ‘deserves’. Certainly, such a society, politicians maintain, would be far ‘fairer’ than, say, an aristocracy or a class-bound society in which benefits are handed down from one generation to the next.

Indeed, so ‘common sense’ does this argument seem that it is hard to find many across the political spectrum, or indeed members of the public, who would disagree with its prima facie conclusions.

And yet, there is a dark side to this argument. We have become so used to the idea of ‘competition’ in society, for jobs, income, prestige and so on, that we have accepted the inevitable idea of ‘winners’ and ‘losers’, with all its accompanying morally freighted baggage. To be a winner is a ‘good thing’. But to be a ‘loser’ is to have one’s efforts consigned to some kind of social dustbin in which one has ‘failed’. If you have not achieved what you wanted to, that is your fault, the result of some combination of a lack of ability and effort.

Of course, in a meritocracy, it might be argued that such conclusions are harsh, but fair. If one has ‘won’, one deserves the laurels. If you have not, well, it’s a tough world out there. Next time, work harder, be smarter. And teach the lesson to your children.

If one’s failure is due to the lack of a level playing field, the fact that public school pupils with parents who have connections in the professional world have a far better chance than those from a council estate, well, that only shows how important it is to remove such barriers and push headlong towards a true equality of opportunity.[2]

Again, so common sense does this seem that it is hard, on the surface, to argue against it. There will always be inequality to some degree. Let it be justified on the basis of hard work and ability, the merits of the individual.

But is this really the case? Are ability and effort really, entirely, self-earned merits? If I decide to encourage and extol the virtues of ‘achievement’ in my young children, good marks at school, self-sacrifice, hard work, a pathway towards higher education and a high-income and ‘prestigious’ career, are they not more likely to achieve these goals than children who are not so facilitated and encouraged? It would seem a positive outcome is, in fact, largely a combination of pre-disposed genetics and the environment in which they have, arbitrarily, found themselves.

Equally, my daughter may find herself with fame, fortune, and prestige as a top professional footballer. But, again, this is only because she happens to live in a society that extolls the virtues of playing football well. Had she happened to have found herself by accident of birth in the thirteenth century, her ability to curl a ball accurately with the outside of either foot, whether by genetic predisposition or the effects of her environment, would have stood for little. Even the hard work she puts in to develop these skills is, to a significant extent, a factor of an environment that encourages such effort; one in which not all children are lucky enough to find themselves, regardless of their intrinsic ‘talent’.

To a significant extent therefore, meritocratic success is the result of unearned genetics, environment, and a set of skills, either naturally acquired, or encouraged and taught, that happen to be desirable and rewarded by society at a particular point in time. One might say, a morally arbitrary accident of birth.

Even worse, those who ‘fail’ to achieve ‘success’ are told this is the position they deserve. Had they, in a meritocracy, worked harder and been smarter, they would have done better. It was not always thus. Prior to the turn towards market outcomes as the designator of ‘success’, labour that may not have accrued substantial economic rewards was still considered as valuable and as contributing to the common good. Coal miners and steel workers knew that barriers of class and arbitrary birth hindered their progression towards higher-income professions, but they also knew this was not their fault or due to their own lack of worth and merit. The dignity inherent in their labour, their contribution and value to society, was apparent and they rightly proclaimed it thus. And they were supported in this belief by representative political and labour organisations.

For unrepresented workers today, whose traditional jobs have been overtaken by technology, and who find themselves valued by the market in terms of insecure, minimum wage, zero hours contracts, such dignity and self-respect is harder to assert.

It is this combination of the turn to market outcomes as the arbiter of all value, and an apparent ‘meritocracy’ in which one’s position, for good or ill, is what one ‘deserves’, which is the cause of the anger and resentment in society. This has been exacerbated by the lack of political representation for people who are told the cause of their disaffection is actually their own lack of hard work and ability.

Importantly, these market-governed judgements are completely uninterested in questions of ‘right or wrong’.

Milton Friedman, for example, one of the Nobel Prize-winning architects of free market ideology, accepts that the market should conform to the ‘basic rules of the society, both those embodied in law and those embodied in ethical custom’ (my emphasis).

The question is, of course, how do we decide what is classed as an ‘ethical custom’? Surely, the basis of this is through political discourse and debate. But the intertwined ideologies of the market and meritocracy are like the air we breathe: invisible, and so accepted as facts of everyday life that they require no debate.

The answer to the anger and division in society is not a question of ‘policies’. It is one of understanding how we got to this place and the structural ideologies that paved the way. Fundamentally, there is a need for robust public and political debate about the society we want. Only then can we decide on the policies that will get us there.

The public’s views on public ownership, inequality, and higher taxes, particularly for the top 1% of earners, show that the UK is, broadly, ready for reform and change. Perhaps it’s time to be angry, but together rather than factionally. It’s certainly well past time to engage politically and demand that what we want as a society should not be solely determined by ‘the market’, but by discussion.

Depressingly, this is not a debate that will intrude on today’s elections.


Notes

[1] In the following sections, I draw heavily on M. J. Sandel, The Tyranny of Merit: What’s become of the Common Good? (London, 2020). Other works on a similar theme include: D. Goodhart, Head, Heart, Hand: The Struggle for Dignity and Status in the 21st Century (London, 2020); J. Littler, Against Meritocracy: Culture, Power, and Myths of Mobility (Abingdon, 2018); M. A. P. Bovens & A. C. Wille, Diploma Democracy: The Rise of Political Meritocracy (Oxford, 2017).

[2] What is interesting is how elite children use meritocratic language to obscure their origins. LSE’s new study shows how our fetishisation of meritocracy makes privileged people frame their lives as an uphill struggle: (https://www.theguardian.com/commentisfree/2021/jan/18/why-professional-middle-class-brits-insist-working-class). However, as Friedman and Laurison point out, ‘People from working-class origins do sometimes make it into elite jobs, but it is rare; only about 10% of people from working-class backgrounds (3.3% of people overall) traverse the steepest upward mobility path…Origins, in other words, remain strongly associated with destinations in contemporary Britain.’: (S. Friedman & D. Laurison, The Class Ceiling: Why it Pays to be Privileged (Bristol, 2020), p. 13.)

Value and Values

Dr David Civil is a Research Fellow at the Jubilee Centre in the School of Education at the University of Birmingham. His PhD research explored the concept of meritocracy in post-war Britain’s intellectual politics.

Since their inauguration in 1948, the BBC Reith Lectures have provided historians with an annual window into the intellectual preoccupations of the post-war world: from the impact of quantum and atomic theory in 1953, with Robert Oppenheimer’s Science and the Common Understanding, to questions of racial, gender and national identity in 2016, with Kwame Anthony Appiah’s Mistaken Identities. A potential Reith lecturer searching for a topic to explain the contemporary moment would be spoilt for choice: the Covid-19 pandemic, the climate crisis, rampant economic inequality and social injustice, the rise of populism, big tech, the existential challenge to democracy, and so on. The list, it seems, is endless. On the surface, then, the selection of Mark Carney, the Canadian central banker and former Governor of the Bank of England, feels like an odd choice. Carney has sat at the apex of a financial system assailed on all sides and held responsible, by a wide variety of politicians and commentators as well as large swathes of the public, for creating or exacerbating many of the problems listed above. The title of his series, however, ‘How We Get What We Value’, unites the vast majority of these crises and, in doing so, like all good Reith Lectures, touches on one of the fundamental issues of the post-Cold War age.

At the heart of Carney’s thesis is the idea that financial value has trumped human values as developed nations morph from market economies into societies where the market rules. The free market, Carney claims, has become the organising framework not just for economies, but for broader human relations as its reach extends further into civic spaces and family life. Across a variety of sectors ‘citizens’ have been replaced by ‘service users’, with perilous consequences for our civic sphere. Whether manifested in concerns about the outsourcing of public services to private providers or the growing privatisation of public spaces, the so-called ‘invisible hand’ has come to exercise a visible and forceful grip.

Within these market societies the idea of subjective value is now hegemonic. Whereas in the past thinkers as diverse as Aristotle, Karl Marx and Adam Smith felt the value of a product derived from how that product was produced, neo-classicist economists in the early twentieth century shifted the axis of value theory away from labour and towards the consumer. A product or service was no longer deemed valuable because of the costs that had gone into making or providing it; instead, value was to be decided by whether individual consumers were willing to pay for it. Value was no longer thought to lie in the sweat of the labourer but in the eye of the beholder. In many ways this was a democratic shift: value was to accrue to those who could satisfy millions of individual preferences as reflected in the free marketplace.

It was not, however, without consequences, and Carney identifies a number of risks associated with this rise of subjective value. For example, individuals are not always the rational decision-makers assumed by neo-classicist economic theory and often value the present more than the future. This ‘tragedy of the horizon’ has made solving issues like climate change more difficult. The catastrophic costs of a global issue like the climate crisis are felt beyond the traditional time horizons of most actors – imposing a cost on future generations that the current generation have no direct incentive to fix. As Carney has noted elsewhere, the ‘horizon for monetary policy extends out to two or three years.’ For ‘financial stability it is a bit longer, but typically only to the outer boundaries of the credit cycle – about a decade.’ In other words, once climate change becomes a defining issue for financial stability, it may already be too late. More worrying, Carney claims, is the ‘drift from moral to market sentiments.’ This ‘flattening of values’ corrodes those values which have tended to exist outside of the market (civic virtues, for example) and in the process undercuts the social foundations upon which any economic activity fundamentally relies. In short, anything in our society that is not priced, that is not deemed financially valuable, is not valued. Nowhere is this fact more starkly visible than in the essential work of the care sector where, because the value of care is difficult to measure, pay remains low and conditions poor. Care workers, therefore, remain the victims of a damaging tautological spiral: because their labour has historically been undervalued, they are not paid a lot, and because they are not paid a lot, their labour is not seen as valuable.

The message and the messenger of the 2020 Reith Lectures are emblematic of the growing intellectual consensus in favour of a ‘social reset’. Whether embodied in Prime Minister Boris Johnson’s rather vague slogan of ‘Build Back Better’, the World Economic Forum’s ‘Great Reset’ or the progressive Left’s ‘Green New Deal’, the desire for a fundamental reappraisal of the global economy is shared, admittedly to differing degrees and to varying ends, across the ideological spectrum. While Carney’s lectures serve as a symbol of this particular conjuncture, his concerns are nothing new. The idea of a parasitic free market is a common theme of communist and socialist texts, while warnings closer to Carney’s own are found in a variety of liberal and social democratic positions. Even more surprising, perhaps, is to find traces of Carney’s thesis amongst some of the neoliberal thinkers whose intellectual output in the early-to-mid twentieth century did so much to support, economically and philosophically, the rise of the market in the 1980s.

For example, Friedrich von Hayek, the Austrian economist, philosopher and author of the influential tract The Road to Serfdom, argued in 1960 that

A society in which it was generally presumed that a high income was proof of merit and a low income of the lack of it, in which it was universally believed that position and remuneration corresponded to merit, in which there was no other road to success than the approval of one’s conduct by the majority of one’s fellows, would probably be much more unbearable to the unsuccessful ones than one in which it was frankly recognised that there was no necessary connection between merit and success.[1]

For Hayek, in any free society income should reflect the value of an individual’s goods and services and have nothing to do with merit, virtue or the moral importance of their contribution. In a similar vein Frank Knight, the American anti-New Deal economist and later a colleague of Hayek’s at the University of Chicago, argued in the early 1920s that an individual’s income or market value should not be associated with their social contribution.[2] Serving demand in the market is simply a matter of satisfying the range of tastes and desires people happen to have at a given moment. The ethical significance of satisfying them, however, depends on their moral worth.

Evaluating this worth involves making contested moral judgements which go beyond the discipline of economics. The philosopher Michael Sandel illuminates this distinction by considering the character of Walter White, the teacher, father and drug-dealing kingpin of the Emmy Award-winning drama Breaking Bad. Most viewers would agree that White’s contribution as a teacher far exceeds his contribution as a drug dealer. ‘Even if meth were legal’, Sandel argues, ‘a talented chemist might still make more money producing meth than teaching students.’ But this does not mean that a ‘meth dealer’s contribution is more valuable than a teacher’s.’[3] In a similar vein, few would argue that Captain Sir Tom Moore’s fundraising efforts, which eventually reached £33 million, would have represented less of a contribution had he only met his initial target of £1,000. In this sense the value of his efforts lay in the civic or moral character of his actions rather than in their monetary value.

Context is important here. Hayek, for example, had little influence in 1960, the start of a decade in which technocratic desires to rationally plan economic activity reached fever pitch and the free market remained a marginalised concept. Hayek’s primary concern in distinguishing between merit and value was to secure the legitimacy of free market inequalities. This legitimacy, he claimed, would be tarnished if those at the top were not only rich but also considered morally superior. As Carney’s lectures make clear, however, these warnings went unheeded as price and value became increasingly conflated. Those individuals with high incomes also came to possess greater status, power and, perhaps most damagingly, a presumption of moral superiority. It does not therefore require much of an intellectual leap to see how Hayek’s concerns have played out in the last decade of political destabilisation. Amidst the Covid-19 pandemic, however, it is clear that these market-generated inequalities are suffering a legitimation crisis. In this sense Carney’s intervention has fired the starting pistol on what Mariana Mazzucato has described as a great contested debate about value.[4]

It is clear that this debate cannot be overly reliant on the discipline of economics, a discipline in which the moral questions highlighted by Knight have been subsumed beneath technical exercises in applied rationality. The market appealed to politicians and policymakers precisely because it eschewed these contested judgements, pushing questions of ‘who gets what?’ onto an abstract, impersonal force. There was no longer a contested debate about the morally right or wrong course of action, only a mechanistic discussion about the economic costs or benefits of a particular policy choice. With that mechanism stripped away, the debate that takes its place will be heated. As David Robinson has outlined in this journal, in its worst form it will descend into an irrelevant culture war. Yet a tentative consensus appears to be forming as those across the political spectrum recognise that key workers deserve pay, status and conditions beyond those assigned by the market.

A great debate about values entails radical consequences for the shape of higher education in Britain, a sector which has too often, in the words of David Manning, disengaged ‘from the virtues of scholarship to perform research for market value.’ This is particularly true of disciplines like the Humanities, where value is difficult to measure and demonstrate. Shifting away from crude metrics, however, should not be used as an opportunity to dismantle entirely the mechanisms designed to deliver accountability and ensure fairness. Instead it represents an opportunity for all of us in the Humanities to illuminate the issues, challenge long-standing assumptions and help to construct a new social contract which places human, rather than merely financial, values firmly at the centre.

Bibliography

Hayek, F. A., The Constitution of Liberty (London, 1960).

Knight, F. H., The Ethics of Competition (New Brunswick, NJ, [1923] 1997).

Mazzucato, M., The Value of Everything: Making and Taking in the Global Economy (London, 2017).

Sandel, M. J., The Tyranny of Merit: What’s Become of the Common Good? (London, 2020).

 

Notes

[1] F.A. Hayek, The Constitution of Liberty (London, 1960), p. 98.

[2] F.H. Knight, The Ethics of Competition (New Brunswick, NJ, [1923] 1997), p. 46.

[3] M.J. Sandel, The Tyranny of Merit: What’s Become of the Common Good? (London, 2020), pp. 138-39.

[4] M. Mazzucato, The Value of Everything: Making and Taking in the Global Economy (London, 2017).

The “Woke” Bite Back!

David Robinson takes a light-hearted look at the shifting reputation of the Humanities academic but concludes there are serious reasons for their voices to be heard…

In a pre-pandemic life I was asked at a social event the obligatory question about what I do. I replied that, having been made redundant after 25 years in the business world, I went to university to study undergraduate history. Subsequently, having discovered a love for research and writing, I completed my MA and, recently, received my PhD. “And what will you do with that?”, asked my interlocutor. “Well”, I replied, “I’ve put it in a frame on the wall.”

A flippant response, but I was heading off the inevitable: a request to defend spending seven years achieving something that could not be directly monetised; not, anyway, in the sense that my previous life as a Commercial Director could be leveraged for financial gain. Of course, I could secure a position as a university academic, but in a climate where even entry-level positions go to module convenors with several years’ experience and a slew of books and articles, that is unlikely.

Hold your sympathy, though. I have my dream job! I was recently offered the opportunity to edit a journal I co-founded three years ago: the one you are presently reading, The MHR.

“Ah!”, exclaimed my dinner-party host, “I expect that pays quite well.” “Precisely nothing!”, I replied. “Oh! Well, that’s the problem”, they responded, “there’s not much one can do with those humanities degrees.” We both shook our heads knowingly and they wandered off to find someone with a proper job to talk to.

This is an experience to which I have become accustomed. Perhaps you have too? Why study History? Or, even worse, Art History or Philosophy? American Studies? What can you really do with these degrees? What do they even mean?

When my teenage daughter suggested she, too, might be interested in studying history, having enjoyed our visit to the National Gallery in London and our subsequent lunch discussion of the ways in which gender roles have been assigned in art, someone quietly reminded her that “what your Dad does isn’t really history.” Quite. She might be advised to “do a proper subject.”

Such disdain does not seem justified. In fact, quite the reverse. In the twentieth century, thirteen of the nineteen British Prime Ministers awarded a degree were humanities graduates. More recently, P.P.E. (Philosophy, Politics, and Economics) has become the humanities degree of choice for an astonishing proportion of those who rule over us and those who explain how we are ruled, whether senior politicians or prominent political journalists. And what about those who implement policy, the civil service? Andrew Greenway, a former senior civil servant who writes regularly for Civil Service World, argues that P.P.E. is not necessarily the golden ticket to the top of the political and administrative elite. In his list of post-war Prime Ministers and Cabinet Secretaries, only three studied P.P.E. Still, between the P.M. who, as Greenway puts it, ‘chooses the route’ and the Cabinet Secretary who ‘drives the car’, thirteen of the eighteen he lists with a degree studied history or a mix of classics and philosophy. The outliers were a few lawyers, economists, and a chemist.

It seems, then, that we might complicate the debate on the relevance of a humanities degree. An education that provides little apparent value for the likes of thee and me appears to be an almost ubiquitous preparation for a career at the highest levels of public life.

Let’s unpack this a little further. Back in the nineteenth century, studying classics at Balliol College, Oxford, under the college’s Master, Benjamin Jowett, was de rigueur for a career as a senior administrator in British India. Young men were trained, Jowett claimed, ‘by cold baths, cricket, and the history of Greece and Rome.’[1] The British did not simply take their management of India from classical Greek and Roman precedent; they were the new Greeks and Romans. A classical education was not merely a useful preparation for colonial administration, it was central to justifying ‘the historical experience of overseas domination.’[2]

Generations of British schoolchildren have been taught British history as a discrete list of the actions of, mostly, white male elites, often described by gentlemen amateurs and retired statesmen who regularly wrote about the very policies they themselves had designed and implemented. As Churchill noted, ‘history will be kind to me, for I intend to write it.’

The most influential historian of his generation, Thomas Babington Macaulay, wrote his seminal 1848 work almost entirely as a justification of the cultural, economic and political authority of the English middle classes. This was ‘the smug message of Macaulay’s History of England.’[3] Nor was he informing only his own generation. As he wrote to a friend, ‘I have tried to do something that will be remembered; I have had the year 2000, and even the year 3000, often in my mind.’[4] Macaulay would probably have been reasonably satisfied with his efforts. These are histories of a glorious nation that are, at best, incomplete and de-contextualised and, at worst, a carefully crafted narrative of British (English?) exceptionalism designed to justify and lionise tyrannical imperialism and global domination.

Is this a fair and balanced assessment? Of course not. For a start, Churchill never made such a statement. He did, however, say ‘for my part, I consider that it will be found much better by all Parties to leave the past to history, especially as I propose to write that history myself.’[5] A damning indictment? Well, that’s the problem with woke lefty historians: no sense of humour. Churchill was probably just joshing. A bit.

The more serious point is that, as David Ludden puts it, ‘the veracity of statements about reality is not at issue so much as their epistemological authority, their power to organize understandings of the world.’[6] More simply, the study of human affairs is not so much about what happened and when, although events and chronology are important, but about how and why the past is and has been interpreted differently.

So, back to my central question. Why are the exponents of the academic humanities, once respected for their knowledge and trusted to pass on their understanding of Britain’s and, more broadly, the ‘West’s’ contribution to concepts of ‘progress’ and ‘civilisation’, now castigated as ‘typical of the open-toed, sandal wearing, beardy geography teachers at the heart of all the problems in modern society’?[7]

For two key reasons: on the one hand, because they present an existential challenge to many Britons’ understanding of themselves and their nation’s place in the world, past, present and future; on the other, because they potentially strike down a central appeal of politicians to the voting public, namely their right to rule based on defending that same understanding.

Most academics today contend that there is a strong prima facie case that the interpretation of the humanities for educational and public consumption has tended to be selective, aimed at presenting a particular view of the past that leans towards a certain British exceptionalism and national superiority. In recent decades, scholars of the humanities have taken to analysing and deconstructing the comfortable and self-congratulatory picture of the past that was taught for more than a century.

Are they right? Can we have that discussion? Not a very sensible one when government ministers trivialise the issues with populist headlines such as ‘We will save Britain’s statues from the woke militants who want to censor our past’.

Let me close with an example of just how misleading such headlines can be: the 2020 removal of the statue of Edward Colston in Bristol, supposedly the paradigmatic example of cancel culture, imposed by woke militants intent on erasing our history.

The reality, I contend, is almost exactly the opposite of what has been popularly proposed. The argument has been made that Colston, an acknowledged beneficiary of colonial oppression and slave trading, may have profited from activities considered unacceptable today, but that when his statue was erected in 1895, such practices were less proscribed. To remove his statue, so the argument runs, is to impose a modern moral standard not subscribed to at the time and is thus a distortion of history: the classic example of ‘cancel culture’.

In fact, the removal of Colston’s statue had little to do with historical debate and more to do with the frustrations of local protesters. Their legitimately gained democratic mandate, to have a plaque attached offering more context on Colston’s slave-trading activities, had been continually blocked. It is also demonstrably the case that slave trading was, largely, as unacceptable in 1895 as it is today.

That, however, is not my point. The proposal to erect Colston’s statue back in 1895 was a local political response to the growing protests of Bristolian workers objecting to poor rates of pay and working conditions. Political activists, influenced by the political tracts of Karl Marx, argued in public speeches that those workers were as much victims of their merchant masters as the slaves and the colonised who had been such a source of enrichment for the commercial elites of Bristol. These were arguments that gained some traction with the voting public of the city. Such concerns and the potential for unrest, common to many British cities, prompted the local businessman James Arrowsmith to try to raise a statue to Colston, a well-known Bristolian philanthropist, by public subscription. Arrowsmith’s strategy was to counter criticism of Bristol’s colonial merchants through a demonstration of public support for the civic benefits brought to the city by those engaged in colonial trade. In this, he largely failed. Although some public funds were raised, the statue was eventually erected mostly at his own cost.

The background to the Colston statue, then, is not one of ubiquitous popular support for a Bristolian merchant philanthropist, but a fascinating insight into nineteenth-century ‘open class warfare’ and public support for ‘the formation of a “labour party” to represent working people’. Which aspects of history are being erased by substituting trivialising accusations of woke cancel culture for a mature debate on this subject? In many ways, Arrowsmith’s nineteenth-century tactics are being replicated by twenty-first-century politicians.

The past will always be contested. If you are reading this, you are probably engaged in that process to some extent or another. Practitioners of the academic humanities are, perhaps, not naturally suited to confrontation. But our voices are not just important; they are key. As Orwell pointed out, thinking critically about the past is an essential part of the present and, by extension, the future.

The MHR aims to be a part of that debate, through the voices of our contributors. We look forward to hearing from you.

Notes

[1] P. Woodruff [i.e. P. Mason], The Men who Ruled India: The Founders (London, 1953), p. 15.

[2] E. W. Said, Culture and Imperialism (New York, 1993), p. 114.

[3] F. Bédarida, A Social History of England, 1851-1975 (Paris, 1976), p. 49.

[4] T. Pinney, The Letters of Thomas Babington Macaulay in four volumes (Cambridge, 2008), p. 216.

[5] Speech in the House of Commons, Hansard, Volume 446 (23 January 1948), Column 557. https://hansard.parliament.uk/Commons/1948-01-23/debates/b9704861-e9ed-40d9-ab92-4477a26e25f5/CommonsChamber.

[6] D. Ludden, ‘Orientalist Empiricism: Transformations of Colonial Knowledge’, in C. A. Breckenridge & P. van de Veer (eds.), Orientalism and the Postcolonial Predicament: Perspectives on South Asia (Philadelphia, PA., 1993), p. 250.

[7] As said to me at the same dinner party described above. Ok, it’s a great line!