The RAE and REF: Resources and Critiques

I am writing this piece during what looks like the final phase of the USS strike involving academics from pre-1992 UK universities. A good deal of solidarity has been generated through the course of the dispute, with many academics manning picket lines together, discovering common purpose and shared concerns, and often noting how the structures and even physical spaces of modern higher education discourage such interactions during ordinary working life. Furthermore, many of us have interacted regularly using Twitter, enabling the sharing of experiences, perspectives, vital data (not least concerning the assumptions and calculations employed for the USS future pensions model), and much else about modern academic life. As noted by George Letsas in the Times Higher Education Supplement (THES), Becky Gardiner in The Guardian, Nicole Kobie in Wired, and various others, the strike and other associated industrial action have come to embody a wider range of frustrations amongst UK-based academics over and above the issue of pensions: to do with casualisation and marketisation in academia, the growth of bloated layers of management and the dehumanising treatment of academics, the precarious conditions facing early career researchers (ECRs), widespread bullying, and systemic discrimination against female academics, those from minority groups, and so on. Not least amongst these frustrations are those concerning the various metrics employed to judge ‘performance’ under the government’s Research Excellence Framework (REF, formerly the Research Assessment Exercise (RAE)) and the new Teaching Excellence Framework (TEF).

In this blog post, I will outline a short history of the RAE/REF with relevant links, and collect together recent comments about it and suggestions for alternatives. For the most part (with a few exceptions), I will attempt to outline the arguments of others (including my own as expressed online) on either side, rather than try to unpack and critique them – this blog post is undoubtedly a ‘survey text’ in the sense often dismissed by REF assessors, though hopefully it will serve some useful purpose nonetheless! In an academic spirit, I would welcome all comments, however critical (so long as they are focused on the issues and not personalised towards any people mentioned), and will happily correct anything found to be erroneous, add extra links, and so on. Anyone wishing to make suggestions in these respects should either post in the comments section below, or e-mail me at the address given at the top of this page.

One of the most important pieces of sustained writing on the RAE and REF is Derek Sayer, Rank Hypocrisies: The Insult of the REF (London: Sage, 2014), a highly critical book which carefully presents a large amount of information on its history. I draw extensively upon it in this blog post, as well as upon the articles by Bence and Oppenheim and by Jump on the evolution of the REF, listed below. A range of primary documents can be found online, provided by the Higher Education Funding Council for England (HEFCE) and its counterparts in the rest of the UK, on RAE 1992, RAE 1996, RAE 2001, RAE 2008, and REF 2014. These are essential resources for all scholars investigating the subject, though they obviously represent the perspectives of those administering the system. Equally important are Lord Nicholas Stern’s 2016 review of the REF, and the 2017 key policy decisions on REF 2021, made following consultation.

There are many other journalistic and scholarly articles on the REF and its predecessors. Amongst the most important of these would be the following:

Michael Shattock, UGC and the Management of British Universities (Buckingham: Society for Research into Higher Education & Open University Press, 1994).

Valerie Bence and Charles Oppenheim, ‘The Evolution of the UK’s Research Assessment Exercise: Publications, Performance and Perceptions‘, Journal of Educational Administration and History 37/2 (2005), pp. 137-55.

Donald Gillies, ‘How Should Research be Organised? An Alternative to the UK Research Assessment Exercise’, in Leemon McHenry (ed.), Science and the Pursuit of Wisdom: Studies in the Thought of Nicholas Maxwell (Heusenstamm: Ontos Verlag, 2009), pp. 147-68.

Zoë Corbyn, ‘It’s evolution, not revolution for REF’, THES, 24 September 2009.

John F. Allen, ‘Opinion: Research and how to promote it in a university’, Future Medicinal Chemistry 2/1 (2009).

Jonathan Adams and Karen Gurney, ‘Funding selectivity, concentration and excellence – how good is the UK’s research?’, Higher Education Policy Institute, 25 March 2010.

Ben R. Martin, ‘The Research Excellence Framework and the ‘impact agenda’: are we creating a Frankenstein monster?’, Research Evaluation 20/3 (1 September 2011), pp. 247-54.

Dorothy Bishop, ‘An Alternative to REF 2014?’, Bishopblog, 26 January 2013.

University and College Union, ‘The Research Excellence Framework (REF): UCU Survey Report’, October 2013.

Paul Jump, ‘Evolution of the REF’, Times Higher Education Supplement (THES), 17 October 2013.

Peter Scott, ‘Why research assessment is out of control‘, The Guardian, 4 November 2013.

John F. Allen, ‘Research Assessment and REF’ (2014).

Teresa Penfield, Matthew J. Baker, Rosa Scoble and Michael C. Wykes, ‘Assessment, evaluations, and definitions of research impact: A review’, Research Evaluation 23/1 (January 2014), pp. 21-32.

Derek Sayer, ‘Problems with Peer Review for the REF’, Council for the Defence of British Universities, 21 November 2014.

‘Telling stories’, Nature 518/7538 (11 February 2015).

Paul Jump, ‘Can the research excellence framework run on metrics?’, THES, 18 June 2015.

HEFCE (chaired by James Wilsdon), ‘The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management’, 8 July 2015.

James Wilsdon, ‘The metric tide: an agenda for responsible indicators in research’, The Guardian, 9 July 2015.

Paul Jump, ‘Is the REF worth a quarter of a billion pounds?’, THES, 14 July 2015.

J.R. Shackleton and Philip Booth, ‘Abolishing the Research Excellence Framework’, Institute of Economic Affairs, 23 July 2015.

James Wilsdon, ‘In defence of the Research Excellence Framework’, The Guardian, 27 July 2015.

Alex Jones and Andrew Kemp, ‘Why is so much research dodgy? Blame the Research Excellence Framework’, The Guardian, 17 October 2016.

James C. Conroy and Richard Smith, ‘The Ethics of Research Excellence’, Journal of Philosophy of Education 51/4 (2017), pp. 693-708.

 

 

A Short History of the RAE and REF to 2014

There were six rounds of the RAE, in 1986, 1989, 1992, 1996, 2001 and 2008, with the gaps between each becoming progressively larger. The REF has run just once to date, in 2014, with the next round scheduled for 2021.

The first ‘research selectivity exercise’ in 1986 was administered by the University Grants Committee (UGC), an organisation created after the end of World War One. As noted by Bence and Oppenheim, there was a longer history of the development of Performance Indicators (PIs) in higher education through various metrics, but definitions were unclear, so this exercise was viewed as an attempt to convert other indicators into a clear PI, which it was thought would add efficiency and accountability to university funding through a competitive process, in line with other aspects of the Thatcher government’s policies.

The 1986 exercise involved just the traditional universities, and only influenced a small proportion of funding. It consisted of a four-part questionnaire on research income, expenditure, planning priorities and output. Assessment was divided between roughly 70 subject categories known as Units of Assessment (UoAs). There were wide criticisms of the 1986 exercise, to do with differing standards between subjects, unclear assessment criteria, a lack of transparency about the assessors, and the absence of an appeals mechanism. As such it was much criticised by academics, and reformed for 1989, when ‘informed peer review’ was introduced for assessment, following wide consultation. That year, a grading system from 1 to 5 was also introduced, based upon national and international criteria; 152 UoAs were used; sub-committees were expanded; and details of two publications were required for each member of staff submitted, as well as information on research students, external income and plans. The exercise was used to allocate a greater proportion of funding. There were still many criticisms, to do with the system favouring large departments, the lack of clear verification of the accuracy of submissions, and late planning causing difficulties for institutions preparing their submission strategies.

Other important changes affecting higher education took place during this early period of the RAE, including the abolition of tenure by the Thatcher government in 1988, and then the 1992 Further and Higher Education Act, which abolished the university/polytechnic distinction, so that the latter institutions could apply for university status and then be included in the RAE. The Act also established four funding councils for England, Scotland, Wales and Northern Ireland to replace the UGC, and provided for research funding to be allocated entirely on a selective basis, replacing previous systems of funding based upon student numbers. There had been no formula funding for research in polytechnics, so the new system radically altered the balance, allowing them to compete openly with the more traditional institutions for such funding.

RAE 1992 then brought major new changes: institutions were able to select which ‘research active’ staff to put forward, a longer timescale was allowed for research in the arts and humanities, auditing processes were improved, and assessment was reduced to 72 UoAs. 192 institutions participated, covering over 43,000 full-time equivalent researchers. Practically all university research funding from this point was determined by the exercise, based upon a quality rating, the number of research-active staff, the amount of research income and some consideration of future planned activity. Departments which were given an assessment of 1 or 2 would not receive any funding. The result was that the older universities received 91% of the available funding, new (post-1992) universities 7% and colleges 2%. 67% of departments were ranked 1, 2 or 3. This led to objections that the system was biased in favour of the older and larger universities, which had supplied many of the panelists for certain UoAs. Some results were challenged in court, and a judge noted a need for greater transparency.

Changes for RAE 1996 involved the submission of four publications for selected research-active staff, and stiffer requirements on the cut-off date by which outputs had to be placed in the public domain. Rating 3 was divided into 3a and 3b, and an extra 5* rating introduced, while each panel was required to make clear its criteria for assessment. There were 60 subject panels, with chairs appointed by the funding councils on the basis of recommendations from previous chairs, and other panel members selected on the basis of nominations from various learned societies or subject associations. These considered 69 UoAs on the basis of peer review. This was also the first RAE which allowed performance submissions for musicians (see below), a possibility encompassed in the following definition of ‘research’ provided by the funding councils:

‘Research’ for the purpose of the RAE is to be understood as original investigation undertaken in order to gain knowledge and understanding. It includes work of direct relevance to the needs of commerce and industry, as well as to the public and voluntary sectors; scholarship*; the invention and generation of ideas, images, performances and artefacts including design, where these lead to new or substantially improved insights; and the use of existing knowledge in experimental development to produce new or substantially improved materials, devices, products and processes, including design and construction. It excludes routine testing and analysis of materials, components and processes, eg for the maintenance of national standards, as distinct from the development of new analytical techniques.

* Scholarship embraces a spectrum of activities including the development of teaching material; the latter is excluded from the RAE.

One of the major problems encountered had to do with academics moving to other institutions just before the final census date, so that those institutions could submit their outputs, alongside early concerns about the power vested in managers to declare members of staff ‘research-inactive’ and not submit them. Furthermore, it was found that outcomes were biased towards departments with members on assessment panels. Once again, no funding was granted to departments graded 1 or 2. This time, however, 43% of departments were ranked 4, 5 or 5*, a rise of 10% since 1992.

The changes for RAE 2001 included panels consulting a number of non-UK-based experts in their field to review work which had already been assigned top grades. Sub-panels were created, but there were also five large ‘Umbrella Groups’: Medical and Biological Sciences; Physical Sciences and Engineering; Social Sciences; Area Studies and Languages; and Humanities and Arts. New measures also made allowance for early career researchers, those on career breaks, and other individual circumstances, and a new category was created for staff who had transferred between institutions, who could be submitted by both, though only the institution they had moved to would receive the resulting research funding. Expanded feedback was provided, and electronic publications permitted, though different UoAs employed different criteria regarding the significance of place of publication and peer review. 65% of departments were now ranked 4, 5 or 5*. 55% of staff in 5 and 5* departments were submitted, compared to 23% in 1992 and 31% in 1996.

The Roberts review of 2002 expressed concern about how the whole exercise could be undermined by ‘game-playing’, as institutions were learning to do. Furthermore, there were concerns about the administration costs of the system. A process was set in place, announced by Gordon Brown, to replace the existing RAE (after the 2008 exercise) with a simpler metrics-based system. As detailed at length in Sayer, despite major consultations involving many important parts of the UK academic establishment, an initial report and proposals of this type were quickly changed to a two-track model of metrics and peer review, then the whole plan was almost completely abandoned.

RAE 2008 itself involved fewer major changes. Amongst these were a revised set of assessment criteria, especially as they affected applied, practice-based and interdisciplinary research, and a two-tiered panel structure, with sub-panels undertaking the detailed assessment and making recommendations to main panels, which took broader decisions and produced a ‘quality profile’ for each department in place of the older seven-point system. Individual outputs were now given one of five possible rankings:

4*: Quality that is world-leading in terms of originality, significance and rigour
3*: Quality that is internationally excellent in terms of originality, significance and rigour but which nonetheless falls short of the highest standards of excellence
2*: Quality that is recognised internationally in terms of originality, significance and rigour
1*: Quality that is recognised nationally in terms of originality, significance and rigour
Unclassified: Quality that falls below the standard of nationally recognised work, or work which does not meet the published definition of research for the purposes of this assessment

By 2008-9 (before the results of RAE 2008 took effect) about 90% of funding went to just 38 universities, but from 2009, 48 institutions shared this amount. As Adams and Gurney have noted, the weighting of the 2008 exercise meant that the difference between obtaining 2* and 3* was greater than that between 3* and 4*, or between the previous 4-to-5 or 5-to-5* rankings. 54% of 2008 submissions were ranked either 3* or 4*, and 87% were ranked 2*, 3* or 4*.

The plans for post-2008 exercises were finally published in September 2009 by HEFCE, indicating a new name, the REF, but otherwise the system was much less different from those which preceded it than had been assumed. The ranking was now to be based upon three components: ‘output quality’ at 60%, ‘impact’ at 25%, and ‘environment’ at 15% (later revised to 65%, 20% and 15% respectively). Outputs were to be assessed as before, though for the sciences, citation data would inform various panels. ‘Environment’ was assessed on the basis of research income, number of postgraduate research students, and completion rates. But the most significant new measure was ‘impact’, reflecting the desire of the then Business Secretary, Lord Mandelson, for universities to become more responsive to students (viewed as customers) and to industry. Impact was defined as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’. Each department was to submit a general statement on ‘impact’ as a whole, and could submit between 2 and 7 impact ‘case studies’, depending upon the number of research-active staff submitted to the REF. This was a huge shift; moreover, eligible impact was restricted to that which could be observed during the cycle between exercises, and which derived from research produced when the academic in question was already at the submitting institution.
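Expressed as a simple weighted sum (a minimal sketch using the revised weightings just mentioned, rather than HEFCE’s own notation), a submission’s overall quality profile combines the three component profiles roughly as follows:

$$\text{Overall} = 0.65 \times \text{Outputs} + 0.20 \times \text{Impact} + 0.15 \times \text{Environment}$$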

Other changes included a major reduction in the number of UoAs and sub-panels, to 36, and just four main assessment panels. A single sub-panel would now assess outputs, environment and impact for a given UoA. However, the same number of experts was involved as before.

Since REF 2014, the Stern Report has informed significant changes to the system, in part intended to reduce the potential for gaming. Following further consultations, it has been announced that a minimum of one output and a maximum of seven from each member of a department will be submitted. Further measures have been introduced to ensure that most short-form text-based submissions must be ‘open access’, available freely to all, which generates its own set of issues. Further plans for REF 2028 indicate that this will also apply to long-form submissions such as monographs; the situation for creative practice outputs currently appears not to have changed, but this may be modified. HEFCE was abolished at the end of March 2018, and replaced in England by the new Office for Students (OfS) and Research England, the actions of which remain to be seen.

The RAE and REF have caused huge amounts of resentment and anger amongst academics, and produced sweeping changes to the nature of academic work as a whole. Sir Peter Swinnerton-Dyer, architect of the first RAE (interviewed in Jump, ‘Evolution of the REF’), argued against many of the subsequent developments. With every reform to the system, institutions put greater pressure on individuals, especially those in junior positions, leading to some of the awful cases of chronic stress, mental illness and bullying which have been detailed recently on social media. Many report that REF submissions constitute the only research valued by their institutions. A Head of Department (HoD) or other REF supervisor who achieved a high REF scoring could expect to win favour and further promotion from their management; in practice, this often meant the cajoling and bullying of already-overworked staff, with threats and intimidation about whether they would keep their jobs, and little favour or support shown to those who might not produce the right number of 3* or 4* outputs. Those dealing with mental health issues, or trying to balance impossible teaching and administrative workloads (all fuelled by the Mandelsonian idea of the student-as-consumer) and research demands with major care commitments for children or the elderly, were often driven to breakdowns or to quit academia; some cases of this are documented below. Academics ceased, in the eyes of many managements, to be human beings towards whom they had a duty of care as employers, and became merely potential cash cows, to be dispensed with if there was any pause in this function.

Gaming of the system continued in many forms from RAE 1992 onwards. Many institutions would award 0.2 FTE or short-term contracts in the run-up to the RAE/REF, so that they could profit from particular individuals’ outputs (not least those of ECRs who might have a monograph and were desperate for any employment record on their CVs). All of this could mean that rankings were unrepresentative of the research carried on by the majority of a department’s full-time, permanent staff. Research projects taking more than 6-7 years were greatly disadvantaged, or at least those embarked upon them would still, in many institutions, have to produce four other world-leading outputs during a RAE/REF cycle, sometimes simply in order to retain a position at all. Callous HoDs or other REF managers could dismiss work which had occupied academics for years (whilst they maintained hefty teaching and administration workloads) as merely 2*, on the grounds of its being ‘journalistic’ (often meaning simply that it was relatively readable), a ‘survey text’ (if it drew upon a wide range of existing scholarly literature), or the like, often with crushing impacts on the academics concerned.

The period of the RAE’s history saw other sweeping changes to Higher Education in the UK. Between 1963 and 1970, the number of young people attending university doubled following the Robbins Report, but then remained essentially static until the late 1980s, after which the participation rate rose from 17% in 1987 to 33% in 1997 (see Ann-Marie Bathmaker, ‘The Expansion of Higher Education: Consideration of Control, Funding and Quality’, in Steve Bartlett and Diana Burton (eds), Education Studies: Essential Issues (London: Sage, 2003), pp. 169-89). Since then participation has continued to rise, reaching a peak of 49% in 2011. This was an unrepresentative year, the last before the introduction of trebled tuition fees, which gave students a disincentive to take a gap year; there followed a concomitant dip of 6% (to 43%) in 2012, then a further rise to 49% in 2015, exceeding the pre-2011 peak of 46% and thus confounding (at least to date) those who predicted that increased fees would lead to decreased participation.

Sayer points out that there are few equivalents to the REF elsewhere in the world, and none in North America or Europe; few countries have sought to emulate this system. Some of those cited below argue that most of the known alternatives (including those which preceded the introduction of the RAE) may be worse; others (including myself) cannot accept that this is the ‘best of all possible worlds’. I would further maintain that the human cost of the REF should be regarded as not merely unacceptable but illegal, and that only a zero-tolerance policy, with criminal charges if necessary (even for the most senior members of management), could stop this. Dignity at work is as important in this context as in any other, and little of it is currently on display in UK academia.

 

Creative Practice and Non-Text-Based Outputs

An issue of especial relevance to those engaged in performing-arts-based academic disciplines such as music, theatre or dance (and in many cases also creative and other forms of writing, the visual arts, and so on) is that of outputs submitted to the REF in the form of creative practice. By this I mean specifically outputs in the form of practice (i.e. practice-as-research), as opposed to those simply documenting or critically analysing one’s own or others’ practice. I have previously blogged extensively on this subject, following the publication of a widely read article by John Croft (‘Composition is not Research’, Tempo 69/272 (April 2015), pp. 6-11) and replies from me (‘Composition and Performance can be, and often have been, Research’, Tempo 70/275 (January 2016), pp. 60-70) and from Camden Reeves (‘Composition, Research and Pseudo-Science: A Response to John Croft’, Tempo 70/275 (January 2016), pp. 50-59), and a subsequent public debate on the subject. Amongst the issues raised, some of them familiar from wider debates on practice-as-research which are referenced in my own article, were the following: whether creative practice on its own can stand as research without requiring additional written documentation (not least the now-familiar 300-word statements, which are in effect deemed essential by the REF, as I argue in response to a claim made by Miguel Mera in that debate); whether creative work which most resembles ‘science’ is regarded as more ‘research-like’, an implicit claim unpacked by Reeves (as one colleague put it to me, ‘if it has wires going into it, it’s more like research’), with all this implies in terms of (gendered) views of STEM versus the humanities; and whether certain types of output are privileged for being more ‘text-like’ than others (scores versus recordings, for example), so that some practitioners are at an advantage compared to others (here I give some figures on the relative proportions of composers and performers in different types of music departments). Attitudes to the latter vary hugely between institutions: at least one Russell Group department was happy to award a chair to a performer whose research output consists almost exclusively of performances and recordings, mostly as part of groups, while at others, especially those without strong representation of the performing arts amongst their managements, such outputs are hardly valued at all and are unlikely to be submitted to the REF, or to win promotion for those who produce them.

Another issue is that of parity between creative practice outputs and other types. Many creative practitioners will never have had to submit their work to anything like peer review in the manner familiar from articles and monographs, and questions arise as to, for example, what number or type of compositions, recordings, visual art works or dance performances should be viewed as equivalent to the production of a monograph when assessing promotion and the like. In music departments in which half or more of the faculty is made up of practitioners (usually composers), many may have limited experience of peer review, or for that matter of wider academic debates and discourses, and some might argue that they are able to get ahead in their professions with considerably less time and effort than their equivalents who produce more traditional outputs. This is, I believe, a very real problem, which then maps onto questions of the significantly different requirements for producing different types of creative practice output, and it needs serious consideration if there is to be any semblance of fairness within such academic departments.

Sayer also notes how many works in the humanities gain impact over an extended period of time, giving those of Walter Benjamin, Michel Foucault and Benedict Anderson as examples; such works can remain intensely relevant and widely cited long after publication, in distinction to a science-based model of cumulative and rapidly-advancing knowledge, in which the passage of time leads to some outputs being viewed as outdated.

Recent Commentary

Over the last few days, various academics have been commenting on the REF, mostly on Twitter. I attempt to collect the most important of these here.

One of the first important threads came from geographer Julia Cupples (@juliecupples79). In this thread, she called the fundamental status of REF classifications ‘ludicrous’, argued how problematic it would be to direct research exclusively towards the REF and elite British academics, called the demands of ‘originality’ for a single publication ‘masculinist and colonial’, and argued that female authors and those from ethnic minorities are at a disadvantage, not least because they are less likely to be cited. The ranking of junior colleagues by senior ones was labelled ‘one of the most toxic mechanisms in place in the neoliberal academy’, one which makes a mockery of most other means of achieving equality, so that the REF works against attempts to ‘dismantle discrimination, build collegiality, prevent academic bullying, and decolonize our campuses’. This thread was widely retweeted and praised, inducing others to share similar stories, with Cupples responding that the REF is ‘a means to discipline, humiliate and produce anxiety’. Not all agreed, with Germanist Michael Gratzke (@prof_gratzke) arguing that the peer review element for arts and humanities was a good thing, and that as the scheme would not disappear, one needed to deal with it reasonably. More respondents were sympathetic, however. Urban Studies Professor Hendrik Wagenaar (@spiritofwilson) cited the REF as a cause of ‘the demeaning command-and-control management style that has infected UK universities, and the creation of the soulless apparatchiks that rise up through the ranks to take every ounce of pleasure out of research and writing’, and argued that it prevents ‘a climate of psychological safety, trust, mutual respect, and togetherness; a place where it is safe to take risks’. Molly Dragiewicz (@MollyDragiewicz) asked whether metrification fetishises ‘engagement’, though a different view was taken by Spanish musicologist and novelist Eva Moreda Rodriguez (@TheDrRodriguez), in response to some queries of my own to Cupples. Cupples had said that it would be ‘deeply problematic if we started writing for REF and a panel of elite British academics rather than for our research communities’, to which I asked about the definition of a ‘research community’ and why such communities should be exempt from external scrutiny and issues of parity with other (sub-)disciplines, also pointing out that both the Chicago School of Economics and some groups of racial theorists would have fitted this category. Cupples maintained that such communities were not groups of academics, but Moreda asked in return how ‘we avoid academic work being judged on the basis of whether it reinforces & confirms the basic tenets & prejudices of said research community?’, as well as whether such community engagement was already covered through impact assessment.

Around the same time, drama lecturer Kate Beswick (@ElfinKate) blogged on ‘REF: We need to push back against a system that has lost its way’. Whilst accepting the need for assessment of academic research, she noted how layers of bureaucracy were created to game the system, the growth of internal practice REFs, the pressure to produce outputs simply to satisfy the REF rather than for any other value, and the new pressures which will follow implementation of open access policies. This, argued Beswick, would force scholars to find ‘REF compliant’ publishers, which would compromise academic objectivity, rigour, reach and international credibility. However, she did not suggest any alternative system.

However, the first major thread in defence of the REF came from historian David Andress (@ProfDaveAndress). Andress argued that the RAE/REF had enabled quality research funding to go to post-1992 institutions, that every alternative had worse biases, and that the distributive mechanism was so wide that it could almost be called ‘a relic of socialism’, concluding with the confident claim that ‘If you get rid of it, you will definitely get something worse’. This was sure to produce many responses. Clinical psychologist Richard Bentall (@RichardBentall), who was a panelist in 2008 and 2014, argued that the process was ‘conducted with absolute fairness and integrity’, but that the problem lay with its interpretation by universities (a point which many others would also evoke in other threads). Bentall noted how his own former institution issued an edict telling researchers not to publish 2* papers, which constitute 80% of world science, so that the REF ‘has become an end in itself’. I myself responded that many places have concluded that research is of no value unless beneficial to the REF, also raising the question (about which I am most definitely in two minds) as to whether we need to accept that some institutions should be focused on teaching rather than research, rather than all scrambling over a sum of government money which is unlikely to increase. Some subsequent interactions have, however, made me rethink this. I also noted how some assessors have little knowledge of anything beyond their own narrow and underdeveloped fields, which are nonetheless felt to require representation on panels, noted (as would many others) how a similar process is not used in many other countries, and was sceptical about any ‘better than any conceivable alternative’ argument. Andress responded that he was not saying that, but that better alternatives which can be conceived cannot easily be put into effect, and also that, in light of the expansion of the sector, ‘RAE/REF is on the positive side of the ledger’ and should not simply be dismissed. In a series of tweets, I also questioned whether all aspects of the expansion had been positive, given the lack of corresponding increases in the level of secondary education, and suggested that applying the Oxbridge/Russell Group model to institutions with very different types of student body can have a net levelling effect; from this I argued that the REF was part of a process which pretended there were not major differences between institutions, and that it causes huge pressures for academics at institutions where the teaching demands are higher and students are less inclined towards independent study. These are highly contentious arguments, I realise, which I want to throw out for consideration rather than defend to the last.

Moreda also responded to Andress, taking a middle position. In a thread, she acknowledged the potential for management to use the REF to bully academics, and the inordinate resources it consumes, but noted that it had enabled her to gain an academic position in the UK, which would otherwise have been very difficult without an Oxbridge pedigree, given her foreign accent, her limited teaching experience at that point, and so on. However, she did temper this by noting that her ability to produce REFable publications relied upon her being ‘able-bodied and without caring duties’, and that a continued discourse was required in order to consider how to accommodate others.

I asked REF defenders whether REF panellists ever read more than a few pages of a monograph, given the time available, or listened carefully to audible outputs (rather than simply reading the 300-word statements, which can act as spin). Moreda responded by framing the issues as whether the REF or an equivalent can ever be free of corruption, and whether such a system needs to exist at all. She was ambivalent about both questions, but also disliked the implied view of some REF opponents that ‘research shouldn’t be subjected to scrutiny or accountability’. Whilst agreeing on this latter point, I argued that the REF does not really account for parity between disciplines and sub-disciplines, some of which involve vastly different amounts of time and effort (especially where archival work or fieldwork is involved) to produce an equivalent output. I proposed that no output should receive 3* or 4* where its author ignores relevant literature in other languages, and that the standards of some journals should be scrutinised more closely. Moreda essentially agreed with the need for wider factors to be taken into account, whilst (in somewhat rantish tone!) I continued that examiners needed a wide range of expertise across multiple sub-disciplines, and asked, with respect to historical work like hers and mine (I work on music in Nazi and post-war Germany, she works on music in Franco’s Spain and amongst Spanish exiles), how many assessors would know if we were making up or distorting the content of the sources. Knowing of a time when there was a leading REF assessor who could not read music, I asked how they could judge many music-related outputs; both Moreda and I agreed there could be merit in using non-UK examiners, while I also suggested that a department should be removed from the REF when one of its own faculty members is on a panel, because of the potential for corruption.

Theatre and Performance/Early Modern scholar Andy Kesson (@andykesson) posted a harrowing thread relating to his early career experiences of the 2014 REF, for which his outputs were a monograph and an edited collection. In the lead-up, he was informed that these were ‘”slim pickings” for an ECR submission’, and pushed to get them out early and develop other publications. This came at a time when Kesson’s father died and he was forced to witness his mother in the late stages of a long-term fatal illness. Whilst deeply upset by these experiences, Kesson tried to explain that he would struggle to fulfil these additional publication demands, and was told this work was non-negotiable. After the death of his mother, her own father also became extremely ill, and Kesson was forced to do his work sitting next to his hospital bed. When offered a new job, his previous institution threatened legal action over his ‘slim’ REF submission, leading to a dispute lasting two years. Many were upset to read about the callousness of Kesson’s former institution. Social identity scholar Heather Froehlich (@heatherfro) responded that ‘academics are the most resilient people on earth, who are willing to endure so much yet still believe in their absolute singular importance – only to be told “no, you are wrong” in every aspect of their professional lives’. However, one dissenting voice here and elsewhere was that of Exeter Dean and English Professor Andrew McRae (@McRaeAndrew), who cited Wilsdon’s defence of the REF mentioned earlier, argued that no QR money would ever be given without state oversight, and asked whether a better model than the REF existed. Engineering Professor Tanvir Hussain (@tanvir_h) argued that the problem lay with Kesson’s institution’s interpretation of REF rules rather than with the rules themselves, a theme which others have taken up, concerning how the ambiguities of the REF are used as a weapon for favouritism, bullying and the like.

Geographer Tom Slater (@tomslater42), having read many of the worst stories about people’s experiences with the REF, called out those who serve on panels, making the following claims:

A) you are not being collegial
B) you are appallingly arrogant if you think you can offer an evaluation of the work of an entire sub-discipline *that has already been through peer review*
C) you are not doing it because somebody has to
D) you are not showing “leadership”
E) you are contributing to a gargantuan exercise in bringing UK academia into international disrepute
F) you are making academia an even more crappy place for women, minorities, critical thinkers, and great teachers
G) if you all stood down, HEFCE would have a massive problem

Various people agreed, including in the context of internal pre-REF assessments. Another geographer, Emma Fraser (@Statiscape), suggested simply giving any REF submission a 4*, a suggestion which Slater and sociologist Mel Bartley (@melb4886) endorsed, and which was made elsewhere by novelist and creative writing lecturer Jenn Ashworth (@jennashworth). Linguistics scholar Liz Morrish (@lizmorrish) was another to focus on the behaviour of individual institutions, maintaining that ‘the [REF] was NEVER intended to be an individual ranking of research. It was intended to give a national picture and be granular only as far as UoA. What you are being asked to do is just HR horning in on another occasion for punishment’. Slater himself also added that ubiquitous terms such as ‘REFable’ or ‘REF returnable’ should be abandoned.

Paul Noordhof (@paulnoordhof) asked in this context ‘Suppose there were no REF, or equivalent, linked to research performance. What would stop the University sector achieving efficiency savings by allowing staff numbers to reduce over time and doubling teaching loads? Especially for some subjects’, but Slater responded that collective action from academics (as opposed to the more common action supporting and promoting the REF) would stop this. Slater also responded directly to McRae’s earlier post, including the statement ‘Careful what you wish for’, by arguing that ‘most would wish for a well funded sector where we don’t have to justify our existence via an imposed, reductive, compromised, artificial assessment system that destroys morale. Careful what you lie down for’.

Italian social scientist Giulia Piccolino (@Juliet_p83), responding to my retweeting of Slater’s original thread, called herself ‘the last defender of the REF’, which she felt to be ‘a bad system but the least bad system I can imagine’, a similar position to that of Andress. In response, I suggested that a better system might involve the submission of no more than two outputs from any department, allowing much more time to be spent on peer review. Piccolino noted that in other countries where she had worked, appointments depended simply on one’s PhD supervisor (a point she also made in response to Cupples), and that scholars stop researching after receiving a permanent job (but still try to control junior figures) (something I have observed in some UK institutions); she therefore argued that while the REF could be improved and humanised, it seemed a brake on arbitrary power as encountered elsewhere. Piccolino returned elsewhere to her theme of how the transparency and accountability of the REF were an improvement on more corruptible systems, with which many UK academics were unfamiliar.

The debates with McRae continued after his response to Cupples, in which he called the REF ‘an easy target’, suggested that its demise would leave academics reliant on grants (a view endorsed wholeheartedly by Piccolino), claimed that many would prefer to replace peer review with metrics, and argued that impact produced some important activity. Legal academic Catherine Jenkins (@CathyJenkins101) asked if things were so bad before the introduction of the RAE in 1986, to which McRae responded that he did not work in the UK then, but had seen the problems of an Australian system, ‘a low-achievement environment’ in which many had not published for years, and where publications did not help a younger academic get a job. Modern Languages scholar Claire Launchbury (@launchburycla) argued that the modern Australian system (despite, not because of, its own ‘Excellence in Research for Australia’ (ERA) system for research evaluation) was practically unrecognisable in these terms. In response to a query from Marketing lecturer Alexander Gunz (@AlexanderGunz) relating to the lack of a REF equivalent in North America, McRae responded that that system was radically different, lacking much central funding, but that ‘state institutions are vulnerable to the whims of their respective govts, so in that respect greater visibility/measurability of performance might help’. Cupples herself responded to McRae that ‘The vast majority of universities in the world have no REF (and neither did British universities not so long ago) and yet research gets done and good work gets published’. Historical sociologist Eric R. Lybeck (@EricRoyalLybeck), a specialist in universities, echoed the view of Swinnerton-Dyer in hearkening back to the ‘light touch’ of the first RAE, which ‘would be an improvement’, and also argued against open access, saying this ‘distorts and changes academic practices’.

Film lecturer Becca Harrison (@BeccaEHarrison) posted her first REF thread, detailing her disillusionment with UK academia as a result of the system, noting that she was told when interviewing for her first post-PhD job that her research ‘had to be world leading’ (4*) in order to get an entry-level job, and feeling that even this might amount to nothing because ‘there are 100 ECRs with 4* work who need my job’. This led her to support calls to boycott preparations for the REF as part of continuing industrial action. Another thread detailed common objections to the REF; then, in a third thread, Harrison detailed her experiences of depression and anxiety attacks during her PhD, leading to hair loss and stress-induced finger blisters which made it impossible to type, as well as early experiences of a poorly-paid teaching fellowship combined with a non-HE job to pay the bills, working 18-hour days in order to produce a monograph and apply endlessly for jobs. In her first full-time job, Harrison encountered bullying, misogyny from students, a massive workload and obsessiveness about the production of 4* outputs. This did not lead to a permanent contract, but a new job offer came with huge requirements just for grade 6/7. She rightly said ‘please, people implementing REF, people on hiring committees, please know that this is what you’re doing to us – and that when we’ve done all this and the system calls us ‘junior’ and treats us like we don’t know what we’re doing we will get annoyed’.

Some further questions were raised by several people about the new rules on open access, for example from Politics scholar Sherrill Stroschein (@sstroschein2), who argued that this would ‘just make book writers produce best work outside of REF’. But this important debate was somewhat separate from the wider question of the value of the REF, and what system might best replace it, which I decided to raise more directly in a new thread. There was a range of responses: musicologist Mark Berry (@boulezian) argued for a move away from a model based upon the natural sciences, and claimed that ‘Huge, collaborative grants encourage institutional corruption: “full economic costing”‘, while Moreda alluded to an article from 2017 about the possibility of a ‘basic research income’ model, whereby everyone would have a certain amount allocated each year for research, so long as they could demonstrate a reasonable plan for spending it (David Matthews, ‘Is “universal basic income” a better option than research grants?’, THES, 10 October 2017; though engineer David Birch responded that this would ultimately lead to another system similar to the REF). She saw that this would be insufficient for most STEM research and some in the humanities, but this could then be supplemented by competitive funding, as is already the case. Berry made a similar point to Moreda, also noting how much money would be saved on administration, whilst Cupples also agreed, as did sociologist Sarah Burton (@DrFloraPoste). Sums of up to around £10K per year were suggested; Burton also added that larger competitive grants should be assigned on a rotating basis, so that those who have held one would be prevented from holding another for some years, to create openings for postgraduate researchers (PGRs) and ECRs. I responded that this might exacerbate a problem already prevalent, whereby time-heavy species of research (involving archives, languages, old manuscripts, etc.) would be deterred because of the time and costs involved; Burton agreed that ‘slow scholarship’ is penalised, especially ethnographic work (a point also made by archaeologist Rachel Pope (@preshitorian), comparing time-intensive archaeological work with ‘opinion pieces’ judged as of similar merit), while Moreda suggested that some ‘sliding scale’ might be applied depending on whether research involves archives and the like, though she acknowledged this could result in ‘perverse incentives’.

I also noted that one consequence of Burton’s model would be a decline in the number of research-only academics, but that it would be no bad thing for all to have to do some UG core teaching (with which Cupples agreed). Burton’s response was ambivalent, as some are simply ‘not cut out for teaching in a classroom’, though I suggested similar problems can afflict those required to disseminate research through conferences and papers, to which Burton suggested we also need to value and codify teaching-only tracks for some. Moreda was unsure about the proposal to restrict consecutive grants, especially for collaborative projects, though also suggested that such a model might free up more money for competitive grants. Noting earlier allegations of careerism, etc., Berry argued that one should not second-guess motivations, but there should be space for those who are not careerists, and that it would be helpful for funds to assist with language or analytical skills or other important things.

I asked who might have figures for (i) the number of FTE positions in UK academia at present (to which question I have since found the figures of 138,405 staff on full-time academic contracts and 68,465 on part-time academic contracts in 2016-17); (ii) current government spending on research distributed via the REF (the figure for 2015-16 was £1.6 billion); and (iii) the administrative costs of the REF (for which a HEFCE report gives a figure of £246 million for REF 2014). This latter figure is estimated to represent roughly 2.4% of a total £10.2 billion expenditure on research by UK funding bodies until REF 2021, and is almost four times that spent on RAE 2008. Nonetheless, its removal would not make a significant difference to available research funds. If one considers the ‘basic research income’ model (in the crudest possible form) relative to these figures, an annual expenditure of £1.6 billion would provide £10K per year for 160,000 full-time academics, which would cover a very large percentage of the total workforce if the part-time academics are assumed to average 0.5 FTE contracts.
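As a rough back-of-the-envelope check (a sketch only, assuming as above that part-time staff average 0.5 FTE, and using the figures just cited):

$$\text{total FTE} \approx 138{,}405 + (0.5 \times 68{,}465) \approx 172{,}600; \qquad \frac{\pounds 1.6\ \text{billion}}{\pounds 10\text{K}} = 160{,}000 \approx 93\%\ \text{of that total}$$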

An arts and humanities scholar who goes by the name of ‘The Underground Academic’ (@Itisallacademic) (hereafter TUA) felt the basic income model would remove the need to apply for unnecessarily large grants, and also expressed a personal dislike for collaborative projects, a view which runs contrary to orthodox wisdom, but which was backed by Moreda and Berry. I agreed, and also questioned the ‘fetishisation of interdisciplinary work’. TUA responded with a pointer to Jerry A. Jacobs, In Defense of Disciplines: Interdisciplinarity and Specialization in the Research University (Chicago: University of Chicago Press, 2013), a sustained scholarly critique of interdisciplinarity, so often assumed to be an unquestionable virtue. Burton also asked that employers and funders value book-based research more, and expressed frustration that her own work on social theory is deemed ‘easy’, to which I added an allusion to a common situation whereby reading-intensive work, often involving careful critical investigation of hundreds of books, can be dismissed as a mere ‘survey text’.

There was a range of other responses. Cupples argued that the New Zealand system, the Performance-Based Research Fund (PBRF), whilst imperfect, was ‘a thousand times better than the REF’; Cupples and Eric Pawson authored ‘Giving an account of oneself: The PBRF and the neoliberal university’, New Zealand Geographer 68/1 (April 2012), pp. 14-23. Amongst the key differences Cupples outlined were individual submissions, the crafting of one’s own narrative, one’s own choice of the most suitable panel, one’s own choice of nominated outputs, individual feedback on how one did (not available to others), and greater support from departments.

Piccolino returned to her earlier questions about the potential for corruption in non-REF-based academic cultures, and asked ‘which system guarantees that people are hired for being committed, dedicated researchers vs being friends, friends of friends, products of elite institutions etc?’. Following Cupples’ mention of the PBRF, Piccolino also mentioned the Italian abilitazione nazionale, which provides criteria for appointment as associate and full professors, but she suggested it was of little effect compared to patronage and the need for compliant researchers. This system was, according to Piccolino, closer to the REF than the German Habilitation. She also drew attention to a scathing article on corruption in Italian academia (Filippomaria Pontani, ‘Come funziona il reclutamento nelle università’ [‘How recruitment works in universities’], Il Post, 11 October 2016).

Social scientist Gurminder K. Bhambra (@GKBhambra) pointed out the intensification of each iteration of the REF, with the current post-Stern version more individualised and pernicious than before. Medievalist James T. Palmer (@j_t_palmer) argued that REF is not the primary means of distributing research funding, because the majority is distributed through competition, though the REF may determine university funding in general (a profound observation whose implications need wider exploration).

Medieval and early modern historian Jo Edge (@DrJoEdge) asked why, in a REF context, peer-reviewed book chapters are seen as inferior to journal articles, to which Andress replied that (a) some believe book peer review is less rigorous, as chapters are pre-selected and reviewed collectively; (b) chapters have less impact, since they are less easy to find through the usual search engines (a point which Burton said she had also heard); and (c) old-style elitist prejudice plays a part.

A sardonic exchange proceeded between three musicians or musicologists: composer Christopher Fox (@fantasticdrfox, himself a REF 2014 panelist), Berry, and me. Fox felt that ‘the current UK research model is counterproductive in the arts’ and that ‘Competition is a useless principle around which to organise our work’. I asked what it would mean to rank the work of leading late-twentieth-century composers such as Pierre Boulez and Jean Barraqué, Luciano Berio and Luigi Nono, Brian Ferneyhough and Robin Holloway, or the playing of pianists Aloys Kontarsky and David Tudor, or clarinettists Harry Sparnaay and Armand Angster, as 3* or 4*, especially if non-musicians were involved in the process. Fox also referenced US composers Terry Riley and Pauline Oliveros, and asked how one could fix criteria which account for the disparities in their aesthetic intentions, while Berry pointed out that Anton von Webern (almost all of whose works are short in duration) would ‘never have been able to “sustain his invention over a longer time-span”‘, alluding to a common criterion for composition. Conversely, I asked whether Erik Satie’s Vexations (which consists of two lines of music repeated 840 times), or the music of La Monte Young (much of it very extended in duration), should ‘have been regarded as streets ahead of most others, if submitted to REF?’, in response to which musicologist and French music expert Caroline Potter (@carolinefrmus), author of several books on Satie, alluded to an upcoming ‘REF-related satire’ which ‘seems like the only sane way to deal with the business’. I asked whether all of this contributed to a ‘renewed, and far from necessarily positive, concept of the “university composer” (or “university performer”)’ (terms which have often been viewed negatively, especially in the United States), when academia is one of the few sources of income. Fox felt that this culture encouraged ‘the production of compositions that only have significance within academia’. I also raised the question of whether academics looked down on books which could be read by a wider audience, which Berry argued stemmed from envy on the part of those with poor writing skills.

Independently, cultural historian Catherine Oakley (@cat_oakley) echoed the views of Kesson and Harrison, as regards the impact of REF upon ECRs, who need ‘monograph + peer-reviewed articles’ to get a permanent job, yet start out after their PhDs in ‘precarious teaching posts with little or no paid research time’.

Elsewhere, industrial relations expert Jo Grady (@DrJoGrady) advocated boycott of preparations for the REF and TEF. In a series of responses, some asked how this could be done, especially when individuals are asked to submit their own outputs for internal evaluation. Further questions ensued as to whether this might lead to some of the worst (non-striking) academics undertaking the assessment.

Sayer himself (@coastsofbohemia) also contributed to these Twitter exchanges. In a first thread, he alluded to a passage from his book: ‘In a dim and distant past that is not entirely imaginary (and still survives for the shrinking minority of faculty members in N America) research was something that academics undertook as a regular part of their job, like teaching … Universities … expected their staff to publish … and academics expected universities to give them sufficient time to pursue their research … There was no *specific* funding for time for research but … the salary was meant to support and remunerate a staff member’s research as well as his or her teaching … [whereas today] Because the only govt support for universities’ “research infrastructure … and pathbreaking research …” comes through QR funding and QR funding is tied to RAE/REF rankings, any research that scores below a 3* necessarily appears as unfunded. The accomplishment of the RAE/REF … is to have made research *accountable* in the literal sense of turning it into a possible object of monetary calculation. This makes the REF a disciplinary technology in Foucault’s sense … which works above all through the self-policing that is produced by the knowledge that one’s activities are the subject of constant oversight. Both inputs (including, crucially, academics’ time) and outputs (as evaluated by REF panels and monetized by the QR funding formula) can now be *costed.* The corollary is that activities that do not generate revenues, whether in the form of research grants or QR income, may not count in the university’s eyes as research at all.’ In response to a question from me about his feelings on the argument that RAE/REF had helped post-1992 institutions, Sayer argued that there were other alternatives to no funding or REF-based funding, alluding to some of the suggestions in his article on peer review listed earlier. In a further thread, he summarised these arguments: the relative merits of peer review vs. metrics were ‘not the issue’. Sayer asserted that ‘Peer review measures conformity to disciplinary expectations and bibliometrics measure how much a given output has registered on other academics’ horizons’, and that neither of these is a reliable basis for 65% of REF ranking. Instead, he suggested that more weight should be allocated to research environment and resources, research income, conference participation, journal or series editing, professional associations, numbers of research students, public seminars and lectures, all of which are measurable.

Literature and aesthetics scholar Josh Robinson (@JshRbnsn) joined the discussions towards the end of this flurry of activity. Coming into one thread, he noted that internal mock-REF assessments meant ‘that the judgements of powerful colleagues with respect to the relative merits of their own & others scholarship can never be held to account’, since individual scores are not returned to departments, also arguing that this would be exacerbated in REF 2021. In response to McRae, Robinson added his name to those advocating a basic research income, which McRae said would technically be possible, but in practice ‘would redistribute tens of millions per year from RG to post-92 unis. Try that on your VC!’. Robinson’s response was to quote McRae’s tweet and say that ‘the manager at a Russell Group institution shows what he’s actually afraid of’. But when Robinson added that what his VC ‘would be afraid of would be a generally good thing’, McRae suggested that this might simply lead VCs to make redundancies. Robinson pointed out that an allocation by FTE researcher would provide an incentive to hire more people with time for research. Robinson has indicated that he might be able to make available a recent paper he gave on the REF, which I would gladly post here.

But Morrish, responding to McRae’s claim that the REF is ‘the price we pay, as a mechanism of accountability’, retorted that ‘the price we pay’ is ‘a) Evidence of mounting stress, sickness and disenchantment among academics REF-audit related; b) Ridiculous and career-limiting expectations of ECRs’.

A few other relevant writings have appeared recently. Socio-Technical Innovation Professor Mark Reed (@profmarkreed) and social scientist Jenn Chubb (@JennChubb) blogged on 22 March calling on academics to ‘Interrogate your reasons for engaging in impact, and whatever they are, let them be YOUR reasons’, referencing a paper published the previous week, ‘The politics of research impact: academic perceptions of the implications for research funding, motivation and quality’, British Politics (2018), pp. 1-17. Key problems identified included choosing research questions in the belief they would generate impact, increased conflicts of interest with beneficiaries who co-fund or support research, the necessity of broadening focus, leading to ‘shallow research’, and more widely the phenomenon of ‘motivational crowding’, by which extrinsic motivations crowd out researchers’ intrinsic ones, and a sense that impact constitutes further marketisation of HE. Chubb and Richard Watermeyer published an article around this time on ‘Evaluating ‘impact’ in the UK’s Research Excellence Framework (REF): liminality, looseness and new modalities of scholarly distinction’, Studies in Higher Education (2018), though I have not yet had a chance to read this. Historian Tim Hitchcock (@TimHitchcock) also detailed his experiences of the RAE/REF from the late 1980s onwards, first at North London Polytechnic. Hitchcock argues that:

I have always believed that the RAE was introduced under Thatcher as a way of disciplining the ‘old’ universities, and that the 1992 inclusion of the ‘new’ universities, was a part of the same strategy.  It worked.  Everyone substantially raised their game in the 1990s – or at least became more focussed on research and publication.

Hitchcock goes on to detail his experiences following a move to the University of Hertfordshire after RAE 1996. He notes how hierarchies of position (between Lecturer, Senior Lecturer, Reader, Professor) became more important than ever, and how recruitment was increasingly guided by potential RAE submissions. However, Hitchcock became more disillusioned when he took a position at the University of Sussex after REF 2014, and saw how the system felt ‘more a threat than a promise’ in such places, where REF strategy was centrally planned. He notes how ‘The bureaucracy, the games playing and the constantly changing requirements of each new RAE/REF, served a series of British governments as a means of manipulating the university system’, how the system was increasingly rigged in favour of ‘old’ universities, and how it made life increasingly difficult for ECRs, who had to navigate ever-bigger hurdles simply in order to secure a permanent position. Hitchcock concludes that:

Higher education feels ever more akin to a factory for the reproduction of class and ethnic privilege – the pathways from exclusion to success ever more narrowly policed. Ironically it is not the ‘neo-liberal’ university that is the problem; but the ‘neo-liberal’ university dedicated to reproducing an inherited hierarchy of privileged access that uses managerialism and rigged competition to reproduce inequality.

He does not write off the potential of the REF to change this, and appears to see the particular ways it is administered and used (and viewed by some in ‘old’ universities) as the problem.

There is more to say about the Thatcherite roots of the RAE, about Thatcher’s disdain for the ‘old’ universities (especially after her alma mater, the University of Oxford, refused in 1985 to award her an honorary doctorate), and about what the 1992 act meant in terms of a new vocational emphasis for higher education in general; I may return to these in a subsequent blog post.

It is very clear that the majority of Academic Twitter are deeply critical or bitterly resentful of the RAE/REF, and most believe reform to be necessary. Editorial director of the THES, Phil Baty (@Phil_Baty), offered up a poll asking whether people thought the REF and RAE had been positive or negative; the results were 22% positive and 78% negative (further comments, mostly making similar points to the above, followed). The arguments pro and contra, as they have emerged over the weekend, can be summarised as follows:

Pro: provides some transparent external scrutiny and accountability; enables funding for post-1992 institutions; enables some to find work who would be unable to do so in systems dominated by patronage; is a better model than any other yet discovered; employs peer review rather than metrics.

Contra: invests too much power in managers; creates a bullying and intimidatory atmosphere at work through REF preparation mechanisms; makes the job market even more forbidding for ECRs; highly bureaucratic; very costly; dominates all research; time-consuming; discriminatory; sexist; colonialist; makes few allowances for those with mental health issues, caring, family, or other external commitments; uncollegial; employs assessors working outside their area of expertise; uses too many UK academics as assessors; marginalises 2* work and book chapters; fetishises collaborative or interdisciplinary work; falsely erases distinctions between institutions; relies on subjective views of assessors; artificially bolsters certain types of creative practice; is not employed in almost any other developed country; employs mechanisms more appropriate to STEM subjects than to the arts, humanities and social sciences; has increased pressure on academics with every iteration; causes huge stress and sickness amongst academics.

The Stern review has not been enough, and there is no reason to believe that those making the final decisions have much interest in the welfare of lecturers, or for that matter in the creation of the best type of research culture. Major reform, or perhaps a wholly new system, is needed, and the government, the OfS and Research England should all listen to the views expressed above. New employment laws are also urgently needed to stop the destruction of academics’ lives which regularly occurs as a result of the REF.

8 Comments on “The RAE and REF: Resources and Critiques”

  1. Chris Hewson says:

    “long-term impact (observed after a period of more than 6 or 7 years after the output was first produced) was of no importance for the REF”. Not true, separate research could be cited in case studies going back 20 years.

    Also, research bit of HEFCE have become Research England… student facing bits went to OfS

    Great post though… thanks for putting it together.

    • Ian Pace says:

      Thanks – will make changes. I’ve been informed (perhaps erroneously?) that the next REF will only allow impact case studies where both the original research and the impact fall within the cycle? Would be very interested to know if this is wrong. And was given that impression last time too.

  2. Reblogged this on Julie Cupples and commented:
    Excellent post on the collective thoughts on the REF enacted mostly through Twitter. Essential to our efforts to create a more humane and intellectually stronger university

  3. dereksayer says:

    Reblogged this on coasts of bohemia and commented:
    A very useful summary of recent debates. Thank you!

  4. […] which they think will be considered 4* in the Research Excellence Framework or REF (on this, see this blog from the last strike). This is not remotely feasible for those juggling part-time jobs, travel, […]


  6. […] The RAE and REF: Resources and Critiques. An article written during the period of the 2018 industrial action in academia, collating a wide range of views on these institutions mostly expressed on social media, with wider links to literature on the subject. This contains a small amount relating to practice-research and the REF. […]

  7. […] The RAE and REF: Resources and Critiques (3/4/18) […]

