RISK, REGULATION AND QUALITY MANAGEMENT
Colin Raban
Over the past 20 years, interest in ‘risk’ and its management has exploded, pervading all aspects of social and economic activity and dominating the regulatory literature (Adams 2006; Power 2004). In the UK, the term has appeared with growing frequency in the reports and guidance issued by the Quality Assurance Agency (Raban 2008), and many higher education institutions now claim to have developed their own ‘risk-based’ approaches to quality assurance. Attention to risk and its management has become a defining feature of the ways in which higher education institutions are reviewed and regulated. In the words of Hommel and King (2013), ‘risk-based regulation seems to be the order of the day'.
The 2011 White Paper (Students at the Heart of the System) declared that the Government would introduce a ‘strong’ and ‘genuinely risk-based’ quality assurance regime for England (BIS 2011). This prompted a procession of HEFCE proposals:
In 2012, HEFCE published a consultation paper, A risk-based approach to quality assurance.
Then, in November 2014, it announced that it would be ‘seeking views on future approaches to the quality assessment of higher education’, approaches that must be ‘risk-based and proportionate’ (HEFCE 2014).
Two further consultation documents were published – the first in January 2015 and the second in June of that year (HEFCE 2015a and 2015b).
This was followed (in November) by a report on responses to the consultation (MRUK 2015), a Green Paper (BIS 2015), and the publication in March 2016 of HEFCE’s ‘Revised Operating Model for quality assessment’.
The Government then published a White Paper (BIS 2016) reaffirming its expectation that HEFCE (and later the Office for Students) should apply a 'risk-based approach', both to govern passage through the various higher education ‘gateways’ and as the basis on which the regulator will undertake its regular assessments of quality.
BIS 2016, paragraph 13. Five 'gateways' are identified in the White Paper. The first gives entry to the higher education system by conferring registered status on a provider. The second and third lead to the two forms of approved provider status. The fourth gateway governs the various routes to the acquisition of taught degree awarding powers (TDAP), and the fifth leads to the conferral of research degree awarding powers (RDAP).
In spite of the attention given to the idea of ‘risk-based’ regulation, ‘risk’ itself remains an ‘elusive, contested and inherently controversial’ concept; ‘people are using the same word, but understanding different things by it, and shouting past each other’ (Adams 2006; Power 2004). The idea has been ‘debased by slack usage’ and the notion of risk-based quality assurance is ‘ill-defined and thus capable of conveying a variety of possibly incompatible meanings’ (Raban and Turner 2006). It is not surprising, therefore, that the recent HEFCE consultation found that respondents felt 'a need for further clarification of what (the risk-based) approach should entail’ (MRUK 2015).
This paper seeks to provide that clarification drawing, for this purpose, on the findings of the House of Commons’ Regulatory Reform Committee. Reporting in 2009 and in the wake of the financial crisis, the Committee set out the essential characteristics of an effective risk-based approach (House of Commons 2009a). It considered that such an approach might be valid if inter alia there were to be 'diligence in understanding risk' and an awareness on the part of regulators and the regulated that risk assessments should be subject to ‘appropriate challenge'. In its briefing paper for the Committee, the National Audit Office added that a valid risk-based approach would need to be ‘informed by good evidence’ or, to adopt an expression used elsewhere in the report, 'the right intelligence' (House of Commons 2009b). The Committee’s report concluded that regulators should be willing ‘to be intrusive rather than light-touch when appropriate’ and that they should seek to ‘match the experience and weight of those they regulate’.
Taking my cue from the Regulatory Reform Committee, I shall start by considering what might constitute a ‘diligent’ understanding of risk, and I shall suggest that the notion of ‘risk’ assumes a relationship between harm or loss on the one hand, and some prior action or event on the other. This will prompt me to make a distinction between two approaches to risk-based regulation: one that focuses primarily on the products of that relationship (its outcomes) and another which pays more attention to an organisation’s exposure to, and its ability to manage, adverse (or ‘risk-causing’) events. In reality, of course, a regulator may focus on both, but with varying degrees of emphasis.
Understanding risk
A colleague once described the concept of risk as ‘otiose’, a product of lazy thinking that serves little practical purpose. If that is indeed the case it reflects a failure, particularly on the part of recent White and Green papers, to explain what is meant by the term. Although the charge of 'otiosity' might also apply to the various consultation documents published by the Funding Council, HEFCE did provide a concise and useful definition of risk in a guide to good practice that it produced some fifteen years ago (HEFCE 2001). This described a risk as:
‘the threat or possibility that an action or event will adversely or beneficially affect an organisation’s ability to achieve its objectives’.
The definition of risk
HEFCE's definition has three important implications:
A risk can be positive as well as negative, an opportunity as well as a threat: our higher education institutions deliberately take risks, indeed their very survival is often dependent on doing so.
Notwithstanding the positive aspect of risk, for the remainder of this discussion we shall use the term in its negative sense.
The risk assessor's task is to identify future outcomes, those that ‘will’ (or rather could) ‘affect an organisation’s ability to achieve its objectives’.
Outcomes may be the product of factors located within the organisation or its provision, or the organisation may be more or less exposed to risk-causing actions or events located within its operating environment.
This definition is consistent with the Regulatory Reform Committee’s (2009a) exhortation to ‘focus more on assessing possible future risks’ and to recognise that risks may be ‘systemic’ in origin as well as operating at the ‘individual’ (or provider) level. In at least one respect, HEFCE’s definition is also consistent with the view that had been expressed by the Royal Society in 1983: this viewed ‘risk’ as ‘the probability that a particular adverse event … results from a particular challenge’ (Adams 2001). Both definitions emphasise the need to understand the causal relationship between ‘harm’ or ‘loss’ and the prior ‘challenge’, ‘action’ or ‘event’ from which it results.
Track record and performance
Recent usage has associated ‘risk’ with ‘track record'. However, what is meant by ‘track record’ is often unclear. In the 2011 White Paper the term is used but not explained other than by making repeated use of the ambiguous phrase ‘track record of quality’. Since then, the phrase has been used, sometimes to refer to an institution’s performance with respect to its outcomes or outputs, and on other occasions to its ‘track record in managing quality and standards’. For example, a year after the publication of the White Paper, a HEFCE consultation document (HEFCE 2012) referred specifically to a provider’s ‘track record’ in ‘assuring quality and standards’ as the principal measure of risk. Since then, and until very recently, approaches to quality assessment that focus on institutions’ quality assurance processes have been disparaged in favour of alternatives (including the Teaching Excellence Framework) which focus on outcomes.
The emphasis on an outcomes-focused approach was almost certainly encouraged by a report of the House of Commons' Innovation, Universities, Science and Skills Committee (2009d). It attacked QAA for focusing ‘almost exclusively on processes, not standards’ and concluded that ‘this needs to change’. According to HEFCE (2015b), things did change: there has been, it said, ‘a major shift in quality assessment and assurance activity to focus more on student outcomes than institutional processes’.
It is not surprising, therefore, that in its initial responses to the funding bodies’ consultation paper, QAA described its avowedly ‘risk-based’ approach to institutional review as ‘strongly focused on the track record and student outcomes of each provider’. The Russell Group (2015) followed suit, arguing that any new risk-based regime should reflect institutions’ ‘outcomes and operations’ and their ‘risks and ongoing outputs’. And, in similar vein, the Competition and Markets Authority (2015) recommended that the new regulatory framework should ‘maintain a risk-based approach … making more use of student complaints data and closer scrutiny of providers with high non-completion or dropout rates’.
Roger Brown (2015) has suggested that this focus on outcomes might ‘prove to be a blind alley’. By concentrating on a college’s or university’s performance with respect to student outcomes, a regulator or review agency would only be able to determine whether a risk has been realised, whether ‘harm or loss’ has been incurred to the point, perhaps, that the institution, its programmes or its students are already at risk. This was one of the points that emerged in HEFCE’s January 2015 consultation, with some respondents suggesting that the current system was 'more reactive than proactive' (it was 'unable to identify areas of risk before they became problematic') and that it focused more on 'an organisation's past performance' than on its 'ability to develop in the future' (MRUK 2015).
Performance in the sense of outcomes is measured by what are sometimes called ‘lag’ indicators. For TEQSA, the Australian quality agency, lag indicators include measures of graduate satisfaction, first destinations, student attrition and completion rates (TEQSA 2012 and 2016a). If, however, we are to understand and anticipate risk with a view to preventing its realisation, an institution or a regulator would need to dig deeper and identify potentially ‘risk-causing’, or risk-conducing, events and how they might impact on an organisation’s achievement of its objectives (thereby causing ‘harm’ or ‘loss’). In TEQSA’s approach, this entails an examination of such ‘lead’ (or input) indicators as staff-student ratios, the proportion of staff on casual contracts and a provider’s financial viability and sustainability.
The Australian Tertiary Education Quality and Standards Agency (TEQSA) used the terms ‘lead’ and ‘lag’ in an earlier version of its Regulatory Risk Framework (TEQSA, February 2012). In its most recent version TEQSA has dropped the terms and replaced them with ‘input’ and ‘output’ indicators (Risk Assessment Framework, February 2016). Whilst ‘lag’ indicators are described as providing ‘a view of actual history and a record of the provider’, ‘lead … indicators assist in identifying potential emerging risks through consideration of activity that may cause a risk event’ (TEQSA 2012).
Recognising that the phrase 'track record' might apply to the management of quality and standards (see paragraph 8, above), we might add the effectiveness of a provider’s quality management system to TEQSA’s list of lead indicators.
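As a purely illustrative aside, the ‘lead’/‘lag’ distinction can be expressed as a simple classification. The sketch below (in Python) paraphrases the TEQSA examples cited above, together with the addition just proposed; the indicator names are my own shorthand and carry no official status.

# Illustrative sketch of the 'lead'/'lag' (input/output) distinction.
# The indicator names paraphrase the TEQSA examples discussed in the text
# (plus the suggested addition of quality management effectiveness); they
# are shorthand for this sketch only, not an official taxonomy.
INDICATORS = {
    "graduate_satisfaction": "lag",             # records realised outcomes
    "first_destinations": "lag",
    "student_attrition": "lag",
    "completion_rates": "lag",
    "staff_student_ratio": "lead",              # may signal an emerging risk
    "proportion_of_staff_on_casual_contracts": "lead",
    "financial_viability_and_sustainability": "lead",
    "quality_management_effectiveness": "lead", # the addition proposed above
}

def emerging_risk_signals(indicators):
    """Return only the 'lead' indicators: those that help to identify
    potential emerging risks rather than record confirmed outcomes."""
    return [name for name, kind in indicators.items() if kind == "lead"]

print(emerging_risk_signals(INDICATORS))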
Exposure
We encounter another source of ambiguity when we ask how TEQSA’s lead indicators might assist us in ‘identifying potential emerging risks’. Although a lead indicator could relate to an ‘activity’ that causes a ‘risk event’, it might also describe a feature of the organisation (such as its risk appetite or behaviour) that does not cause the event but makes it more or less vulnerable to some, possibly external, threat. An understanding of the relationship between the risk-causing event and its outcome requires, therefore, some appreciation of an institution’s present and future exposure to such events. It would be ‘exposed’ if it were ‘in danger’, ‘susceptible’, ‘liable’, or ‘vulnerable’ to harm or loss.
This notion of ‘exposure’ acknowledges the potentially complex relationship between the ‘risk-causing event’, a ‘threat’, and an outcome. At its simplest, harm may be inflicted by factors within the organisation or within its operating environment. However, there could be a more complex interaction between potentially damaging events located within and/or outside the organisation and those features of that organisation (strengths or weaknesses, perhaps) that might amplify or mitigate the impact of these events. Applied to an academic department, the potentially risk-causing events might be a consequence of decisions taken (or not taken) elsewhere within the institution as well as developments within the wider political, regulatory, market and economic settings within which both the team and the host institution operate. Whether or not these factors impact on the department’s ‘ability to achieve its objectives’ will depend on qualities of that department that might render it more or less susceptible to the potentially risk-causing event.
The financial sector distinguishes between ‘idiosyncratic’ and ‘systemic’ risk, and Adair Turner advocated ‘a shift in regulatory philosophy’ towards a focus on ‘systemic risks and judgements about business model sustainability, and away from the assumption that all risks can be identified and managed at a firm specific level’ (FSA 2009, The Turner Review: a regulatory response to the banking crisis, p.92). There are (unexplained) references to ‘systemic’ risk in the White Paper, and in a 2013 publication QAA recognised the need to place less emphasis on an institution’s track record and focus more on ‘how and where’ its collaborative activity is being undertaken (Strengthening the quality assurance of UK transnational education).
Some of these ‘qualities’ will be the inherent characteristics of a department, an institution or its provision, and many, if not all, of TEQSA’s lead indicators fall into this category (see para 13 above). However, the relevant qualities will also include the organisation’s ability to manage adverse events and their potential impact on its objectives. This means that the likelihood of a risk being ‘realised’ will depend not only on the organisation’s resilience but also, and crucially, on its competence in the active management of risk. An institution with a high level of risk exposure could be sufficiently competent in the management of risk as to minimise the likelihood of it suffering harm or loss.
Competence
Risk is a matter of potential, or possibility, and a true risk assessment will always have an eye to the future, anticipating detriment rather than merely observing the damage that has occurred already. Whilst a focus on track record may tell us something about the effectiveness of an organisation’s past decisions, we should heed the advice of the Treasury and Financial Services Authority: ‘past performance information on its own is of little indicative value’, it is ‘an uncertain or potentially misleading guide’ (Walker 2009). Any risk-based approach to regulation must, therefore, pay at least as much attention to a provider’s competence in the management of present and future risks as it does to its past performance with respect to its student outcomes. This was one of the messages that emerged from HEFCE’s consultation on its review of the quality assessment framework, with many respondents noting that a risk-based approach would need to take an organisation’s ‘future capacity’ as well as its ‘track record’ into account (MRUK 2015).
‘Track record’ is, as we have seen, an ambiguous concept and it seems likely that the interpretation of the phrase in the MRUK report reflected the usage adopted by QAA in the earlier phases of the consultation (para 11, above). More recently, however, QAA has come to view ‘risk’ as something that should be assessed in terms of an institution’s competence. Thus, in its final response to the Quality Assessment Review, QAA (2015) argued that a risk-based approach should be ‘based on a solid understanding of a provider’s capacity to manage its own quality’ and the Agency’s response to the Green Paper appeared to signal its complete conversion to competence-focused assessment. It acknowledged that its current review method (Higher Education Review) ‘does not provide assurance about (an institution’s) ability to maintain quality and standards in the future’, and it concluded that ‘a truly risk-based approach’ would draw upon the Agency’s earlier audit methodology to enable a judgement of confidence to be made in ‘a provider’s ability to secure quality and standards at (the present) time and in the future’ (QAA 2016). This would, in effect, entail an assessment of an institution's competence in identifying, assessing and managing risk.
'The right intelligence'
‘Diligence in understanding risk’ is just the first step. The Regulatory Reform Committee also advised that the validity and usefulness of any subsequent assessment of risk will depend on whether it is ‘informed by good evidence’ (House of Commons 2009a). What counts as appropriate evidence, ‘the right intelligence’, will itself depend on whether our assessments are going to be focused on symptoms or on causes. In this section I shall argue that if we were intending to focus on an institution’s performance, as measured by student outcomes and other ‘lag’ indicators, we might justifiably rely on quantitative data or ‘metrics’. Conversely, if we were to focus, as we should, on an institution’s risk potential, its competence and exposure, then ‘the right intelligence’ would necessarily draw to a greater extent on evidence of a qualitative nature.
TEQSA is only able to work with a limited range of lead indicators because it is reliant on those which can be measured by metrics. A comprehensive consideration of lead indicators would require the use of more qualitative evidence.
Whilst the costs of collecting and processing the required evidence should be ‘proportionate’ to its quality, and thus to the dependability of the decisions that are based upon it, it is much less obvious that the choice of evidence should be governed by a desire to reduce the burden of data gathering and analysis. It seems, however, that the current preference for a ‘metrics and indicators’ approach is being driven by the demand for a ‘streamlined’ system that would reduce ‘the regulatory and administrative cost and burden’ for institutions (BIS 2015).
Griffiths and Halford (2015) have referred to recent calls for the use of ‘indicators’ in an ‘intelligence-led, risk-based approach’. The 2011 White Paper had proposed that ‘the frequency, and perhaps need, for reviews (should) depend upon a basket of data’ and HEFCE’s 2015 consultation document had ‘raised the spectre of quality oversight driven by the monitoring of data’. Griffiths has himself responded to this proposal by publishing several articles exploring ‘how quantitative data can be used as an indication of risk in a risk-based system of quality assurance’. His objective has been to ‘determine which indicators, if any, could have predicted the outcome of past QAA university reviews’.
Griffiths’ work itself reflects the shortcomings of what passes as ‘intelligence-led’ regulation: it lacks a clear definition of ‘risk’ and one might question whether it employs ‘the right intelligence’. Statistical data are by their nature ‘historic’ – they relate to an institution’s past and current performance and, in a limited number of cases, its attributes. It is not surprising, therefore, that the majority of the indicators in Griffiths’ dataset fall within the ‘output’ or ‘lag’ category: they are measures of the ‘history’ and ‘record’ of a college or university and not, as is the case for ‘input’ or ‘lead’ indicators, a means of identifying ‘potential emerging risks’. And Griffiths’ dependent variable (‘the overall outcome of QAA reviews’) may itself be flawed if one concedes the possibility that QAA reviews do not provide a reliable measure of ‘the effectiveness of a provider’s QA processes’.
The problem lies in the formulaic, procedure-driven character of the HER method – a method that rests on the assumption that compliance with the Expectations of the Quality Code is evidence enough of a provider’s competence (Raban and Cairns 2014). As we have seen, this is a point that the Agency has itself conceded (see para 18).
Metrics and lag indicators would be of little value to a regulator that is seeking to gauge ‘future capacity’. Whilst they have their place (and this applies particularly to the way in which the Revised Operating Model provides for concerns about performance to trigger an ‘Unsatisfactory Quality Investigation’), at least as much attention must be given to ‘lead’ or ‘input’ evidence relating to an institution’s competence and exposure.
This is a point that is made by TEQSA: ‘a combination of input and output/outcome indicators are used, recognising that relying solely on output/outcome indicators would mean a focus on the detection of confirmed failure, but not prevention. A combination of indicators also provides a more holistic view of a provider’s operations noting the limitations of individual indicators’ (TEQSA 2016a, Risk Assessment Framework, Version 2.1, February). Much of that evidence cannot be captured by ‘metrics’, and an assessment of risk potential or competence would necessarily draw on qualitative evidence which itself would need to be subject to expert judgement.
Qualitative evidence and expert judgement are especially important if an assessment is to be capable of ‘anticipating and managing emerging risks’ or, as some have termed it, ‘risk incubation’ (King 2011b, 2014). This, and the challenge of obtaining ‘good evidence’ to support risk-based judgements, has been acknowledged by HEFCE. It is, we are told, ‘mindful of the complexities’, and it recognises that ‘data analysis and dialogue in these circumstances need to be robust, sophisticated and nuanced’:
‘… we are not advocating a crude metrics-driven approach, using data to predict providers that might or might not have received successful outcomes under previous quality assessment approaches. Rather, data is used as one source of information to inform a broader judgement supported where needed by suitably qualified and independent experts’ (HEFCE 2016).
The Funding Council recognises here that any risk-based assessment must employ what James Wilsdon (2015) has termed ‘a variable geometry of expert judgement, quantitative and qualitative indicators’.
Subject to challenge
At the time of writing, the 2016 Higher Education and Research Bill was passing through the House of Lords. In the debate held on the 6th December, Lord Judd reminded the House of Socrates’ dictum: ‘a good decision is based on knowledge and not on numbers’. The sentiment was echoed by Lord Lucas who described the TEF’s gold, silver and bronze awards as ‘a ranking system for turkeys’. He went on to say that ‘the point of data is to produce lots and then let people make up their own minds, given their own particular needs and context’. Both peers were not only expressing their disquiet over an excessive and injudicious reliance on metrics; they were also reminding us of the inherent contestability of evidence of any kind, qualitative as well as quantitative. In the words of the Regulatory Reform Committee, the evidence generated by a regulatory instrument must be ‘subject to appropriate challenge’.
A ‘crude’ metrics-driven approach might rest on the assumption that the facts speak for themselves. They do not, of course, and, by building peer review into each of its elements, one would expect that HEFCE’s Revised Operating Model (ROM) would ensure that ‘the facts’ (data) are translated into information and intelligence through a process of interpretation and challenge. Challenge – or what King (2014) has described as ‘regular, intelligent and increasingly informed conversations’ – will also be important in identifying ‘areas of hidden risk’. These are risks that may be wilfully or unwittingly concealed from the eyes of a regulator or, indeed, from an institution’s own senior management or governing body (House of Commons 2009a).
Under the terms of the ROM, governing bodies will be expected, as a matter of routine, to ‘challenge assurances received from within (their institutions)’; and the purpose of HEFCE’s assurance visits will be to ‘test the basis’ on which a governing body provides its own assurances to the Funding Council (HEFCE 2016). This means that governors, managers and senior academic bodies will be dependent on the willingness of staff to disclose the risks of which they are aware; and those who are charged with the task of managing an institution’s accountability to its external stakeholders must have confidence in the validity and reliability of the intelligence generated by internal quality assurance procedures.
Any organisation would wish to avert situations of the kind described by Paul Moore (the former Head of Group Regulatory Risk at Halifax Bank of Scotland). Testifying to the House of Commons’ Treasury Committee, Moore explained that ‘the one primary cause’ of the financial crisis was ‘a total failure of all key aspects of governance’ (House of Commons 2009c). As a consequence of the ‘degradation of the risk function’ within HBOS and the ‘lack of corporate self-knowledge’, the bank ‘failed adequately to recognise and act upon the principal risks to its business models’ (House of Commons 2013). Moore concluded that ‘openness to challenge is a critical cultural necessity for good risk management and compliance – it is in fact more important than any framework or set of processes’.
There is an important lesson in Moore’s conclusion. Few would contest the proposition that the full and frank disclosure of risk depends on there being a culture within the organisation that accepts the possibility of failure, if not its necessity when working in complex environments (Power 2004). However, Moore encourages us to also recognise that in higher education the intelligent interpretation of risk depends on institutions being true to their vocation by fostering cultures of challenge. This is an aspect of academic governance, including the day-to-day operation of academic committees, that the regulator should not ignore in its assessment of an institution’s competence, and it is an issue to which I shall return towards the end of this paper.
Academic freedom and academic freedom of expression are central to sustaining a ‘culture of challenge’, and one source of evidence for such a culture is to be found in the papers of a provider’s academic committees (Academic Audit Associates 2017, The conduct of academic governance: a discussion note).
Types of ‘risk-based’ approach
This paper started with a comment on the ‘otiose’ quality of the term ‘risk’, its elusiveness and contestability. In an attempt to clarify its meaning, I observed a tendency to measure ‘risk’ in terms of an institution’s ‘track record' or ‘performance’ as reflected by its student outcomes, tempered by a growing recognition of the need to take account of a provider’s ‘competence’ in managing the risks to which it is exposed (see paragraphs 17-18, above). Irrespective of whether the focus is on competence or performance, once a risk has been identified an effective approach to risk-based regulation will necessarily ensure, through a process of ‘appropriate challenge’, that the assessment of the risk is based on the ‘right’ intelligence.
Once it has identified and assessed a risk, a regulator (or, at institutional level and in relation to academic matters, a provider’s senior academic body) must decide how it is going to act. There are three broad options:
it can vary the nature and intensity of scrutiny according to its assessment of the risk to a department, the institution, its students or, indeed, to the sector as a whole;
it could provide targeted support for the units or institutions that are either at risk or are having to contend with significant threats or hazards; and,
it could intervene in the management of a department’s or institution’s affairs in a manner that, in extreme cases, would be comparable to placing a company in receivership or a school in special measures.
We can use these distinctions (between the ways in which risk might be identified and then acted upon) to construct a typology that describes a variety of approaches to ‘risk-based’ regulation. These approaches occupy or straddle a total of nine categories which provide the basis for a chart that we can use to plot the course of the post-2011 debate on the risk-based regulation of higher education institutions (see para 2, above). It should also enable us to identify possible future destinations for regulators as well as institutions.
FOCUS \ ACTION   | Scrutiny | Support | Intervention
Performance      |    A     |    D    |      G
Exposure         |    B     |    E    |      H
Competence       |    C     |    F    |      I
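For readers who find it helpful to see the typology stated operationally, the following is a small illustrative sketch (in Python, and no part of any regulator’s actual apparatus) that encodes the chart as a lookup from a focus and action pair to one of the nine categories:

# Illustrative only: the nine-category typology above expressed as a
# lookup from (focus, action) to a category label. The mapping simply
# transcribes the chart; nothing here is drawn from any regulator's
# actual classification scheme.
CATEGORIES = {
    ("performance", "scrutiny"): "A", ("performance", "support"): "D", ("performance", "intervention"): "G",
    ("exposure", "scrutiny"): "B", ("exposure", "support"): "E", ("exposure", "intervention"): "H",
    ("competence", "scrutiny"): "C", ("competence", "support"): "F", ("competence", "intervention"): "I",
}

def classify(focus, action):
    """Return the category (A-I) occupied by a given regulatory approach."""
    return CATEGORIES[(focus.lower(), action.lower())]

# The 2011 White Paper varied scrutiny according to performance ('track
# record'), placing it in Category A; HEFCE's Revised Operating Model also
# tests competence through scrutiny (Category C).
assert classify("performance", "scrutiny") == "A"
assert classify("competence", "scrutiny") == "C"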
When the Government first announced its intention to introduce a risk-based approach, the proposed quality assessment regime would have been located in Category A. 'Significantly less use of full institutional reviews' was envisaged for those providers that had a 'sustained, demonstrable track record of high-quality provision' and, conversely, there would be ‘more regular and in-depth review' for new providers with a 'shorter track record of quality' (BIS 2011). QAA seemed to occupy the same position when, at the beginning of the debate on the Quality Assessment Framework, its representatives defined a risk-based approach as one that varied the intensity and focus of review on the basis of ‘track record and student outcomes’. The Agency was then joined by the Russell Group (2015) when it proposed that ‘high performing institutions with a consistent track record (of high quality provision) should be subject to less regulatory intrusion and fewer audits’ (see para 11, above).
Five years on, the 2016 White Paper proposed that scrutiny should be varied in accordance with assessments of both performance and competence (Categories A and C). A provider’s passage through each of the Sector’s five gateways will be regulated on the basis of assessments that combine, to varying degrees, the two approaches. A similar approach is proposed for the oversight of ‘established providers’ (BIS 2016).
Some applicants will earn a right to awarding powers on the basis of their past performance or 'track record', although 'high quality providers' might enter the awarding sector 'on the basis of their potential', with the effect that they should be able to offer their own degrees 'while building up a 3 year track record for full DAPs'. Its juxtaposition against 'track record' might suggest that what the White Paper means by 'potential' is close to what we have termed 'competence'. Unfortunately, the meaning of 'potential' is not that clear and the White Paper does not explain how a provider might merit the appellation 'high quality'. Tests of competence are indicated, however, by the White Paper's references to 'rigorous quality controls': the need to achieve approved status and to have met the FSMG (financial sustainability, management and governance) requirements before a DAP applicant can be considered. There is also an assurance that an applicant can only obtain 'full' (that is, indefinite) DAPs once it has demonstrated that it is 'a well-founded, cohesive academic community'. It should be noted, however, that this assurance is immediately preceded by the statement that the review of a provider before it is granted full DAPs will be informed by 'specified and rigorous outcome measures' (that is, measures of the provider's performance and not of its competence). 'Annual data monitoring' will draw upon ten 'key indicators', all but two of which fall into the 'output' or 'lag' category. A provider’s competence would then come into focus if and when the new regulator – the Office for Students (OfS) – detects 'significant shifts' in its monitoring data. Such shifts, where 'annual monitoring gives cause for concern', would prompt an assessment of competence – ‘a more detailed and targeted investigation' in the form of a ‘Quality Review’ (QAA 2016).
We know that this investigation will entail a 'quality review visit' of the kind described by a recently published QAA Handbook (QAA 2016b). The White Paper describes a process which would focus on a range of 'lead' or 'input' indicators, all relating to a provider's competence, potential and 'future capacity'. These would include the provider's curriculum, staffing levels and qualifications, facilities, assessment processes and its arrangements for learner support. The QAA Handbook makes it clear that Quality Review is also a 'gateway process' – a means of testing providers that are seeking entry to the English higher education system 'against the components of the baseline regulatory requirements'.
Category C assessments of competence feature more prominently in HEFCE’s Revised Operating Model (ROM) than in the White Paper. This reflects the way in which governance and the CUC Code (2014) have occupied centre stage throughout the debate on the Quality Assessment Framework with, for example, HEFCE’s consultation document (2015b) proposing to place ‘further reliance … on an institution’s own internal governance’. Indeed, the very design of the ROM is based on the premise that 'as a provider matures' the 'pattern of scrutiny', and thus its mode of engagement with the funding bodies, 'shifts from detailed testing of baseline requirements to testing the effectiveness of a governing body to continue to discharge its responsibilities' (HEFCE 2016).
This focus on a governing body's ‘capability and approach’, its competence, is apparent in several components of the ROM. The purpose of the one-off 'Verification' exercise is to assess the effectiveness of an institution’s arrangements for internal review. The five-yearly 're-focused Assurance Review visit' will then act as a periodic check on 'the basis on which a governing body can provide assurances about the provider's activities' (HEFCE 2016). In this respect, Verification and Assurance Review complement the emphasis that both Annual Provider Review and the Teaching Excellence Framework will place on academic and student outcomes. Thus, in broad conformity with the White Paper’s proposals, the ways in which a college or university will be scrutinised will vary in accordance with assessments of both its competence and its performance.
Turning now from the assessment of risk to its management, Categories D-F would be exemplified by Roger King’s suggestion that ‘regulatory attention’ should be focused on ‘building up the resilience of the higher education sector’, and by the support that the funding bodies can provide for institutions that are in financial difficulty. HEFCE’s ROM suggests that the regulator might offer some support for governing bodies and for new, 'less mature' providers – those which might be deemed to be not-yet-competent in their handling of the responsibilities of an 'approved' provider. Irrespective of whether that support is supplied directly (by the regulator itself) or through some other agency, this aspect of the ROM extends the approach to risk-based regulation into Category F.
In UK higher education there have been some instances of a regulator exercising the third option – intervention (Categories G-I). These include the restructuring of University College Cardiff in the late 1980s, and the Funding Council’s later involvement in the internal affairs of Southampton Institute, Thames Valley University and London Metropolitan University. A recent example would be the action taken by QAA when it found that ‘15 per cent of the recommendations for improvement (in HER reports) related to programmes leading to Pearson awards’. By publishing its own guidance for institutions, the Agency had acted in a way that would normally be reserved for the awarding body itself (QAA 2015a).
For examples, see D Warner and D Palfreyman (2003).
A college or university is suffering a ‘full blown’ crisis when its problems ‘cannot be resolved without external intervention – a deus ex machina in the form of an imposed “company doctor”, a new vice chancellor or a forced merger’ (Scott 2003). In the cases cited in the previous paragraph, the intervention occurred when the institution was already in crisis, so that action was necessary to deal with the consequences of a risk that had been realised rather than to avert a potential crisis by managing the risk itself. Whilst the measures that were taken were of a remedial nature, the Funding Council and the Welsh Office have at their disposal the means to detect ‘incubating’ or ‘incipient’ risks and the potential to take pre-emptive action (Fender 2003). If and when a regulator were to act in this way, its approach would move from Category G to Categories H and I. The likelihood and appropriateness of such action is an issue that I shall tackle in the next section.
An appropriate use of powers
Our typology is based in part on a three-fold distinction between the ways in which either a regulator, or a provider’s senior academic body, might act once it has identified a risk (Para 31). In considering the same issue, the report of the Regulatory Reform Committee concluded that ‘there is scope for regulators … to use their powers of enforcement more effectively’ and to be ‘intrusive rather than light-touch when appropriate’. In this section I shall consider what would constitute an appropriate relationship between a regulator and its regulatees. We are, in effect, addressing the issue of 'proportionality', a term that official documents often place in tandem with ‘risk’.
Rather than using the distinction between ‘intrusive’ and ‘light touch’, the regulatory relationship will be described in terms of a continuum of possibilities, ranging from the autonomy of a self-regulating institution or a self-managing department to the heteronomous status of an organisation or unit that is externally governed. In a regulatory context, and stopping short of these two extremes, Lee Dow and Braithwaite (2013) make a helpful distinction between ‘regulating as object’ and ‘regulating as partner’. In the former case ‘the flow of events (is steered) through prescriptive requirements or through controlling objects’, whilst the latter ‘recognises the expertise, knowledge and commitment of the party being regulated’.
The fact that the move to risk-based approaches has been accompanied by a ‘"deregulatory" and marketisation rhetoric’ (Hommel and King 2013) does not rule out the possibility of intervention. Whilst a lightening of the regulator’s touch might be the preferred position, autonomy must be earned: it should be conditional on an institution demonstrating its competence in the management of its affairs, including the management of the risks to which it is exposed. In Australia, this much has been recognised by the Group of Eight (Gallagher 2010, 2011) and by its UK sister body, the Russell Group. If an institution should prove ‘to be unable to meet (its) responsibility to self-regulate, the regulator (can take) a more interventionist approach to ensure the compliance obligations are met’ (Lee Dow and Braithwaite 2013).
In its recently published annual report, TEQSA states its intention to take regulatory action ‘only where there is no effective alternative way to achieve compliance with the HE Standards Framework’ (TEQSA 2016b). In these circumstances, the relationship between regulator and regulatee turns from ‘regulation as partner’ to ‘regulation as object’.
The report of the Regulatory Reform Committee concluded that, in acting ‘in a proportionate manner’, regulators should seek to ‘match the experience and weight of those they regulate’. This interpretation of proportionality implies that the mode of engagement with an individual provider should be governed by the regulator’s assessment of its competence. The ROM is in this sense competence-focused. It promises action that would be ‘proportionate’ relative to the robustness of an institution’s methodology for internal review and the maturity of its governance arrangements: in other words, its competence in the identification, assessment and management of risk.
This approach is consistent with Michael Power’s (2004) suggestion that 'regulation is likely to be more effective and more acceptable if it works with the grain of private (or in the present context, institutional) control systems'. It also reflects HEFCE’s declared commitment to the ‘core principles’ of ‘co-regulation’ and the need to respect the autonomy of institutions (especially those with their own degree awarding powers) and the sector as a whole (Quality Assessment Review Steering Group 2015).
In a possibly unconscious echo of the principle of ‘earned autonomy’, HEFCE’s January 2015 discussion document emphasised the conditional nature of institutional autonomy: whilst ‘the primary responsibility for the quality of education and standards…lies with each university and college’, the funding bodies must ‘assure themselves…that providers are indeed discharging this responsibility appropriately and well’.
At what point, then, would it be appropriate for a regulator to act? The limits to intervention are likely to vary from sector to sector. For higher education in England, if not in all parts of the United Kingdom, HEFCE's core principles would suggest that it would only be appropriate in extremis for the regulator to intervene in an institution’s affairs. It is for this reason that there are relatively few examples of action on the part of either HEFCE or QAA that would place their approaches to risk-based regulation in Categories G-I in our typology (see paragraphs 38-39, above). It would also explain the perennial complaint that external review methods are inappropriately intrusive, and the funding bodies’ apparent awareness that intervention to assure the comparability of degree standards might be construed as an act of trespass on sector territory (see, for example, HEFCE 2016, paras 140 and 144).
The situation in Wales may be different, given the nature of the relationship between higher education institutions and the Welsh Assembly.
See, for example, Sursock 2002: ‘The recent UK developments have shown the limitations of an approach that was perceived as too intrusive. A quality assurance system that is perceived as creating work instead of creating quality will not yield the anticipated results. It induces compliance and window dressing. (This) compliance serves no one: not the students, not governments, and not the institutions themselves’. See also Raban and Cairns (2015) for a discussion of the tendency for QAA’s recent review methods to demand an institution’s compliance (rather than engagement) with a prescriptive Quality Code, particularly if and when these reviews are conducted in an ‘inspectorial’ manner.
Internal quality management
Risk is not a matter for regulators alone and, on various occasions, this paper has touched on the ways in which institutions might apply the concept to develop their own methods for assuring the standards and enhancing the quality of their provision. It has also been suggested that much of the advice offered by the 2009 Regulatory Reform Committee could be usefully considered by institutions. In this section I observe that the notion of a ‘risk-based’ approach to internal quality management has a modish quality, and I consider the various arguments that are and might be used to justify its adoption. I go on to identify the strengths and shortcomings of the ways in which institutions have acted upon their commitment to a risk-based approach, and I draw upon the earlier discussion to suggest certain design principles that might ensure the effectiveness of such an approach to quality management.
A scan of HER reports reveals that many, perhaps most, institutions have developed their own ‘risk-based’ approaches. In the great majority of cases, some form of risk assessment is used to guide the development, approval and (less commonly) the review of collaborative partnerships and programmes. Rather than focusing narrowly on outcomes (Category A), these risk assessments aim to be predictive, seeking to identify those partner organisations or programmes that might pose future quality, reputational or financial risks and which will require, therefore, closer scrutiny before they can be approved (Category C).
Based on the 45 HER reports published by October 2016: 35 of the 45 universities had developed risk-based approaches.
The similarity between institutions’ approaches to risk-based quality management is striking. For example, many employ what is commonly called a ‘risk assessment tool’ (RAT), listing various attributes from which an institution might deduce the level of risk presented by a potential partnership. The resulting risk assessments are competence-focused, and they have the merit of being easy to complete. However, the approach is formulaic and presumptive. It presumes that certain types of programme or partner will always present a particular level of risk and, typically, these RATs employ scoring systems that are based on a collection of closed questions with decisions being triggered by simplistic ‘RAG ratings’. In view of the significance of these decisions, the rudimentary nature of the method is itself a source of risk.
A ‘RAG rating’ is the classification, in risk registers using a ‘traffic light system’, of an issue or item as red, amber or green.
For example, the design of these tools often assumes that private sector providers are inherently more risky than those that receive public funding. This, of course, is not invariably the case, and the assumption might lead a university to adopt an approach to the approval of a partnership with a further education college that is so 'light touch' that it never discovers, until it is too late, that that college has some real shortcomings. There can be no substitute for thorough, sometimes forensic, due diligence enquiries.
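To make the formulaic character of these tools concrete, the following is a minimal, purely hypothetical sketch of the scoring logic that a typical RAT embodies. The questions, weights and RAG thresholds are invented for illustration (they are not drawn from any institution’s actual tool), and the sketch deliberately reproduces the presumptive, closed-question design criticised above.

# A deliberately simplistic, hypothetical 'risk assessment tool' (RAT).
# The closed questions, weights and thresholds below are invented for
# illustration; they exemplify the presumptive design criticised in the
# text, not recommended practice.
CLOSED_QUESTIONS = {
    "partner_is_privately_funded": 3,   # presumes private providers are riskier
    "partner_is_overseas": 3,
    "partner_new_to_uk_higher_education": 2,
    "delivery_involves_franchising": 2,
    "programme_is_new_to_partner": 1,
}

def rag_rating(answers):
    """Sum the weights of 'yes' answers and map the total to a RAG band."""
    score = sum(weight for question, weight in CLOSED_QUESTIONS.items()
                if answers.get(question, False))
    if score >= 6:
        return "red"    # full scrutiny before approval
    if score >= 3:
        return "amber"  # enhanced scrutiny
    return "green"      # 'light touch' approval route

# A publicly funded further education college scores 'green' here, which
# illustrates the warning above: the tool's presumptions may wave through
# a partner whose real shortcomings only forensic due diligence would find.
print(rag_rating({"programme_is_new_to_partner": True}))  # -> green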
The report on the HEFCE-funded quality risk management project commented on ‘the a priori assessment of a proposed partnership against standardised and predetermined criteria, honed (and sometimes reviewed) in the light of hard-won experience’. It suggested, instead, that ‘the responsible department (and, indeed, the partner organisation itself) should take responsibility for identifying and assessing risk on a day-to-day basis once the partnership has been established. It is important that this ‘ongoing’ (or empirical) risk assessment should not be undertaken against pre-determined criteria. Our world is uncertain, particularly so for collaborative provision developed and maintained in turbulent policy, legal and market environments’ (Raban and Turner, 2005).
Dashboards, RAG ratings and RATs are in such common use that one might suspect that they are symptomatic of a ‘herd-like’ response to uncertainty: as Roger King (2011a) has put it, ‘isomorphism and copying others may appear the safest protective strategy … at times of rapid change’. Mimetic action of that kind would justify Michael Power’s (2004) suggestion that we might be witnessing ‘just one more management craze’. Power, however, recognises another possibility – that the ‘invasion’ of ‘organisational life’ by ‘ideas about risk and its management’ could represent ‘a rational response to an increasingly risky world.’ As such, the development of a risk-based approach could be an attempt by an institution to enhance its resilience and competitiveness by reducing the burden and costs imposed by a quality management system, by improving its effectiveness, or by engaging in some combination of the two.
I have touched already on the appeal of the ‘cost and burden’ rationale for regulatory reform (see paragraph 20, above). Risk-based quality management has similar attractions for those who are seeking to reduce costs and lessen the load on academic departments. From their point of view, discursive and labour-intensive approaches to quality assurance could be replaced by ones that are ‘data-driven’ with management decisions being prompted by ‘dashboards’ and ‘traffic lights’. A case in point is the use of risk assessment tools. They offer a simple, straightforward and undemanding (though ultimately inadequate) method for identifying and acting upon the risks presented by new ventures.
Of course, uncertainty could prompt institutions not to join the herd, but to behave like hedgehogs. They could adhere to ‘established’ arrangements, secure in the knowledge that they are ‘complying’ with the ‘requirements’ of the Quality Code. But neither kind of conformity – copying others or doing nothing – can provide a secure basis for the construction of systems that will enable an institution to manage its responsibilities in the current turbulent state of the sector. At a time of such rapid change, a conventional approach to quality management might itself become a source of risk, impeding us in our efforts to foresee and forestall threats to our institutions and to their provision.
As was explained in an earlier piece, ‘approaches to quality assurance that were fit for the purpose of managing an institution’s responsibilities in one set of circumstances may be inappropriate in another’. Now, more than ever, institutions need to act in a creative and self-determining manner, developing quality strategies that will better equip them to 'cope with the challenges of the future' (Raban and Turner 2006). In that earlier article we went on to offer a number of principles which might inform the development of a risk-based quality management system. One of these, and possibly the most important, was that an institution’s approach to academic governance should recognise that ‘responsibility for “at risk” provision is shared between teaching staff and their managers’. To this end, institutions would need to establish ‘a climate in which staff are encouraged to disclose evidence that provision is “at risk” and identify factors which might jeopardise the quality and standards of academic provision in the future’. We argued that such a ‘culture of challenge’ (see para 29, above) is crucial if a college or university is to develop the relationships of trust that will enable it to become and remain a ‘learning organisation’ with effective academic governance.
Uncertainty and turbulence place a premium on a provider’s ability to act as a learning organisation. In this context, then, an effective quality management system would ensure that the institution has a continuously improving capacity to identify, evaluate and act on risks that arise from both within and outside the organisation. That much is obvious; less clear are the practical implications, although we can derive some guidance from our earlier discussion on risk-based regulation.
An approach to quality management that is both ‘risk-based’ and effective would have three characteristics:
Monitoring and review procedures would perform a reconnaissance and not just a surveillance function: they would attempt to anticipate future risks (see paras 7 and 12) and those that originate from the organisation’s (or a department’s) operating environment (paras 14-15).
The evidence upon which assessments of risk are based should not be over-reliant on metrics and presumptive judgements (see paras 19-24).
Judgements of competence as well as performance should inform the way in which an institution’s senior academic body engages with academic departments, the intensity with which they are scrutinised and the ways in which they are supported (see paras 13, 16-18).
To these characteristics we might add a fourth:
Front line staff deliver the core business of an institution and they will be a primary (although not the only) source of its risk intelligence. This means that the design of a quality management system should secure the mutual accountability of managers and their staff, and of those located within central and academic departments.
The principle of mutual accountability, taken together with the emphasis I have placed on ensuring that the interpretation of risk is subject to expert judgement and challenge (see paras 24-29), underscores the importance of an institution’s arrangements for academic governance. Its deliberative structure, its committees, should provide the arenas within which all parties – academic and support staff, students and managers – can call each other to account, risks can be assessed and actions can be agreed.
There is a potential tension between our characterisation of the governance aspects of an effective quality management system and the ‘cost and burden’ argument for reform. As we have seen, a common justification for risk-based, ‘lean’ and, where merited, ‘light touch’ approaches is that they free up institutions and their departments to compete more effectively in the marketplace: the organisation is said to become more ‘agile’, ‘fleet of foot’, ‘responsive’ and ‘adaptable’. These points are sometimes invoked in arguments for ‘streamlining’ an institution’s committee structure, suggesting that universities are ‘ill-positioned for an age of fast change and higher uncertainty’ because ‘decisions become lost in a welter of committees’ (Burton Clark 2001). If and when providers respond by reducing the significance of committees and strengthening the decision making powers of their executive teams they limit the scope for the expert judgement and challenge that is essential if quality management, risk-based or otherwise, is to be effective.
Michael Shattock (2013) has argued that an ‘uncertain, unstable environment’ is ‘a forcing house for the concentration of decision-making powers into small groups’. Not only might this concentration of power curtail opportunities for deliberation, but it could also create the kind of organisational culture in which the views of front line staff are not valued and in which they are unwilling to recognise and disclose risk (see paras 28-29, above). Shattock goes on to describe this concentration of powers as a dysfunctional reaction to ‘an uncertain and volatile external environment’. If universities are to survive and succeed in current conditions, ‘governance, leadership and management (need to become) more effective’ by being ‘open to bottom-up influence and to an influx of new ideas and initiatives’. Shattock concludes that those universities that are able to ‘resist the concentration of decision-making powers into smaller groups … are likely to emerge with a more distinctive academic culture and a better academic product’.
Shattock’s conclusion is consistent with Burton Clark’s ‘entrepreneurial counter narrative’ and his notion of a ‘stimulated academic heartland’. The argument has been taken further by Susan Lapworth (2004) in her plea for shared governance, and by Robin Middlehurst’s (2013) argument that ‘command and control’ styles of internal governance are ‘inappropriate and inadequate to meet the challenges of the era of globalisation and the “knowledge, communication and information revolutions” that are now underway’.
In this section I have drawn upon the earlier part of this paper to propose conditions for establishing an effective risk-based approach to internal quality management. The debates on and proposals for risk-based regulation should not, however, be treated as a blueprint for the design of institutional systems. For example, whilst it might be an appropriate use of its powers for a regulator not to intervene in an institution’s affairs (see paras 40-45), that provider’s senior academic body could not adopt the same stance without exposing itself to the charge of abdication. And, crucially, colleges and universities will need to go beyond the current terms of the debate on regulation to consider how they might establish and maintain the institutional climate, the culture of challenge rather than compliance, that will ensure that their quality management systems are effective. That is an issue for all, irrespective of whether they intend to develop a specifically risk-based approach to quality management.
Conclusion
This paper started with a brief review of the recent debate on the regulation of higher education institutions (paragraph 2) and it went on to consider in more detail the Funding Council’s revised operating model (ROM). The ROM is a sub-species of the government’s avowedly risk-based approach, distinguished by the way in which it brings the ‘internal governance’ of institutions to centre stage (see paras 27 and 35, above). It places the onus on institutions, and ultimately on their governing bodies, to demonstrate the fitness for purpose and effectiveness of their arrangements for ‘internal’ or academic governance – their management and committee structures, together with their approach to the approval, monitoring and review of their academic provision.
Times are changing. The new regulatory regime, and the unprecedented turbulence now facing the sector, create the opportunity and perhaps the need for fresh thinking on the design and operation of internal quality management systems. This might include the development of risk-based approaches, drawing inspiration from the recent debates on risk-based regulation. If so, the advice of the Regulatory Reform Committee stands, for regulators and providers alike. Both need to consider the meaning of ‘risk’, how they should focus their assessments of risk, the nature of the evidence that should support such assessments, and the governance arrangements that will be necessary to ensure that the evidence and its implications are subject to expert judgement and challenge. It is these quality management arrangements, including – where appropriate – the effectiveness of a provider’s own ‘risk-based’ approach, that will be tested through HEFCE’s (and soon the Office for Students’) quinquennial Assurance Review process (para 36).
30 June 2017
Comments and correspondence should be addressed to colin.raban@gmail.com
References
Academic Audit Associates. (2017). The conduct of academic governance: a discussion note.
Adams, J. (2001). Risk. Routledge.
Adams, J. (2006). Risk Management: Making God Laugh. Financial World.
Better Regulation Task Force. (2003). Principles of Good Regulation. Cabinet Office.
BIS (Department for Business Innovation and Skills). (2011). Students at the Heart of the System. Cm 8122.
BIS. (2015). Fulfilling our Potential: teaching excellence, social mobility and student choice. Cm 9141.
BIS. (2016). Success as a Knowledge Economy: teaching excellence, social mobility and student choice.
Brown, R. (2015). QAA: a watchdog is for life. Times Higher Education. 9 July.
Clark, B. (2001). The Entrepreneurial University: new foundations for collegiality, autonomy and achievement. Higher Education Management, Vol 13, No 2.
Committee of University Chairs. (2014). Higher Education Code of Governance.
Competition and Markets Authority. (2015). An effective regulatory framework for higher education: a policy paper. CMA42. 23 March.
DiMaggio, P. J. and Powell, W. W. (1983). The Iron Cage Revisited: institutional isomorphism and collective rationality in organisational fields. American Sociological Review, Vol 48, Issue 2.
Fender, B. (2003). A Funding Council Perspective, in Warner and Palfreyman (2003).
FSA (Financial Services Authority). (2009). The Turner Review: a regulatory response to the global banking crisis.
Gallagher, M. (2010). The accountability for quality agenda in higher education. Canberra: Group of Eight.
Gallagher, M. (2011). Putting proportionality into practice: a call for a more nuanced and mutual approach to university regulation. Address at the Higher Education Congress, Sydney. 7 March.
Griffiths, A. and Halford, E. (2015). Zen and the art of risk assessment: what are the implications of a system of risk-based quality assurance for higher education in England? Paper presented at the European Quality Assurance Forum, November.
HEFCE. (2001). Risk Management: a guide to good practice for higher education institutions. 01/28. May.
HEFCE. (2012a). A risk-based approach to quality assurance: consultation. 2012/11. May.
HEFCE. (2012b). A risk-based approach to quality assurance: outcomes of consultation and next steps. 2012/27. October.
HEFCE. (2014). Update on quality assessment issues. November.
HEFCE. (2015a). The future of quality assessment in higher education. Quality Assessment Review Steering Group discussion document. January.
HEFCE. (2015b). Quality Assessment Review: future approaches to quality assessment in England, Wales and Northern Ireland. Consultation document, HEFCE 2015/11. 29 June.
HEFCE. (2016). Revised Operating Model for Quality Assessment. March.
Hommel, U. and King, R. (2013). The emergence of risk-based regulation in higher education. Journal of Management Development, Vol 32, Issue 5.
House of Commons. (2009a). Themes and Trends in Regulatory Reform: ninth report of Session 2008-09. HC 329-I.
House of Commons. (2009b). Themes and Trends in Regulatory Reform: ninth report of Session 2008-09. Volume II: oral and written evidence. HC 329-II.
House of Commons. (2009c). Banking crisis: dealing with the failure of the UK banks. Treasury Committee. See also the Treasury Committee Written Evidence, Part 3, paragraphs 2.9 and 3.17, February 2009.
House of Commons. (2009d). Students and Universities. Innovation, Universities, Science and Skills Committee. July.
House of Commons. (2013). ‘An accident waiting to happen’: the failure of HBOS. Parliamentary Commission on Banking Standards, fourth report of session 2012-13. 4 April.
King, R. (2011a). Regulatory trust a two-way street. The Australian. 6 July.
King, R. (2011b). The risks of risk-based regulation: the regulatory challenges of the higher education White Paper for England. HEPI.
King, R. (2014). Regulating Uncertainty: pluralism and centralism in the regulatory regime for higher education in England. Talking About Quality, QAA.
Lapworth, S. (2004). Arresting decline in shared governance: towards a flexible model for academic participation. Higher Education Quarterly, Vol 58, No 4. October.
Lee Dow, K. and Braithwaite, V. (2013). Review of Higher Education Regulation. Canberra: Commonwealth of Australia.
Middlehurst, R. (2013). Changing internal governance: are leadership roles and management structures in United Kingdom universities fit for the future? Higher Education Quarterly, Vol 67, No 3. July.
MRUK. (2015). The future of quality assessment in higher education: an analysis of the responses to Phase 1 of the quality assessment review. June.
Power, M. (2004). The Risk Management of Everything: rethinking the politics of uncertainty. Demos.
QAA. (2013). Strengthening the quality assurance of UK transnational education.
QAA. (2015a). Higher Education Review: second year findings 2014-15.
QAA. (2015b). QAA Response to the Quality Assessment Review. August.
QAA. (2016a). Fulfilling Our Potential: teaching excellence, social mobility and student choice. QAA’s response. January.
QAA. (2016b). Quality Review Handbook.
Quality Assessment Review Steering Group. (2015). The future of quality assessment in higher education. HEFCE. January.
Raban, C. (2008). Partnership, prudence and the management of risk, in K. Clarke (ed), Quality in Partnership. CVU/Open University Press.
Raban, C. (2011). Risk and Regulation. Talking About Quality, QAA. 1 October.
Raban, C. and Cairns, D. (2014). How did it come to this? Perspectives, Vol 18, No 4. January.
Raban, C. and Cairns, D. (2015). Where do we go from here? Perspectives, Vol 19, No 4.
Raban, C. and Turner, E. (2005). Managing Academic Risk. Edge Hill.
Raban, C. and Turner, E. (2006). Quality Risk Management: modernising the architecture of quality assurance. Perspectives, Vol 10, No 2.
Russell Group. (2014). Russell Group Statement on Quality Assessment in Higher Education. 7 October.
Russell Group. (2015). HE Quality Assurance. 15 April.
Scott, P. (2003). Learning the Lessons, in Warner and Palfreyman (2003).
Shattock, M. (2013). University governance, leadership and management in a decade of diversification and uncertainty. Higher Education Quarterly, Vol 67, No 3. July.
Sursock, A. (2002). Reflections from the Higher Education Institutions’ Point of View: accreditation and quality culture. European University Association.
TEQSA. (2012). Regulatory Risk Framework. February.
TEQSA. (2016a). Risk Assessment Framework. February.
TEQSA. (2016b). Annual Report 2015-2016. Australian Government.
Walker, D. (2009). A review of corporate governance in UK banks and other financial industry entities. HM Treasury.
Warner, D. and Palfreyman, D. (2003). Managing Crisis. Open University Press.
Wilsdon, J. et al. (2015). The Metric Tide: report of the Independent Review of the Role of Metrics in Research Assessment and Management. HEFCE. July.
© Colin Raban and Academic Audit Associates Ltd 2017