Analytic Quality Glossary

 

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Home

 

Citation reference: Harvey, L., 2004-24, Analytic Quality Glossary, Quality Research International, http://www.qualityresearchinternational.com/glossary/

This is a dynamic glossary and the author would welcome any e-mail suggestions for additions or amendments. Page updated 8 January 2024, © Lee Harvey 2004–2024.

 


_________________________________________________________________

Improvement


core definition

Improvement is the process of enhancing, upgrading or enriching the quality of provision or standard of outcomes.


explanatory context

Improvement is one of the purposes of quality in higher education; the other main purposes are accountability, control and compliance.

 

Quality improvement is, however, often used as a generic term to cover improvement of both quality and standards. It is also used to denote both a rationale for quality processes (internal or external to the institution) and the actions undertaken by an institution following a quality evaluation event.


analytical review

CHEA (2002), for example, defines quality improvement as:

The expectation that an institution will have in place a plan to monitor and improve the quality of its programs. In most cases, quality assurance and accrediting agencies require that established procedures ensure that this is an ongoing process.

 

Tempus (2001) links quality improvement to efficiency and enhanced benefits:

Quality Improvement – measures undertaken in order to increase efficiency of actions and procedures with the purpose of achieving additional benefits for the organisation and its users.

 

Another approach refers to the improvement orientation of external quality evaluation:

Improvement orientation: Evaluation or assessment describing the strengths and weaknesses of a unit to facilitate improvement, and possibly including suggestions as to how to achieve it. (Campbell & Rozsnyai, 2002, p. 132)

 

Harvey (2002), summing up developments in quality evaluation, notes:

External quality monitoring is also usually expected to lead to improvement: at the very least more transparent documentation. Monitoring might be specifically designed to encourage continuous improvement of the learning and teaching process. In some cases, such as the Swedish audits, it was designed to evaluate the quality improvement process by identifying improvement projects and evaluating their effectiveness. In any event, external monitoring is usually expected to result in better dissemination of (good) practice.


Øvretveit (2009, p. 8), in the context of health, defines improvement as:

better patient experience and outcomes achieved through changing provider behaviour and organisation through using a systematic change method and strategies.


associated issues

Improvement and accountability

The relationship between improvement and accountability (be it manifested through assessment, audit or accreditation) has been the subject of extensive debate among commentators on higher education quality. The debates are of two broad types: (a) whether improvement and accountability are compatible or incompatible, especially within the same quality review agency; and (b) how accreditation and other forms of evaluation are incorporating an improvement element into their evaluation processes (Stensaker and Harvey, 2004). Woodhouse (1999, p. 39), for example, argues that accountability and improvement ‘are closely linked’ and that ‘it is more sensible to have the same agency sensitively attempting both than to try to separate them’.

There has been a long-running debate about the effectiveness of accountability-oriented approaches in securing improvement. That many systems of monitoring quality specifically emphasise improvement at the second stage of development of the monitoring process tends to suggest that improvement does not necessarily go hand-in-hand with accountability. Initially, in the UK, the Higher Education Quality Council (HEQC) had separate branches for ‘audit’ and for ‘enhancement’, which acknowledged the separation of accountability and improvement. The enhancement function faded away and when the HEQC was absorbed into the QAA it disappeared altogether. (Harvey, 2002)


In their review of contributions to the first 15 years of the international journal Quality in Higher Education, Harvey and Williams (2010) write the following about the accountability and improvement debate:

One area of concern was the tension between improvement and accountability. Despite contributions suggesting how a balance could be achieved, the overall tenor of the contributions was that external quality evaluations of whatever type were not particularly good at encouraging improvement especially when they had a strong accountability brief. Middlehurst and Woodhouse (1995) addressed the question of whether or not it is desirable, feasible, or stable to combine the functions of quality improvement and accountability in national arrangements for quality assurance in higher education. They argued that, while it is possible to specialise a system towards improvement, it is not possible to have a separate system solely for accountability, as it will inevitably overlap into improvement. Improvement and accountability must be conceptually and practically distinct, with separate resourcing. A clear understanding and respect for the separate purposes needs to be developed within both national agencies and institutions. A failure to accommodate different purposes could damage the quality and the integrity of higher education by leading to serious imbalances of power.

Thune (1996) reported the development, during the 1990s, of systematic procedures of evaluation of higher education in several European countries. Accountability and quality improvement, he argued, are often conceived as mutually exclusive goals of evaluation, which are based on different methods related to the ownership of the evaluation system. However, the character of the process is different from, and independent of, control. He argued that accountability and quality improvement may be combined in a balanced strategy and, in the Danish case, these two perspectives have been synthesised in a dual approach, with an emphasis on improvement.

Using Deming’s approach to quality in the industrial sector as a basis for analysis of quality assurance development in the US, the UK, and the Netherlands, Dill (1995) suggested that quality assurance policies are more effective in contributing to improvement when they foster the development of ‘social capital’, both within and between academic institutions.

Danø and Stensaker (2007) maintained that the role and function of external quality assurance is of great importance for the development of an internal quality culture in higher education. Research has shown that external quality assurance can stimulate but also create obstacles for institutional improvement. To strike a balance between improvement and accountability is, therefore, a key issue. They reviewed developments in external quality assurance in the Nordic countries and argued that although external quality assurance during the 1990s could be said to exemplify such a balance, it is questionable whether this balance has been maintained over time, not least considering the introduction of various accreditation schemes in the Nordic countries as well as in the rest of Europe. They pointed to key issues on how external quality assurance could also stimulate a quality culture in the ‘age of accreditation’.

An important element of improvement, it is argued, is the follow-up after the evaluation to ensure suggested improvements are put in place. Leeuw (2002) examined the inspectorate process. Many European countries have inspectorates of education and although they differ in some ways, all focus on the quality of education, all undertake evaluations and all strive for improvement in education. He argued that reciprocity between inspectors and institutions is important. Reciprocity includes exchange of information, both what institutions give and what they get back, and transparency of operations. Reciprocity, he argued, reduced the potential for dissembling and game playing because inspectees would lose credibility as trustworthy partners in the evaluation. Reciprocity is about trust and without it inspectorates run the risk of becoming ‘trust killers’, particularly if they focus too much on their own norms and criteria without discussing them in depth with their inspectees. In practice, only a minority of the 14 European inspectorates examined are involved in a reciprocal relationship with their inspectees. Although an absence of reciprocity is bad for practice, too much reciprocity, he claimed, can harm the independence of inspectorates and may even lead to ‘negotiating the truth’.

Despite these analyses, many agencies have failed to develop an appropriate balance, often failing to accommodate improvement and prioritising accountability. An essential element of that is the apparent dissolution of trust: an issue that recurs.


related areas

See also

enhancement


Sources

Campbell, C. & Rozsnyai, C., 2002, Quality Assurance and the Development of Course Programmes, Papers on Higher Education, Regional University Network on Governance and Management of Higher Education in South East Europe. Bucharest, UNESCO.

Council For Higher Education Accreditation (CHEA) 2001, Glossary of Key Terms in Quality Assurance and Accreditation http://www.chea.org/international/inter_glossary01.html, last updated 23 October 2002, accessed 18 September 2012, page not available 30 December 2016.

Danø, T. and Stensaker, B., 2007, ‘Still balancing improvement and accountability? Developments in external quality assurance in the Nordic countries 1996–2006’, Quality in Higher Education, 13(1), pp. 81–93.

Dill, D.D., 1995, ‘Through Deming’s eyes: a cross-national analysis of quality assurance policies in higher education’, Quality in Higher Education, 1(2), pp. 95–110.

Harvey, L., 2002, ‘Quality assurance in higher education: some international trends’, Higher Education Conference, Oslo, 22–23 January 2002, pp. 21–22.

Harvey, L. and Williams, J., 2010, ‘Fifteen Years of Quality in Higher Education’, Quality in Higher Education, 16(1), pp. 4–36.

Leeuw, F.L., 2002, ‘Reciprocity and educational evaluations by European Inspectorates: assumptions and reality checks’, Quality in Higher Education, 8(2), pp. 137–149.

Middlehurst, R. and Woodhouse, D., 1995, ‘Coherent systems for external quality assurance’, Quality in Higher Education, 1(3), pp. 257–268.

Øvretveit, J., 2009, Does improving quality save money? A review of the evidence of which improvements to quality reduce costs to health service providers. London: Health Foundation.

Stensaker, B. and Harvey, L., 2004, ‘New wine in old bottles or was it the other way around? A comparison of public and private accreditation schemes in higher education’, paper at the CHER Conference, Enschede, 17–19 September.

Tempus, 2001, Glossary of the terms related to quality assurance, Development of Quality Assurance System in Higher Education (QUASYS) Tempus Joint European Project, UM JEP-16015-2001, http://www.unizg.hr/tempusprojects/glossary.htm, accessed 1 September 2012, still available 14 May 2022.

Thune, C., 1996, ‘The alliance of accountability and improvement: the Danish experience’, Quality in Higher Education, 2(1), pp. 21–32.

Woodhouse, D., 1999, ‘Quality and Quality Assurance’ in Organisation for Economic Co-Operation and Development (OECD), 1999, Quality and Internationalisation in Higher Education, pp. 29–44, Programme on Institutional Management in Higher Education (IMHE), Paris, OECD.




