Analytic Quality Glossary

 


 

Citation reference: Harvey, L., 2004-24, Analytic Quality Glossary, Quality Research International, http://www.qualityresearchinternational.com/glossary/

This is a dynamic glossary and the author would welcome any e-mail suggestions for additions or amendments. Page updated 8 January 2024, © Lee Harvey 2004–2024.

 


_________________________________________________________________

Performance indicators


core definition

Performance indicators are data, usually quantitative in form, that provide a measure of some aspect of an individual’s or organisation’s performance against which changes in performance or the performance of others can be compared.


explanatory context

Although ‘performance indicator’ has a relatively precise meaning, the term has grown to cover any statistical data related to the activity of higher education institutions, whether or not the data really throw any light on performance. Furthermore, Yorke (1995, p. 15) noted a tendency for performance indicators to be collected irrespective of the policy framework within which they are to be used; this was particularly evident in the United States in the late 1980s and early 1990s.


analytical review

CHEA (2001) defines performance indicators as:

Representations (usually numeric) of the state of, or outcome from, an education organization, its programs, or processes. Sometimes called "management indicators." Regarded as a set of tangible measures designed to provide public accountability. Often includes admission and graduate data, research records, employment of graduates, cost per student, student/staff ratios, staff workloads, student relevance, class size, laboratory and other equipment, equity, libraries, information technology, and other learning resources. Should be subject to informed interpretation and judgment.

 

The UNESCO definition is:

Performance Indicators: A range of statistical parameters representing a measure of the extent to which a higher education institution or a programme is performing in a certain quality dimension. They are qualitative and quantitative measures of the output (short-term measures of results) or of the outcome (long-term measures of outcomes and impacts) of a system or of a programme. They allow institutions to benchmark their own performances or allow comparison among higher education institutions. (Vlãsceanu et al., 2004, p. 39)


A UNESCO study (Fielden and Abercrombie, 2001, p. 11) states:

1.9 Definitions of performance indicators
Three kinds of indicators have been noted by Cave, Hanney, Henkel and Kogan (1997). Their distinction between “simple”, “performance” and “general” indicators has been adopted in this study to assist Member States in interpreting the meaning and application of the indicators as shown in the Annex. Definitions of the three kinds of indicator are:
‘Simple indicators are usually expressed in the form of absolute figures and are intended to provide a relatively unbiased description of a situation or process.
Performance indicators differ from simple indicators in that they imply a point of reference, for example, a standard, an objective, an assessment, or a comparator, and are therefore relative rather than absolute in character. Although a simple indicator is the more neutral of the two, it may become a performance indicator if a value judgement is involved.
The third category, general indicators, are in the main derived from outside the institution and are not indicators in the strict sense – they are frequently opinions, survey findings or general statistics’.

There are numerous other definitions of performance indicators, but the most comprehensive is that in the recent British study on performance indicators in higher education (HEFCE, 1999) which states that they have five purposes: ‘to provide better and more reliable information on the performance of the sector; to allow comparison between individual institutions; to enable institutions to benchmark their own performance; to inform policy developments; and to contribute to the public accountability of higher education’.


The Higher Education Funding Council for England (HEFCE, 2011) states:

Performance indicators in higher education (HE) provide information on the nature and performance of the HE sector in the UK. They are intended as an objective and consistent set of measures of how a higher education institution is performing. These data provide: reliable information on the nature and performance of the HE sector; the basis for comparisons between similar institutions; performance benchmarks for institutions; evidence to inform policy making; information that helps to make the HE sector more publicly accountable.... The indicators currently cover the following data: widening participation indicators; non-continuation rates (including projected outcomes); module completion rates; research output; employment of graduates.


Cuenin (1986) had earlier defined performance indicators as relative measures:

performance indicators are empirical quantitative or qualitative data that are relative rather than absolute and imply a point of reference that enables an assessment of achievement against a defined objective.
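
The practical import of this relative/absolute distinction can be illustrated with a worked example. The following is a minimal sketch in Python, using invented cohort figures and a hypothetical sector benchmark (not data from any of the studies cited here): a completion rate standing alone is a simple indicator, and it only becomes a performance indicator once it is expressed against a point of reference.

    # Minimal illustrative sketch with hypothetical figures.
    # A simple indicator is an absolute figure; it becomes a performance
    # indicator only when set against a point of reference such as a benchmark.

    def completion_rate(completed: int, enrolled: int) -> float:
        """Simple indicator: absolute completion rate for a cohort."""
        return completed / enrolled

    def relative_to_benchmark(rate: float, benchmark: float) -> float:
        """Performance indicator: the same rate set against a benchmark."""
        return rate - benchmark  # positive = above benchmark, negative = below

    if __name__ == "__main__":
        rate = completion_rate(completed=412, enrolled=500)  # hypothetical cohort
        benchmark = 0.80                                     # hypothetical sector benchmark
        print(f"Simple indicator (completion rate): {rate:.1%}")
        print(f"Performance indicator (vs benchmark): {relative_to_benchmark(rate, benchmark):+.1%}")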


associated issues

Use of performance indicators in quality review

There is considerable variation in the use of performance indicators in quality review (Cave et al., 1997). Woodhouse (1999, p. 33) notes that:

Most commonly, institutions are invited to specify their performance indicators, indicating why and how they use them. The external quality review agency, through its independent review team, then forms its own interpretation of the results. In other systems, however, higher education institutions are expected to report against a system-wide set of performance indicators, which are then available to the external quality review process.


Vlãsceanu et al. (2004, p. 40) argue that:

Performance indicators work efficiently only when they are used as part of a coherent set of input, process, and output indicators. As higher education institutions are engaged in a variety of activities and target a number of different objectives, it is essential to be able to identify and to implement a large range of performance indicators in order to cover the entire field of activity. Examples of frequently used performance indicators, covering various institutional activities, include: the number of applications per place, the entry scores of candidates, the staff workload, the employability of graduates, research grants and contracts, the number of articles or studies published, staff/student ratio, institutional income and expenditure, and institutional and departmental equipment and furniture. Performance indicators are related to benchmarking exercises and are identified through a specific piloting exercise in order to best serve their use in a comparative or profiling analysis.


Kells (1993, p. 7), in a study of 12 OECD countries, noted:

the primary initiative and source of interest in performance indicators remains the government agencies and ministerial officials who are responsible for higher education.

 

The Linke Report (1991), in an early study, identified the following performance indicators:

Institutional context:

Equivalent full-time academic staff; Student/staff ratio; Average student entry score; Academic activity cost per student

Performance in teaching and learning:

Student progress rate; Mean completion time; Research higher degree productivity rate; Perceived teaching quality (from CEQ)

Performance in research and professional service:

Number of research grants; Value of research grants; Average publication rate; Professional service activity

Participation and social equity:

Academic staff gender ratio; Commencing student gender ratio; Academic programme diversity.

 

Validity of performance indicators

In the early 1990s there was much research on ‘performance indicators’, most of which suggested that statistical indicators, whether reliable or not, are rarely valid operationalisations of quality (Klein and Carter, 1988; Cave and Kogan, 1990; Goedegebuure et al., 1990; Head, 1990; Johnes and Taylor, 1990; Pollitt, 1990; Cave et al., 1991; Gallagher, 1991; Yorke, 1991; Murphy, 1994). Furthermore, despite being called ‘indicators’, it is often unclear exactly what performance they indicate.

What, for example, does an increase in the percentage of ‘good’ degree classifications tell us about quality? Does it indicate that student learning performance has improved? Does this mean that the teaching staff have performed better, or are the students learning more despite the teachers? Or does it mean that academic standards have fallen? Similarly, what does the employment rate of graduates within the first six months after graduation tell us about the performance of the institution? Perhaps it says more about the vagaries of the recruitment process and the differential take-up rates between subject specialisms than about the performance of the institution.

 

Harvey (1998, p. 243) argues that, in practice, performance indicators are usually simplistic, convenience measures that bear no relation to any notion of quality.

Yorke (1998) suggested that the benefit that might accrue from improving statistical measures to make them into really meaningful performance indicators would be outweighed by the cost of doing so.


In their review of contributions to the first 15 years of the international journal Quality in Higher Education, Harvey and Williams (2010) wrote the following about the contributions on performance indicators:

A few contributions focused on the use of performance indicators as quality instruments. Ewell (1999) explored how and whether institutional performance measures can be beneficially used in making resource allocation decisions. He analysed different kinds of information-driven funding approaches and proposed several policy trade-offs that must be taken into consideration in designing information-driven approaches to resource allocation. He concluded that only easily verifiable ‘hard’ statistics should be used in classic performance funding approaches, though data such as surveys and the use of good practices by institutions may indirectly inform longer-term resource investments.

Yorke (1995) had examined developments in the use of performance indicators in several countries. He argued for the use of indicators designed to support the enhancement of quality in higher education, which may need to be ‘softer’ than the statistical measures (themselves not perfectly ‘hard’) used for the purposes of accountability. In a subsequent paper, Yorke (1998) stated that performance indicators are well-established in the language of accountability in higher education, and are used to serve a variety of political and micro-political ends. However, the speed of their implementation has not been matched by equivalent progress in the development of their technical qualities, particularly in the general area of the development of students. He analysed the validity and robustness of a selection of student-related indicators (including their vulnerability to manipulation) and cautioned about the use of such indicators in policy arenas.

Barrie and Ginns (2007) also critiqued the use of student survey-derived teaching performance indicators, which grew out of a new government policy in Australia in 2005, linked to the distribution of many millions of dollars of funding. Universities are thus understandably keen to enhance their performance on such measures. However, the strategies by which universities might achieve these improvements are not always readily apparent and there is a real possibility that reactive responses, based on opinion rather than evidence, will result. They examined how student survey measures of ‘quality’ at the subject level could link to national performance indicators and, importantly, the organisational issues affecting how such information is interpreted and acted upon to usefully inform curriculum and teaching improvement.

Busch, Fallan and Pettersen (1998) examined differences in performance indicators of job satisfaction, self-efficacy, goal commitment and organisational commitment among teaching staff in the college sector in Norway. Variations in performance indicators between the faculties of nursing, teacher education, engineering and business administration were discussed and the managerial implications indicated.

Employability is another area that has been linked to performance indicators. Little (2001) reported findings from a European survey of graduate employment and showed the difficulty of trying to make international comparisons of higher education’s contribution to graduate employability. Further, comparison within the United Kingdom is also problematic and evidence is provided of the impact on employability of factors outside higher education institutions’ control. The conclusion is that employability figures are not trustworthy indicators of higher education quality. Morley (2001) similarly cast doubts on the value of graduate employment statistics as a performance indicator especially as they ignore gender, race, social class and disability. Warn and Tranter (2001), in the same special issue, argued that a generic competency model could be used (at least in Australia) to define the desired outcomes of post-compulsory education.

Rodgers (2008) suggested that the desire, in the UK, to enhance the quality of the services provided by higher education institutions has led to the development of a series of benchmarking performance indicators. He explored whether similar indicators could be developed for use as tools in the management of quality within students’ unions and identified potential benchmarks.

The overall view was that national performance indicators are regarded with suspicion, especially when they simply measure the easily measurable rather than being carefully designed to evaluate the underlying issue.


related areas

See also

benchmark

benchmarking

external review indicators

statistical indicators


Sources

Barrie, S. and Ginns, P., 2007, ‘The linking of national teaching performance indicators to improvements in teaching and learning in classrooms’, Quality in Higher Education, 13(3), pp. 275–286.

Busch, T., Fallan, L. and Pettersen, A., 1998, ‘Disciplinary differences in job satisfaction, self-efficacy, goal commitment and organisational commitment among faculty employees in Norwegian colleges: an empirical assessment of indicators of performance’, Quality in Higher Education, 4(2), pp. 137–157.

Cave, M. and Kogan, M., 1990, ‘Some concluding observations’, in M. Cave, M. Kogan, & R. Smith (Eds.), 1990, Output and Performance Measurement in Government: The state of the art, pp. 179–87 (London, Jessica Kingsley).

Cave, M., Hanney, S., Henkel, M. and Kogan, M., 1997, The Use of Performance Indicators in Higher Education: The challenge of the quality movement, third edition, Higher Education Policy Series 34, London, Jessica Kingsley.

Council For Higher Education Accreditation (CHEA) 2001, Glossary of Key Terms in Quality Assurance and Accreditation http://www.chea.org/international/inter_glossary01.html, last updated 23 October 2002, accessed 18 September 2012, page not available 30 December 2016.

Cuenin, S., 1986, ‘International study of the development of performance indicators in higher education’, in Cave, M., Hanney, S., Kogan, M. and Trevett, G. (1991), The Use of Performance Indicators in Higher Education: A critical analysis of developing practice, second edition, London, Jessica Kingsley.

Ewell, P.T., 1999, ‘Linking performance measures to resource allocation: exploring unmapped terrain’, Quality in Higher Education, 5(3), pp. 191–209.

Fielden, J. and Abercrombie, K., 2001, Accountability and International Co-operation in the Renewal of Higher Education, a UNESCO Higher Education Indicators Study, Paris, UNESCO.

Gallagher, A., 1991, ‘Comparative value added as a performance indicator’, Higher Education Review, 23 (3), pp. 19–29.

Goedegebuure, L.C.J., Maassen, P.A.M. & Westerheijden, D.F. (Eds.), 1990, Peer Review and Performance Indicators: Quality Assessment in British and Dutch Higher Education (Culemborg, Lemma).

Harvey, L., 1998, ‘An assessment of past and current approaches to quality in higher education’, Australian Journal of Education, 42(3), pp. 237–55.

Harvey, L. and Williams, J. 2010, 'Fifteen Years of Quality in Higher Education', Quality in Higher Education, 16(1), pp. 4–36.  

Head, P., 1990, Performance Indicators and Quality Assurance. Information Services Discussion paper, 4 June, 1990. (London, CNAA.)

Higher Education Funding Council for England (HEFCE), 2011, Performance indicators in higher education, available at http://www.hefce.ac.uk/learning/perfind/default.asp, last updated 8 July 2011, accessed 23 January 2012, not available 28 August 2012. An earlier version was published in 1999 as HEFCE Report 99/66.

Johnes, J. and Taylor, J., 1990, Performance Indicators in Higher Education. (Buckingham, Society for Research into Higher Education (SRHE)/Open University Press).

Kells, H.R. (Ed.), 1993, The Development of Performance Indicators for Higher Education: A compendium for eleven countries, second edition, Paris, Organisation for Economic Co-operation and Development, ED 331 355.

Klein, R. and Carter, N., 1988, ‘Performance measurement: a review of concepts and issues’, in D. Beeton (Ed.), 1988, Performance Measurement: Getting the concepts right, (London, Public Finance Foundation).

Linke Report (1991) Performance Indicators in Higher Education. Report of a Trial Evaluation Study Commissioned by the Commonwealth Department of Employment, Education and Training, Volumes I and II, AGPS: Canberra.

Little, B., 2001, ‘Reading between the lines of graduate employment’, Quality in Higher Education, 7(2), pp. 121–129.

Morley, L., 2001, ‘Producing new workers: quality, equality and employability in higher education’, Quality in Higher Education, 7(2), pp. 131–138.

Murphy, P., 1994, ‘Research quality, peer review and performance indicators’, The Australian Universities Review, 37 (1), pp. 14–18.

Pollitt, C., 1990, ‘Measuring university performance: never mind the quality, never mind the width’, Higher Education Quarterly, 44(1), pp. 60–81.

Rodgers, T., 2008, ‘Measuring quality in higher education: can a performance indicator approach be extended to identifying the quality of students’ union provision?’, Quality in Higher Education, 14(1), pp. 79–92.

Vlãsceanu, L., Grünberg, L., and Pârlea, D., 2004, Quality Assurance and Accreditation: A Glossary of Basic Terms and Definitions (Bucharest, UNESCO-CEPES) Papers on Higher Education.

Warn, J. and Tranter, P., 2001, ‘Measuring quality in higher education: a competency approach’, Quality in Higher Education, 7(3), pp. 191–198.

Woodhouse, D., 1999, ‘Quality and Quality Assurance’, in Organisation for Economic Co-Operation and Development (OECD), 1999, Quality and Internationalisation in Higher Education, pp. 29–44, Programme on Institutional Management in Higher Education (IMHE), Paris, OECD.

Yorke, M., 1991, Performance Indicators: Observations on their use in the assurance of course quality, Council for National Academic Awards Project Report 30, January (London, CNAA).

Yorke, M., 1995, 'Taking the odds-on chance: Using performance indicators in managing for the improvement of quality in higher education', Tertiary Education and Management, 1(1), pp. 49–57.

Yorke, M., 1998, ‘Performance indicators relating to student development: can they be trusted?’ Quality in Higher Education, 4(1), pp. 45–61.


copyright Lee Harvey 2004–2024


