November 27th, 2014

‘Frontier methods’ offer a powerful but accessible approach for measuring efficiency of public sector organisations


How can the efficiency of public sector organisations best be measured? Jesse Stroobants and Geert Bouckaert write that while the efficiency of an organisation is typically measured using performance indicators, there are some notable problems with this approach, such as the tendency for different indicators to produce conflicting conclusions on organisational performance. As an alternative, they outline so called ‘frontier methods’, which use direct comparisons between different organisations to create a benchmark or standard for performance.

This post originally appeared on our sister blog EUROPP.

In the current climate of budgetary constraints on the one hand and increasing demands for public services on the other hand, leaders at all levels of government are compelled to seek better ways of managing performance. Solid performance measurement, accurate performance assessment and the use of benchmarking therefore provide useful tools for analysing, interpreting, evaluating and comparing the performance of public sector entities (and their service delivery). The outcomes of using these instruments are not only meaningful for government organisations themselves, but are also of relevance for central government (e.g. as a basis for adjusting policy making) and for accountability towards third parties, such as citizens and communities.

Going further than a simple indicator

Performance measurement and the benchmarking of public sector organisations are usually done using a set of indicators, especially when carried out by practitioners and decision-makers in the policy arena. There are many reasons to recommend the use of performance indicators. They focus on specific aspects of performance (e.g. on efficiency or effectiveness), are readily measured and validated, and are easy to interpret. Indicators might therefore be useful from a managerial perspective. Such indicators – expressed in absolute terms – also provide a good starting point for benchmarking organisational performance in a simple manner, both to track an organisation’s own performance over time and to compare this performance against other similar entities or against a relevant standard.

Image credit: Franklin Heijnen (CC BY-SA 3.0)

However, despite their merits, there are some drawbacks to using performance indicators. First, they provide only an indirect or partial indication of performance: with respect to efficiency, for instance, they are typically single-input/single-output measures. Second, they may produce conflicting results: an organisation that appears to do well on one indicator may perform less successfully when considered using another.

In this context, ‘frontier methods’ offer alternative techniques for measuring and evaluating the performance of a group of comparable entities. Unlike single factor measures that reflect only partial aspects of performance, frontier techniques can be applied to assess overall performance by handling multiple inputs and outputs at the same time. Specifically, Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH) have proven to be useful tools for assessing the relative efficiency of entities.

An accessible approach to measuring and benchmarking the efficiency of public sector organisations

At this point you may be thinking that the term ‘frontier methods’ sounds overly complex, or that these techniques are only likely to be of use to academic specialists. There are several reasons why this impression would be mistaken. It is true that DEA and FDH have been used predominantly by economists and econometricians, and only rarely by those working in public administration; this is a bridge worth re-establishing. In a recent article, we therefore provide a step-by-step application of DEA/FDH to benchmark the efficiency of comparable public sector organisations (in the article’s case, public libraries in Flanders). With this gradual approach, we aim to offer both academics and practitioners a basic grounding in more advanced efficiency measurement techniques.

What is required to set up FDH or DEA for efficiency benchmarking? First of all, you need good-quality data on one or more inputs and one or more outputs for the set of comparable entities whose relative efficiency you wish to measure. These entities are called ‘decision-making units’ (DMUs) and can be hospitals, schools, local governments, museums, libraries, or any other relevant organisations. The only prerequisite is that they are comparable: that is, they ‘produce’ the same kinds of outputs, using the same types of inputs or activities.
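To make the data requirement concrete, here is a minimal sketch (in Python) of how such a dataset might be laid out. The libraries, input and output categories, and figures are entirely hypothetical and serve only to illustrate the structure.

```python
# Hypothetical DMU dataset: each comparable entity (here, imaginary public
# libraries) is described by the same set of inputs and the same set of outputs.
dmus = [
    {"name": "Library A", "inputs": [12.0, 450_000.0], "outputs": [95_000.0, 60_000.0]},
    {"name": "Library B", "inputs": [9.0, 380_000.0], "outputs": [80_000.0, 55_000.0]},
    {"name": "Library C", "inputs": [15.0, 520_000.0], "outputs": [110_000.0, 70_000.0]},
]
# inputs:  [staff (full-time equivalents), expenditure (EUR)]
# outputs: [documents issued, visitor count]
```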

Once you have valid and reliable input and output data for your set of DMUs, you also need software to run the analysis, since these frontier methods use linear programming to evaluate the relative performance of each DMU. Several packages carry out standard FDH and DEA; we would recommend the free software DEA Frontier, a user-friendly add-in for Microsoft Excel.

How does FDH or DEA work?

In short, these methods determine an efficiency score for each DMU by comparing it with best practices within the set of benchmarked entities. The analysis highlights observations which generate the same output with fewer resources (input efficiency) or more output with the same input (output efficiency). The DMUs that are found to be fully efficient (meaning that no other DMU in the set generates more or the same output with the same or fewer resources) construct a piece-wise frontier that envelops all observations in the sample.
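The dominance logic behind FDH is simple enough to sketch in a few lines of Python. The snippet below is an illustration of the idea, not the authors’ implementation, and the single-input/single-output numbers are invented to mirror the pattern of Figure 1 further down.

```python
# FDH dominance check: a DMU is FDH-efficient if no other DMU produces at
# least as much of every output with no more of any input (and is strictly
# better on at least one dimension).
def dominates(a, b):
    """True if DMU a is at least as good as DMU b everywhere, better somewhere."""
    no_worse = (all(xa <= xb for xa, xb in zip(a["inputs"], b["inputs"]))
                and all(ya >= yb for ya, yb in zip(a["outputs"], b["outputs"])))
    strictly = (any(xa < xb for xa, xb in zip(a["inputs"], b["inputs"]))
                or any(ya > yb for ya, yb in zip(a["outputs"], b["outputs"])))
    return no_worse and strictly

def fdh_efficient(dmus):
    """Names of the DMUs on the FDH frontier (not dominated by any peer)."""
    return [d["name"] for d in dmus
            if not any(dominates(o, d) for o in dmus if o is not d)]

# Invented single-input/single-output example:
dmus = [{"name": n, "inputs": [x], "outputs": [y]}
        for n, x, y in [("a", 1.0, 2.0), ("b", 2.0, 4.0), ("c", 3.0, 4.5),
                        ("d", 4.0, 6.0), ("e", 3.5, 3.5)]]
print(fdh_efficient(dmus))   # ['a', 'b', 'c', 'd'] -- 'e' is dominated by 'b' and 'c'
```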

In this manner, the performance of entities is not measured in absolute terms (as is the case using indicators), but assessed relative to each other, which can be considered a purer form of benchmarking. The difference between FDH and DEA is that FDH comparisons are only made with existing, real entities in the set of investigation, unlike DEA, where comparisons are also made with ‘virtual’ DMUs (linear combinations of the observed input-output bundles).

For a single-input/single-output case, both methods can be illustrated graphically, as shown in Figure 1 below. The dots are the entities that are benchmarked against each other. It is apparent from the figure that DMU ‘e’ is relatively inefficient, because both DMUs ‘b’ and ‘c’ generate more output with less input. For each method, a line (or ‘frontier’) indicates the standard of efficiency: for FDH this is the solid line in the figure, while the DEA frontier is shown by the dotted line. Any dot on these lines lies on the ‘efficiency frontier’ and can therefore be deemed ‘efficient’; dots below and to the right of a frontier are inefficient relative to it. As can be seen, DMU ‘c’ is considered efficient in this sense if we use the FDH method, but falls below the efficiency frontier if we use the DEA method. This is due to the so-called ‘convexity assumption’: the efficiency of DMU ‘c’ is ranked not only against the real performers (DMUs ‘b’ and ‘d’), but also against virtual units (linear combinations of DMUs ‘b’ and ‘d’).

Figure 1: Example of FDH/DEA efficiency frontiers

Note: Each dot indicates a hypothetical organisation (DMU). The dots are placed on the figure in accordance with their ‘output’ (i.e. what they produce) and their ‘input’ (i.e. the resources which have been invested into the organisation) in a given field. The most efficient organisations are therefore those which produce high outputs from lower inputs (those at the left/top left of the figure). The DEA and FDH methods create a ‘frontier’ (shown as a dotted line in the case of DEA and a solid line in the case of FDH) whereby any dot placed on this line is assumed to be efficient. As can be seen, dots a, b, c, and d are deemed ‘efficient’ using the FDH line, but dot c is not efficient if the DEA line is used. For more information, see the authors’ longer journal article.
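For readers who prefer a scriptable alternative to the Excel add-in, the following is a sketch of an input-oriented, variable-returns-to-scale DEA model (the linear programme behind the dotted frontier), using SciPy’s linear programming solver. It is not the authors’ code, and the numbers are again invented to reproduce the pattern described in Figure 1.

```python
# Input-oriented, variable-returns-to-scale DEA via linear programming:
# minimise theta subject to a convex combination of peers producing at least
# DMU k's outputs while using at most theta times DMU k's inputs.
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, k):
    """Efficiency score of DMU k. X: (n, n_inputs), Y: (n, n_outputs)."""
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # decision vars: [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                  # sum_j lam_j * x_ji <= theta * x_ki
        A_ub.append(np.r_[-X[k, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                  # sum_j lam_j * y_jr >= y_kr
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[k, r])
    A_eq = [np.r_[0.0, np.ones(n)]]              # variable returns: lambdas sum to 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                               # 1.0 means DEA-efficient

# Invented numbers mirroring Figure 1: single input, single output, DMUs a..e.
X = np.array([[1.0], [2.0], [3.0], [4.0], [3.5]])
Y = np.array([[2.0], [4.0], [4.5], [6.0], [3.5]])
for k, name in enumerate("abcde"):
    print(name, round(dea_input_efficiency(X, Y, k), 3))
# With these numbers DMU 'c' scores roughly 0.83: FDH-efficient, but below
# the convex DEA frontier spanned by 'b' and 'd', as discussed above.
```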

An integrated benchmarking approach: moving along three axes

Once you are familiar with this efficiency benchmarking procedure, you can go further and expand the performance measurement. To illustrate graphically how efficiency benchmarking can be broadened, we developed a three-dimensional scheme in which the three axes represent the possible benchmarking directions, as shown in Figure 2 below.

Figure 2: Base scheme for efficiency benchmarking

Note: The figure indicates how the basic framework shown in Figure 1 can be expanded to include other elements such as time and multiple inputs/outputs for each organisation.

First, unlike an efficiency indicator, FDH and DEA allow the benchmarking to go beyond a single input (e.g. expenditure or staff) and a single output (e.g. the number of documents issued): both approaches can accommodate multiple inputs and multiple outputs simultaneously. Second, the number of DMUs in the benchmarking sample can vary. However, since it is a basic rule that only comparable entities should be benchmarked, it is advisable to compose a deliberate set of DMUs, e.g. local governments in a certain region. Finally, the passage of time is also indicated in the figure given that, as mentioned above, comparing performance over a set period of time is also a form of benchmarking.
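As a usage note on the first axis: the DEA sketch above extends to multiple inputs and outputs without modification, since extra inputs or outputs simply become extra columns in the data matrices (all figures below are, once more, hypothetical).

```python
import numpy as np

# Two inputs (staff FTE, expenditure) and two outputs (loans, visits) per DMU.
X = np.array([[12.0, 450_000.0], [9.0, 380_000.0], [15.0, 520_000.0]])
Y = np.array([[95_000.0, 60_000.0], [80_000.0, 55_000.0], [110_000.0, 70_000.0]])

# dea_input_efficiency(X, Y, k) from the earlier sketch works unchanged:
# scores = [dea_input_efficiency(X, Y, k) for k in range(X.shape[0])]
```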

So what’s the point?

At this point, a natural question is how this type of analysis can be used to help improve the performance of a public sector organisation. FDH and DEA supplement indicator-based measurement and are therefore of great value for organisations in identifying relevant benchmarks and providing performance objectives to work towards. Moreover, by incorporating all possible paths of expansion into a coherent whole, such integrated efficiency analysis – a combination of benchmarking over time and against peers – can build a bridge towards ‘bench-learning’ and efficiency improvement trajectories. Entities found to be fully efficient can evaluate whether their efficiency is sustainable over longer periods. Entities found to be inefficient can identify the counterparts (‘best practices’ in the field) from which they can learn.

Note: This article gives the views of the authors, and not the position of the Impact of Social Sciences blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Authors

Jesse Stroobants is a Researcher at the Public Governance Institute at the University of Leuven, Belgium. His research interests include performance management, performance measurement and efficiency issues in the public sector.

Geert Bouckaert is Professor at the Public Governance Institute at the University of Leuven, Belgium. He is currently the President of the International Institute of Administrative Sciences (IIAS). His fields of research and teaching are public management, public sector reforms, performance management and financial management.

