In order for educational quality and learning outcomes to improve, planners need access to evidence-based analyses of the current situation, trends, strengths, weaknesses, and their causes. A strong monitoring and evaluation system can provide that evidence. It all begins with the development of indicators for overall system monitoring, and for keeping track of progress in the specific strategies and programmes featured in an education sector plan.
Indicators for overall system monitoring
Education systems are typically analysed in terms of the overarching context, specific inputs, social or institutional processes, and outputs. Indicators can be developed to measure issues that fall under each of these categories.
The final choice of indicators must be based on the actual education system and on what is available or possible to measure. Furthermore, indicators must be carefully designed so that they can measure change over time, which implies stability both in the underlying construct and in the methods of measurement. Whenever possible, indicators should be disaggregated by gender, geographical area, and other dimensions relevant to equity.
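To make the idea of disaggregation concrete, the sketch below computes a single indicator (an enrolment rate) broken down by gender and by region. All figures, field names, and groupings are invented for illustration; a real system would draw on the country's own education management information system.

```python
from collections import defaultdict

# Hypothetical monitoring records: one entry per region/gender cell.
# Every figure below is illustrative, not real data.
records = [
    {"region": "North", "gender": "F", "enrolled": 460, "school_age_pop": 500},
    {"region": "North", "gender": "M", "enrolled": 480, "school_age_pop": 500},
    {"region": "South", "gender": "F", "enrolled": 300, "school_age_pop": 400},
    {"region": "South", "gender": "M", "enrolled": 348, "school_age_pop": 400},
]

def enrolment_rate(rows, by):
    """Enrolled pupils as a percentage of the school-age population,
    disaggregated by the given key ("gender", "region", ...)."""
    totals = defaultdict(lambda: [0, 0])  # key -> [enrolled, population]
    for r in rows:
        totals[r[by]][0] += r["enrolled"]
        totals[r[by]][1] += r["school_age_pop"]
    return {k: round(100 * e / p, 1) for k, (e, p) in totals.items()}

print(enrolment_rate(records, "gender"))  # e.g. {'F': 84.4, 'M': 92.0}
print(enrolment_rate(records, "region"))  # e.g. {'North': 94.0, 'South': 81.0}
```

The same aggregate rate can mask very different situations: here the overall figures would hide both a gender gap and a regional gap, which is exactly why disaggregation matters for equity analysis.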
Characteristics of different types of indicators and their measurement
Information on input indicators is typically relatively easy to obtain, since inputs are often “countable” by nature and management processes involve keeping records of many inputs automatically. Context and process indicators, in contrast, are often challenging to develop and measure because they concern more complex and/or qualitative issues. Common data collection tools include surveys, inspection reports, and self-evaluations, which require special processes both for administration and for interpretation. Despite these challenges, however, process indicators are often crucial for measuring and understanding attempts to improve learning outcomes.
Output indicators typically involve measures of learning outcomes based on national examinations or international assessments. Less commonly, they may also be measured through studies using surveys or systematic field observations. Output indicators provide the most important data for understanding whether educational quality and learning outcomes are improving as intended. However, input and process indicators must also be measured, because they supply the context needed to interpret the output data accurately.
Monitoring the implementation of plans to improve learning outcomes
Indicators are also needed to keep track of progress in implementing the strategies of an education sector plan, such as the policies and programmes intended to improve learning outcomes. Each programme specified by a sector plan requires introducing particular inputs and carrying out certain activities or processes, in order to achieve the outputs that align with the plan’s strategic vision for change. Indicators can and should be developed to measure progress in all three of these elements.
A simple example can help illustrate the development of these different types of indicators for programmes that are intended to improve learning outcomes. Suppose that Country Eruditus has identified the low level of literacy among primary school students as a key issue to be addressed. Policy-makers have explored several policy options and eventually decided on three programmes intended to address this problem. One of these programmes is “to conduct workshops for teachers on incorporating read-aloud and recreational reading time into their classroom schedule”. Three kinds of indicators can be developed for this programme: input indicators (for example, the number of workshop materials produced and distributed), process indicators (for example, the number of workshops actually conducted and participants’ reported confidence in applying the strategies), and output indicators (for example, the percentage of primary pupils achieving reading proficiency).
In the above example, it is clear that the different kinds of indicators serve different functions. The input and process indicators primarily measure the extent to which intended programme activities were actually carried out. The output indicators, in contrast, measure the immediate effects of the programme activities.
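The distinction between the three indicator types can be sketched as a small calculation. The monitoring snapshot below is entirely hypothetical: every figure and field name is invented for the Eruditus example, and a real monitoring framework would define each indicator, its source, and its reporting cycle far more carefully.

```python
# Hypothetical plan targets and monitoring data for the Eruditus
# reading-workshop programme; all values are illustrative.
plan = {"workshops_planned": 120, "materials_planned": 3000}
monitoring = {
    "workshops_held": 120,
    "materials_distributed": 3000,
    "confidence_scores": [2, 3, 2, 4, 2, 3],  # 1-5 survey scale, sample of teachers
    "pupils_assessed": 50000,
    "pupils_proficient": 21000,
}

# Input indicator: share of planned materials actually distributed.
input_ind = monitoring["materials_distributed"] / plan["materials_planned"]

# Process indicators: workshop completion and participants' mean confidence.
process_completion = monitoring["workshops_held"] / plan["workshops_planned"]
process_confidence = (sum(monitoring["confidence_scores"])
                      / len(monitoring["confidence_scores"]))

# Output indicator: percentage of pupils reaching reading proficiency.
output_ind = 100 * monitoring["pupils_proficient"] / monitoring["pupils_assessed"]

print(f"Materials distributed: {input_ind:.0%} of plan")
print(f"Workshops held: {process_completion:.0%}; mean confidence: {process_confidence:.1f}/5")
print(f"Pupils at reading proficiency: {output_ind:.1f}%")
```

Note how the input and process indicators compare actual delivery against the plan, while the output indicator measures the effect the programme is meant to produce; a programme can score 100% on delivery yet still show weak outputs, which is the situation explored next.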
The importance of measuring input and process indicators, in addition to output indicators, can also be clarified through this example. Suppose that after two years, Country Eruditus discovers that there has been no improvement in the percentage of primary pupils achieving reading proficiency. Without further information, a planning department may simply decide to abandon the whole strategy of reading workshops.
However, a closer look at the monitoring data may reveal, for example, that even though all planned materials were distributed and all planned workshops took place, workshop participants reported very low confidence in their ability to apply the in-class reading strategies. This finding should prompt planners to look deeper into the quality of the workshops themselves, as well as into the assumptions that underpin the recommended in-class reading strategies. In fact, had this monitoring data been collected and analysed promptly, from the very beginning of the programme, it would have been possible to improve the quality of the workshops much earlier, without waiting for the output indicators to suggest that the programme had failed. Careful design of all three types of monitoring indicators therefore permits a learning process in which programmes and policies can be continually refined and improved.