Using data to improve the quality of education

Last update 22 Mar 23

There is worldwide concern that learning outcomes have not kept pace with the expansion of education. The extent of the learning deficit is largely unknown because many countries lack systematic data on who is learning and who is not. Learning assessments provide data on the status of learning, which can be used to monitor the quality of systems and student learning outcomes. Regular monitoring can reveal changes over time in response to interventions to improve student outcomes, providing feedback and additional data for decision-making.

Learning data, in conjunction with other dimensions of quality such as context, the teaching and learning environment, and learner characteristics, can reveal the factors that most affect learning outcomes. By revealing gaps in student achievement and service provision, data can be used to identify groups that are underserved and underperforming. Once identified, such inequities can be addressed.

Data can be used to hold the system accountable for the use of resources by showing whether increased public investment in education has resulted in measurable gains in student achievement. Although direct accountability for results rests mainly with the school, the enabling policy and practice environment is the responsibility of decision-makers at all administrative levels.

Which actor needs which type of data?

Data-driven decisions to improve learning are taken at each level of the system. The specificity of the data decreases from school to national level, and the time lag between data collection and application increases. Decisions concerning individual students, classes, and schools are made locally, where raw data are produced. System-wide decisions based on aggregated data are made nationally.
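
To make the idea of aggregation concrete, the minimal sketch below (not drawn from the source; the column names and figures are illustrative assumptions) shows how the same student-level assessment records can be summarized at school, district, and national level, losing specificity at each step while gaining system-wide coverage.

    # Illustrative sketch: aggregating raw student-level assessment records
    # upward through the system. Column names and values are assumptions.
    import pandas as pd

    records = pd.DataFrame({
        "district": ["North", "North", "North", "South", "South"],
        "school":   ["A", "A", "B", "C", "C"],
        "score":    [62, 48, 71, 55, 66],
    })

    # School level: detailed data used by teachers and head teachers
    school_means = records.groupby(["district", "school"])["score"].mean()

    # District level: aggregated data used by district offices
    district_means = records.groupby("district")["score"].mean()

    # National level: system-wide summary used by central planners
    national_mean = records["score"].mean()

    print(school_means, district_means, national_mean, sep="\n\n")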

Classroom teachers

Classroom teachers manage the teaching and learning process. They monitor students’ learning through informal means, such as quizzes and games, and through formative tests. Teachers use the data to assess a student’s performance, strengths, weaknesses, and progress. Additional information on an individual student’s background allows the teacher to diagnose possible causes of poor performance and apply remedies. The data can also be used for self-evaluation, to identify where teachers could improve their pedagogy or classroom management.

Head teachers

Head teachers assess the school’s overall performance. They examine student achievement and attainment, staff performance, and the use of school resources. Head teachers set and monitor school practices, programmes, and policies. They need raw achievement data, information on teachers’ classroom practices and their contribution to student outcomes, and feedback on their own performance as rated by supervisors.

Parents and communities

Parents and communities require information on students’ achievement, including their strengths and weaknesses, and any behavioural issues. They are concerned about public examination results, since performance determines their children’s progress to further education or employment. Parents and school staff can discuss and agree an agenda for action to support student needs. Parents can support school improvement through parent-teacher associations and school boards.

District and provincial level actors

District level actors are responsible for overseeing the management and quality of schools in the district. They collect and aggregate school data on student attendance and achievement, teacher attrition and absenteeism, and resources. They play an important role in identifying schools’ resource needs, monitoring standards, and recommending improvement measures.

Provincial level administrators, coordinators, and supervisors make decisions based on evidence of an issue serious enough, or an opportunity good enough, to warrant the commitment of time and provincial resources. Their focus is on planning and using interventions that provide large groups of schools with the resources and expertise to set up and evaluate their education programmes and, guided by the evaluation results, to adopt procedures that improve effectiveness.

National level officials

National level officials make broad policy decisions on links between government directives and the plans and resources needed to comply with those directives. They need substantial system-wide information on current student outcomes and associated factors, together with data on long-term trends. These are collected and collated to provide the basis for decisions on the whole, or a major part, of the education system. Data sources include education management information systems (EMIS), national examination results, and learning assessments.

What information can the data provide and how can it be used?

Learning data, augmented with background data, provide information on how well students are learning, what factors are associated with achievement, and which groups perform poorly. This information can be used for system analysis, improved resource allocation, and agenda setting, as well as throughout the policy cycle.

Education system analysis

Education systems may be analyzed in terms of:

  • What students are learning;
  • Whether what they learn responds to parents’, community, and country needs and aspirations (relevance);
  • How well resources are used to produce results (internal efficiency);
  • What the main factors influencing learning are; and
  • Which aspects of the system require improvement.

If the data show some groups’ learning outcomes are low due to their location, ethnicity, religion, or disability, measures can be taken to provide additional resources, such as teachers or books, aimed at improving their achievement.
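
As a hedged illustration (not from the source; the group labels, scores, and benchmark are assumed), disaggregating assessment results by a background characteristic such as location is one simple way such gaps can be surfaced before targeting additional resources.

    # Illustrative sketch: disaggregating assessment results by location to
    # flag groups falling below an assumed proficiency benchmark.
    import pandas as pd

    data = pd.DataFrame({
        "location": ["urban", "urban", "rural", "rural", "rural"],
        "score":    [68, 74, 41, 52, 47],
    })

    BENCHMARK = 60  # assumed minimum proficiency score

    summary = data.groupby("location")["score"].agg(["mean", "count"])
    summary["below_benchmark"] = summary["mean"] < BENCHMARK
    print(summary)  # here the rural group would be flagged for support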

Improved resource allocation

The data may reveal issues with the provision and use of resources. School infrastructure, availability of instructional materials, and the use of instructional time influence learning outcomes. Improved instructional materials with information on their use may contribute to better achievement.

Agenda setting and policy-making

According to Clarke (2017), countries at different income levels differ in the focus of their policy and programme design. Generally, high-income countries with established assessment programmes use data for sector-wide reforms or a programme of interventions aimed at improving learning outcomes. Low-income countries that are beginning to use such programmes tend to identify a few separate issues, such as resource allocation or teacher qualifications, as responsible for poor achievement. The resulting policies comprise a few discrete interventions.

Data analysis can identify areas that require improvement, from which an agenda for action can be designed. For example, Meckes and Carrasco found that in Chile, publication of the correlation between students’ socio-economic status and their achievement prompted demands for policies to address equity issues (Raudonyte, 2019).

Seychelles’ use of SACMEQ findings in 2000 provides an example of using assessment results for policy formulation. SACMEQ data indicated large differences in learning outcomes among pupils in the same school, attributable to a long-established practice of streaming by ability from Grade 1. By Grade 6 the gap in learning achievement between girls and boys had widened to such an extent that there were more girls in the elite class and more boys in the lowest class. Effective communication channels, an enabling political context, and effective dialogue among actors contributed to the decision to adopt a de-streaming policy (Leste, 2005, quoted in Raudonyte, 2019).

The regular collection of learning and other related data to monitor policy implementation can report on the status of planned activities, reveal implementation challenges, pinpoint early indications of impact, and suggest modifications to address shortcomings. For example, the Learn to Read initiative in Madhya Pradesh was monitored monthly through standardized tests to detect shortcomings and adjust implementation (Tobin et al., 2015).

National assessments can be used to gauge the impact of policy on learning outcomes and to provide feedback to address shortcomings. In theory, there should be a seamless progression from testing through agenda setting, policy formulation, implementation, and monitoring and evaluation based on more testing. In practice, such a feedback mechanism is often less well organized. This may be due, among other things, to lack of experience with using assessments, weak technical capacity, poor coordination between assessment and decision-making bodies, and funding shortfalls.

Challenges to data use

For data to be used effectively, they must be actionable, available to all who are in a position to act, and presented in a form appropriate to each group of stakeholders. Barriers to data use include the following:

Data availability

Inadequate funding of an assessment programme can mean the programme cannot be completed. Delays in analysis can prevent data from being released in a timely manner. Results may be withheld if they are below expectations. Findings may be dismissed if they do not respond to the needs of the system, or are not actionable or linked to viable policy options.

Access problems

Data access problems include: a failure to communicate results to both the public and those who are in a position to act; results being retained within a ministry of education to restrict their use by other stakeholders and prevent the media and public from lobbying for action; and reports whose content and format are not suited to some or all target groups, who need a variety of data and presentation modes.

Quality issues

Issues with the design, relevance, and credibility of the assessment programme can lead to data being withheld or ignored. Real or perceived deficiencies in assessment instrumentation, sampling and analysis can raise validity and relevance issues. Occasional or ill-designed assessments mean that skills and content are not comparable over time. Caution is needed when developing policy messages based on assessment results without an analysis of supplementary data.

Limited capacity and skills to assess and use the data

Ministries of education may lack experience with national assessments and may have poorly established decision-making procedures and low technical capacity. Technical personnel may lack expertise in assessment design, in-depth data analysis, and interpretation, which can result in superficial and uninformative recommendations. Policy-makers may not understand the implications of the assessment or may not focus on the analysis because of time constraints. Data collection, analysis, availability, and use may all be adversely affected by funding constraints.

Political climate

Conflict and political unrest may impact assessment implementation. Political sensitivities due to low levels of achievement can prevent data use. There may be a lack of political will to act on a recommendation.

Minimizing the challenges

Credibility and acceptability issues can be addressed by involving all relevant stakeholders in the design and implementation of an assessment. The assessment team should have the technical competence to design and administer the assessment and to analyze the results. Ongoing technical training of existing and potential staff is necessary to ensure quality and to allow for attrition.

Building local capacity or establishing a regional coordinating body are both possibilities. Either option requires substantial investment in capacity building, which can be costly and time-consuming.

Judicious use of media channels at all stages of the assessment, including the dissemination of results, together with regular stakeholder discussions, will ensure that the public is kept informed. Distribution will be facilitated if there is a budget for dissemination, a dissemination plan, and reports tailored to different users’ needs.

Existing structures and policy- and decision-making processes within ministries can also act as barriers to data use. To adapt to a data-driven decision-making culture, ministries of education may need to restructure and redefine roles and responsibilities within the organization. Links among staff and with relevant outside institutions need to be established and sustained.

References and sources

Australia. Department of Foreign Affairs and Trade. 2018. Education learning and development module: Learning assessment. Canberra: DFAT.

Best, M.; Knight, P.; Lietz, P.; Lockwood, C.; Nugroho, D.; Tobin, M. 2013. The impact of national and international assessment programmes on education policy, particularly policies regarding resource allocation and teaching and learning practices in developing countries. Final report. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Birdsall, N.; Bruns, B.; Madan, J. 2016. Learning data for better policy: A global agenda. Washington, DC: Center for Global Development.

Clarke, P. 2017. ‘Making use of assessments for creating stronger education systems and improving teaching and learning’. Paper commissioned for the 2017/18 Global Education Monitoring Report, Accountability in education: Meeting our commitments. Paris: UNESCO.

Custer, S.; King, E. M.; Atinc, T. M.; Read, L.; Sethi, T. 2018. Towards data driven education systems: Insights into using information to measure results and manage change. Washington, DC: Center for Universal Education at Brookings/AidData.

De Chaisemartin, T.; Schwantner, U. 2017. Ensuring learning data matters. IIEP-UNESCO Learning Portal.

Mählck, L.; Ross, K. N. 1990. Planning the quality of education: The collection and use of data for informed decision-making. Paris: IIEP-UNESCO.

Postlethwaite, T. N.; Kellaghan, T. 2008. National assessments of educational achievement. Paris: IIEP-UNESCO.

Raudonyte, I. 2019. Use of learning assessment data in education policy-making. Paris: IIEP-UNESCO.

Ross, K. N. 1997. ‘Research and policy: A complex mix’. In: IIEP Newsletter, 15 (1), pp. 1–4.

Saito, M. 2015. The use of learning assessments in policy and planning. IIEP-UNESCO Learning Portal.

Tobin, M.; Lietz, P.; Nugroho, D.; Vivekanandan, R.; Nyamkhuu, T. 2015. Using large-scale assessments of students’ learning to inform education policy: Insights from the Asia-Pacific region. Melbourne/Bangkok: ACER/UNESCO.

UNESCO; IIEP Pôle de Dakar; World Bank; UNICEF. 2014. Education sector analysis methodological guidelines. Vol. 1: Sector-wide analysis, with emphasis on primary and secondary education. Dakar: UNESCO/IIEP Pôle de Dakar.

UNESCO Office Bangkok and Regional Bureau for Education in Asia and the Pacific. 2013. The use of student assessment for policy and learning improvement. Bangkok: UNESCO Office Bangkok.

UNESCO Office Bangkok and Regional Bureau for Education in Asia and the Pacific. 2017. Analyzing and utilizing data for better learning outcomes. Paris/Bangkok: UNESCO/UNESCO Office Bangkok.

UNESCO Office Bangkok and Regional Bureau for Education in Asia and the Pacific. 2017. Large-scale assessment data and learning outcomes: Linking assessments to evidence-based policy making and improved learning. Bangkok: UNESCO Office Bangkok.

UNESCO-UIS. 2018. SDG 4 data digest: Data to nurture learning. Montreal: UIS.

Willms, J. D. 2018. Learning divides: Using data to inform educational policy. Information Paper No. 54. Montreal: UIS.
