Three decades of learning assessments for improving education in Latin America: Challenges for delivering on their promise

Written on 22 Jan 20 by Cecilia Galas Taboada
Educational measurement
Cross-national studies


Over the last three decades, countries in Latin America have seen the birth, development and growing complexity of student evaluation systems. As technical capacity continues to develop, and pushback grows in certain sectors, where should institutional efforts be directed in the coming years?

Learning assessments in Latin America have grown and evolved since the 1990s. Their value and use are under scrutiny, and there is controversy over where to go next.

Countries in Latin America have come a long way since the first wave of national large-scale assessments in the early and mid-nineties and the creation of ERCE (Estudio Regional Comparativo y Explicativo), the regional assessment led by UNESCO’s Laboratorio Latinoamericano de Evaluación de la Calidad de la Educación (LLECE). During that decade, countries laid the groundwork for local technical capacity to build evaluation systems, with the support of academic institutions. Student assessments in language and mathematics were undertaken, but results initially had little impact on policy and public dialogue and, in some cases, were not published due to concerns about unsatisfactory outcomes.

As the 2000s rolled in, evaluation departments emerged, either within ministries or as independent entities, overcoming the previous reliance on academic institutions. A new wave of evaluation practice saw the incorporation of context questionnaires and associated-factors analyses to explain student outcomes, although a simplistic view of evaluation results and an expectation of immediate improvement prevailed. Despite the continuous publication of evaluation results, they generated no substantial engagement in public discourse.

This past decade has seen a significant change in this last respect. As evaluation systems have matured and grown in robustness and complexity, student assessments have become intertwined with other evaluation mechanisms, raising the stakes and increasing public awareness. Unintended uses, misinterpretations of results and undesirable practices, such as teaching to the test and excluding certain students from evaluations, have led to questions about whether these assessments should exist at all, particularly given the little to no growth in education quality seen since their adoption thirty years ago.

Some demand the outright abolition of assessments, while others argue for more complex evaluations to hone measures of quality. Advocates of formative evaluation in some countries suggest using large-scale assessments exclusively as low-stakes diagnostic tools, while in other countries some champion extending their use to certification or similar purposes to offset their cost.

We appear to have reached an impasse before a crossroads, where one path leads to giving in to the temptation of calling it quits and the other to the risk of piling more layers onto an already complex system of evaluations to please all sides. Before deciding on either path, two issues must be addressed: what, specifically, are these assessments expected to contribute to education improvement, and why have they, in practice, fallen short?

Legal and institutional frameworks for evaluation show little interconnection between evaluation and improvement processes

Much has been written on political, institutional, technical and communication factors that influence the use of evaluation results, and scholars in the region have done very good work in disentangling how these elements interconnect. A recurrent topic in the literature is the lack of clarity and misalignment of expectations regarding the purpose of student assessments and the theory of change connecting evaluation results to improvements in education quality.

A review of legal evaluation frameworks in the region shows that, despite a consensus on the importance of ensuring quality education for all students, and on evaluating outcomes being a crucial part of this, there is less clarity on the role evaluation plays in improvement and on how that improvement comes about. As legal frameworks move from constitutions to laws, to regulations, guidelines and technical specifications, these issues become blurred. Some attention is paid to aligning “who” evaluates “what”, “when” and even a general “for what”. However, high-stakes scenarios aside, there is little evidence of articulated mechanisms between different departments within the ministry to establish “…then, who does what” to improve the different components that influence those results: curriculum, teacher preparation, organizational constraints at the school level, etc. These synergies appear to be left to the initiative of current leaders and policy agendas, which are subject to change with each administration, rather than being anchored in institutional frameworks based on a culture of continuous improvement within ministries.

A first step towards connecting evaluation and improvement: reframing the role of learning assessment data in connection with other types of information and institutional dynamics throughout the decision-making process.

Studies on how ministry officials use learning assessment data reveal very limited and isolated use. Often, officials say that evaluation results are not reported in a timely fashion, or that decisions are based on other, more pertinent information. Evaluation teams complain that their findings and results are misinterpreted, misused or simply ignored.

The underlying issue points to the question of what dictates how evaluation results are used: is it the needs of decision-makers or the technical design chosen by the evaluation teams?

Perhaps a deeper question would be: what is the role of learning assessments in decision-making and improvement processes? This would open a discussion on additional sources of information, which may or may not be other evaluations, that must be used and reviewed at different points of the planning and operational cycle.

We cannot continue to ask learning assessments, and even evaluation systems, to provide all the information required to inform a process as complex as improving education. Neither can we ask them to make up for the analysis, integration and discussion of related processes and conditions that users must address in order to improve the education system.

Decision-making processes in a system are complex, and improvement even more so. Conversations around the value and contribution of learning assessments to education improvement cannot be held in isolation, as they are just one of the many pieces required to drive change. Because their value is contingent on their use, and this use depends on political, institutional, technical and human factors, the dynamics between these factors cannot be absent from the discussion of how (and whether) learning assessments can drive improvement. This is even more the case in the current regional debate over whether learning assessments should stay and grow or be discarded.

Note: This article is based on a background study commissioned by the IIEP-UNESCO Buenos Aires Office for a regional field study on the political economy of actors that influence the use of large-scale assessment data in policymaking. The field study will mirror the research currently underway in Sub-Saharan Africa led by the IIEP Office in Paris.
