Governance by Assessment Data: How Far Does It Go?

Written on 24 Jul 20 by Ieva Raudonyte
Cross-national studies

Reposted from NORRAG Special Issue 03: Global Monitoring of National Educational Development: Coercive or Constructive? (Published October 2019) 

This article looks at how large-scale international assessments exercise soft power in global education governance. It argues that although learning data can inform education policies in meaningful ways, there are risks that international partners and governments need to consider when using such data, especially in developing countries.

From Descriptive Statistics to Comparison-driven Policies

The number of countries conducting national, regional, and international large-scale student assessments has significantly increased over the past two decades (UNESCO, 2015). There has been strong international support for this expansion. Multiple international actors have highlighted that possessing and effectively using reliable learning assessment data is essential to diagnose the health of the education sector, design appropriate strategies, trace hidden exclusions, foster stronger political engagement as well as reduce system inefficiencies (World Bank, 2018; UIS, 2017). While these data hold strong potential to inform decision-making for improved education policies, their use as centerpieces of policy is not a neutral technical exercise. It may considerably shape how education systems are analyzed and ultimately which aspects receive policy-makers’ attention (Breakspear, 2014).

The role and influence of international student assessments in global education governance have evolved over time. In the 1980s, international education statistics produced by the United Nations Educational, Scientific and Cultural Organization (UNESCO) received strong criticism for their descriptive nature. They did not provide country rankings on different indicators, and although the data were methodologically sound, related publications did not engage in more complex statistical analysis correlating systems’ inputs and outputs (Cussó and D’Amico, 2005). Organizations such as the Organisation for Economic Co-operation and Development (OECD) and the World Bank, mainly under pressure from the United States, insisted on the need to generate more comparative data about learning and to analyze the underlying political reforms (Cussó and D’Amico, 2005). The OECD itself experienced an evolution in its evaluation culture, shifting away from skepticism regarding international educational performance comparisons toward embracing them. This quest for comparative data was facilitated by a context in which education was quickly becoming a way to measure countries’ economic potential (Addey et al., 2017). Education had become a global currency of knowledge economies.

The growing influence of international large-scale learning assessments is therefore strongly driven by international comparisons that might provide a distorted view of reality, as they rank countries with varying levels of resources. Setting international standards through comparisons that put peer pressure on those being ranked is a strong tool for exercising influence (Martens, 2007). However, countries in league tables have different means at their disposal and varying technical capacities, which might distort the comparison: ‘cultural, contextual, and organizational characteristics prevent straightforward cross-national comparison of student achievement’ (Wiseman et al., 2010, p. 12). This is an important element to consider when relying on international comparative data.

The Legitimacy that Hard Data Represent

As a high-level policy actor in Uruguay recently said, ‘anything is legitimated as long as you start your sentence with “PISA says…”’ (Addey, 2018).

The analysis of learning assessment data and the recommendations that follow are increasingly used as a strong tool for legitimizing education reforms. These data are often perceived as an objective reality not subject to contestation, which constitutes a strong argument for policy-makers (Cussó and D’Amico, 2005). This scientific approach to policymaking is one of the main drivers of the OECD’s success. Policy-makers and the general public accept PISA as a legitimate proxy for education system performance (Breakspear, 2014), and the organization enacts soft regulation through its publications, studies, reports, and international comparisons, which enjoy a high reputation for the quality of their analysis (Morgan and Shahjahan, 2014).

However, policy-makers can also use learning data to support decisions made on other grounds. The use of assessment data sometimes appears as a solution in search of the right problem, justifying political agendas already in place (Fischman et al., 2018). Baird et al. (2011) cite the example of France, whose government in the past exaggerated the country’s poor performance in the Programme for International Student Assessment (PISA) to justify planned reforms refocusing on fundamentals, with an emphasis on literacy and science. In Uganda, there is some evidence that the government is likewise using UWEZO assessment data to support its agenda, as low assessment results were used to refuse higher pay to teachers (Elks, 2016). In such situations, a political decision precedes the recommendations coming from data analysis, which reduces the data’s ability to effectively inform education policies.

Definition of Education Goals through Measurement

‘What we choose to measure in education shapes what we collectively strive to achieve’ (Breakspear, 2014, p. 4).

What is measured in education systems matters, as it is likely to influence the way governments approach education reforms. Using the example of PISA, Breakspear (2014) argues that policy-makers start examining their systems through PISA lenses, which is likely to influence the definition of the end-goals of education. Meyer and Benavot (2013) share this view, noting that PISA does have the potential to change the goals and organization of national education systems. Changes in the curriculum are the most explicit examples of this influence: multiple countries (e.g. Korea, Mexico, Greece, Luxembourg) have revised their curricula to align them with the PISA framework and to include the competencies that PISA tests (Breakspear, 2012).

There is, therefore, a risk of narrowing education system goals down to the improvement of a set of international indicators. While these indicators can provide useful information on student performance in certain areas, they cannot be equated with the purposes of education systems (Breakspear, 2014). The definition of the end-goals of education requires broader democratic deliberation: ‘the discussion of educational end-goals involves ethical deliberation about what matters in education and what an educated person should be’ (Breakspear, 2014, p. 11). Likewise, Biesta et al. (2007, p. 18) explain that ‘a democratic society is precisely one in which the purpose of education is not given but is a constant topic for discussion and deliberation.’ However, international large-scale assessments tend to reduce this democratic space by putting pressure on countries to improve their scores on a set of comparable indicators.

Increasing External Pressure on Developing Countries

New US government policy mandates the State Department and USAID to demonstrate increases in the “percent of learners who attain minimum grade-level proficiency in reading at the end of grade 2 and at the end of primary school” in countries receiving US support. With an $800 million international basic education budget on the line, there are high stakes around how “minimum grade-level proficiency” is defined and measured (Bruns, 2018).

Developing countries are under increasing pressure to participate in international large-scale assessments, as their participation is often tied to development partners’ aid conditions. Financing agencies ask for learning data as a valuable benchmark to evaluate education progress (Addey et al., 2017). Recognizing this role, the Global Partnership for Education (GPE) included the availability of learning data (or a strategy to improve this availability) as a requirement in its funding model (GPE, 2017). Moreover, countries also use assessment data as evidence to obtain financial resources for projects aiming to increase student results (Addey et al., 2017; Shamatov & Sainazarov, 2006). Using empirical evidence, Kijima and Lipscy (2016) show that participation in international learning assessments is actually associated with an increase in foreign aid inflows to education.

In addition, participation in standardized learning assessments allows countries to demonstrate their adherence to global education values (Knight et al., 2012). The Education 2030 Agenda emphasizes both the importance of improved learning outcomes and their measurement, a focus strongly supported by development partners (GPE, 2017; World Bank, 2018). Participation in assessments is therefore valued as a process in itself, one that signals support for international standards (Addey et al., 2017).

Although learning data can inform education policies in meaningful ways, their growing influence in global education governance has not been accompanied by a systematic study of the risks that their use implies. Exploring these risks is crucial to making the best use of the potential that learning data hold to improve education policies. A new UNESCO International Institute for Educational Planning (UNESCO-IIEP) research project on the use of learning assessment data will provide new insights into some of these aspects. It will explore how learning data are used in the education planning cycle in a number of Sub-Saharan African and Latin American countries, analyzing elements linked to the political economy of the actors involved.

