Reposted from NORRAG Special Issue 03: Global Monitoring of National Educational Development: Coercive or Constructive? (Published October 2019)
The article looks at how large-scale international assessments exercise soft power in global education governance. It argues that although learning data can inform education policies in meaningful ways, there are risks that international partners and governments need to consider when using it, especially in developing countries.
From Descriptive Statistics to Comparison-driven Policies
The number of countries conducting national, regional, and international large-scale student assessments has significantly increased over the past two decades (UNESCO, 2015). There has been strong international support for this expansion. Multiple international actors have highlighted that possessing and effectively using reliable learning assessment data is essential to diagnose the health of the education sector, design appropriate strategies, trace hidden exclusions, foster stronger political engagement as well as reduce system inefficiencies (World Bank, 2018; UIS, 2017). While these data hold strong potential to inform decision-making for improved education policies, their use as centerpieces of policy is not a neutral technical exercise. It may considerably shape how education systems are analyzed and ultimately which aspects receive policy-makers’ attention (Breakspear, 2014).
The role and influence of international student assessments in global education governance have evolved over time. In the 1980s, international education statistics produced by the United Nations Educational, Scientific and Cultural Organization (UNESCO) received strong criticism for their descriptive nature. They did not provide country rankings on different indicators and, although the data were methodologically sound, related publications did not engage in more complex statistical analysis correlating systems’ inputs and outputs (Cussó and D’Amico, 2005). Organizations such as the Organisation for Economic Co-operation and Development (OECD) and the World Bank, mainly under pressure from the United States, insisted on the need to generate more comparative data about learning and to analyze the underlying political reforms (Cussó and D’Amico, 2005). The OECD itself experienced an evolution in its evaluation culture, shifting away from skepticism regarding international educational performance comparisons toward embracing them. This quest for comparative data was facilitated by a context in which education was quickly becoming a way to measure countries’ economic potential (Addey et al., 2017). Education had become a global currency of knowledge economies.
The growing influence of international large-scale learning assessments is therefore strongly driven by international comparisons that may provide a distorted view of reality, as they rank countries with varying levels of resources. Setting international standards through comparisons that put peer pressure on those being ranked is a powerful tool for exercising influence (Martens, 2007). However, countries in league tables have different means at their disposal and varying technical capacities, which can distort the comparison: ‘cultural, contextual, and organizational characteristics prevent straightforward cross-national comparison of student achievement’ (Wiseman et al., 2010, p. 12). This is an important element to consider when relying on international comparative data.
The Legitimacy that Hard Data Represent
As a high-level policy actor in Uruguay recently said, ‘anything is legitimated as long as you start your sentence with “PISA says…”’ (Addey, 2018).
The analysis of learning assessment data, and the recommendations that follow from it, are increasingly used as a strong tool for legitimizing education reforms. These data are often perceived as an objective reality not subject to contestation, which constitutes a strong argument for policy-makers (Cussó and D’Amico, 2005). This scientific approach to policymaking is one of the main drivers of the OECD’s success. Policy-makers and the general public accept PISA as a legitimate proxy for education system performance (Breakspear, 2014), and the organization enacts soft regulation through its publications, studies, reports, and international comparisons, which have a high reputation for the quality of their analysis (Morgan and Shahjahan, 2014).
However, policy-makers can also use learning data to support decisions made on other grounds. The use of assessment data sometimes appears as a solution in search of the right problem, justifying political agendas already in place (Fischman et al., 2018). Baird et al. (2011) cite the example of France, whose government in the past exaggerated the country’s poor performance in the Programme for International Student Assessment (PISA) to justify planned reforms refocusing on fundamentals, with an emphasis on literacy and science. In Uganda, there is some evidence that the government is similarly using Uwezo assessment data to support its agenda, as low assessment results were used to refuse higher pay to teachers (Elks, 2016). In these situations, a political decision precedes the recommendations coming from data analysis, which reduces the data’s ability to effectively inform education policies.
Definition of Education Goals through Measurement
‘What we choose to measure in education shapes what we collectively strive to achieve’ (Breakspear, 2014, p. 4).
What is measured in education systems matters, as it is likely to influence the way governments approach education reforms. Using the example of PISA, Breakspear (2014) argues that policy-makers start using PISA lenses to examine their systems, and this is likely to influence the definition of the end-goals of education. Meyer and Benavot (2013) share this view, noting that PISA has the potential to change the goals and organization of national education systems. Changes to the curriculum are the most explicit examples of this influence: multiple countries (e.g. Korea, Mexico, Greece, Luxembourg) have revised their curricula to align with the PISA framework and to include the competencies that PISA tests (Breakspear, 2012).
There is, therefore, a risk of narrowing education system goals down to the improvement of a set of international indicators. While such indicators can provide useful information on student performance in certain areas, they cannot be equated with the purposes of education systems (Breakspear, 2014). The definition of the end-goals of education requires broader democratic deliberation: ‘the discussion of educational end-goals involves ethical deliberation about what matters in education and what an educated person should be’ (Breakspear, 2014, p. 11). Likewise, Biesta (2007, p. 18) explains that ‘a democratic society is precisely one in which the purpose of education is not given but is a constant topic for discussion and deliberation.’ However, international large-scale assessments tend to reduce this democratic space by putting pressure on countries to improve their scores on a set of comparable indicators.
Increasing External Pressure on Developing Countries
New US government policy mandates the State Department and USAID to demonstrate increases in the “percent of learners who attain minimum grade-level proficiency in reading at the end of grade 2 and at the end of primary school” in countries receiving US support. With an $800 million international basic education budget on the line, there are high stakes around how “minimum grade-level proficiency” is defined and measured (Bruns, 2018).
Developing countries are under increasing pressure to participate in international large-scale assessments, as their participation is often tied to development partners’ aid conditions. Financing agencies ask for learning data as a valuable benchmark to evaluate education progress (Addey et al., 2017). Recognizing this role, the Global Partnership for Education (GPE) included the availability of learning data (or a strategy to improve this availability) as a requirement in its funding model (GPE, 2017). Moreover, countries also use assessment data as evidence to obtain financial resources for projects aiming to improve student results (Addey et al., 2017; Shamatov & Sainazarov, 2006). Using empirical evidence, Kijima and Lipscy (2016) show that participation in international learning assessments is indeed associated with an increase in foreign aid inflows to education.
In addition, participation in standardized learning assessments allows countries to demonstrate their adherence to global education values (Knight et al., 2012). The Education 2030 Agenda emphasizes both the importance of improved learning outcomes and their measurement, which is strongly supported by development partners (GPE, 2017; World Bank, 2018). Participation in assessments is therefore valued as a process in itself, one that signals support for international standards (Addey et al., 2017).
Although learning data can inform education policies in meaningful ways, their growing influence in global education governance has not been accompanied by a systematic study of the risks their use implies. Exploring these risks is crucial to making the best use of the potential that learning data hold to improve education policies. A new UNESCO International Institute for Educational Planning (UNESCO-IIEP) research project on the use of learning assessment data will provide new insights into some of these aspects. It will explore how learning data are used across the education planning cycle in a number of Sub-Saharan African and Latin American countries, analyzing elements linked to the political economy of the actors involved.
References
- Addey, C. (2018). ‘Why does PISA appear to be everyone’s solution?’ Laboratory of International Assessment Studies Blogs.
- Addey, C., Lingard, B., Sellar, S., Steiner-Khamsi, G. & Verger, A. (2017). ‘The rise of international large-scale assessments and rationales for participation’. In: Compare: A Journal of Comparative and International Education, 47 (3): 434-452.
- Baird, J., Daugherty, R., Isaacs, T., Johnson S., Sprague, T., Stobart, G. & Yu, G. (2011). Policy effects of PISA. Report Commissioned by Pearson UK.
- Beaton, A., Postlethwaite, T., Ross, K., Spearritt, D. & Wolf, R. (1999). The benefits and limitations of international educational achievement studies. Paris: IIEP-UNESCO.
- Biesta, G. (2007). ‘Why “What Works” Won’t Work: Evidence-based practice and the democratic deficit in educational research’. In: Educational Theory, 57 (1): 1-21.
- Breakspear, S. (2012). ‘The Policy impact of Pisa: An exploration of the normative effects of international benchmarking in school system performance’. Working Paper. Paris: OECD.
- Breakspear, S. (2014). ‘How does PISA shape education policy making? Why how we measure learning determines what counts in education’. Seminar Series Paper No. 240. Centre for Strategic Education.
- Bruns, B. (2018). ‘Three Years after SDG Adoption: It’s Time for Action on Learning Data’. Center for Global Development Blogs.
- Cussó, R. & D’Amico, S. (2005). ‘From development comparatism to globalization comparativism: towards more normative international education statistics’. In: Comparative Education, 41(2): 199-216.
- Elks, P. (2016). ‘The impact of assessment results on education policy and practice in East Africa’. Think piece. London: Department for International Development.
- Fischman, G., Goebel, J., Holloway, J., Silova, I. & Topper, A. (2018). ‘Examining the influence of international large-scale assessments on national education policies’. In: Journal of Education Policy.
- Froese-Germain, B. (2010). ‘The OECD, PISA and the impacts on educational policy’. Virtual Research Center.
- Global Partnership for Education (GPE). (2017). How GPE supports teaching and learning. Policy brief.
- Hamilton, M. (2017). ‘How International Large-Scale Skills Assessments engage with national actors: mobilizing networks through policy, media, and public knowledge’. In: Critical Studies in Education, 58(3): 280-294.
- Kellaghan, T., Greaney, V. & Murray, T. (2009). Using the results of a national assessment of educational achievement. Volume 5. Washington DC: World Bank.
- Kijima R. & Lipscy, P. (2016). ‘The Politics of international testing’. Paper prepared for the “Assessment Power in World Politics” conference hosted by Harvard University and International Organization, May 6-7, 2016 and the associated APSA mini-conference, September 2, 2016.
- Knight, P., Lietz, P., Nugroho, D. & Tobin, M. (2012). ‘The Impact of national and international assessment programmes on educational policy, particularly policies regarding resource allocation and teaching and learning practices in developing countries’. EPPI-Centre, University of London.
- Martens, K. (2007). ‘How to become an influential actor – The ‘comparative turn’ in OECD education policy’. In: K. Martens, A. Rusconi & K. Leuze (Ed.), New arenas of education governance. The impact of international organizations and markets on education policy making (pp.41-55). New York: Palgrave.
- Meyer, D. & Benavot, A. (2013). PISA, power, and policy. The Emergence of global educational governance. Symposium Books.
- Morgan, C. & Shahjahan, R. (2014). ‘The legitimation of OECD’s global educational governance: examining PISA and AHELO test production’. In: Comparative Education, 50 (2): 192-205.
- Shamatov, D. & Sainazarov, K. (2006). ‘The impact of standardized testing on education quality in Kyrgyzstan: The case of the Program for International Student Assessment (PISA) in 2006’. In: International Perspectives on Education and Society, 13, 145-179.
- UNESCO Institute for Statistics (UIS). (2017). More than one-half of children and adolescents are not learning worldwide. Factsheet No. 46. Montreal: UIS.
- UNESCO. (2015). EFA Global Monitoring Report. Education for All 2000-2015: achievements and challenges. Paris: UNESCO.
- Wiseman, A., Whitty, G., Tobin, J. & Tsui, A. (2010). ‘The uses of evidence for educational policy-making: global contexts and international trends’. In: Review of Research in Education, 34, 1-24.
- World Bank. (2018). Learning to realize education’s promise. World Development Report.
