The use of learning assessments in policy and planning

Written on 24 Oct 15 by Mioko Saito
Use of data


The lighter you are, the faster you can run a marathon. A marathon runner may measure his or her weight, body mass, and running time every day. But does measuring every day actually improve the runner's time in a full marathon?

Scale or running shoes?

Assessment results can answer many questions in which policy makers may be interested. Is the performance of our country improving or deteriorating over time? Has the gender gap in learning achievement in core subjects narrowed or widened? Which school resources and aspects of teacher background most influence the improvement of learning outcomes? Appropriate assessment instruments and sound survey methodology allow these questions to be answered. Evidence-based policy suggestions can then inform the agenda for policy development.

Since Goal 6 of Education for All (EFA) emphasized the importance of measurable learning outcomes, many countries around the world have adopted an assessment culture. IIEP was one of the implementing partners for Viet Nam's first large-scale national assessment of Grade 5 in 2001, which resulted in many policy suggestions. Subsequently, Viet Nam carried out the Grade 6 national assessment in 2007, 2011, and 2014. It also participated in EGRA for Grades 1 and 3 in 2012-2013 and 2013-2014, and implemented national assessments for Grade 9 in 2008-2009 and 2012-2013, and Grade 11 in 2011-2012 and 2014. Viet Nam also participated in PASEC for Grades 2 and 5 in 2011-2012 and in PISA for 15-year-old students in 2012, and it is planning to participate in TIMSS 2015.

Although Viet Nam has carried out many large-scale national assessments since 2001, the results made no impact on Viet Nam's policies. According to the Director of the Center of Educational Quality Evaluation, this was because different agencies with different interests kept these assessments unlinked, and the separate research reports were not widely shared among stakeholders. It took participation in a comparable international assessment for policy makers to realize the vital role of learning outcomes in policy making. Around 2010, changes in educational policy documents started to appear, focusing more on the learning progress of students and the important role of teachers' assessment skills. Changes in assessment orientation, from knowledge-based to competency-based, were also made. It was a pleasant surprise to see the 2013 PISA results place Viet Nam comfortably in the top quartile despite its low socio-economic level.

If you had 100 dollars, would you purchase a sophisticated new scale that measures weight in different units as well as body mass, sign up for a nutrition programme to control calories, or buy a good pair of running shoes in order to train seriously for a marathon? Measurement alone cannot achieve much unless it is paired with systematic analysis and action.

Designing useful assessments

Learning assessments need to be implemented in a systematic fashion to collect both the learning outcomes and the factors related to them, so as to inform education policy and practice with a view to improving the quality of learning for all. This is easy to say. However, we often hear the criticism that assessment data are not used much for policy purposes. This could be because the collected data do not address the priority policy concerns of Ministries of Education.

Technically speaking, the practice of learning assessment should be considered as only one small component of the entire policy reform cycle. Within a policy reform cycle, the policy concerns of decision makers should guide the design of learning assessments, which in turn provide policy suggestions that can be implemented. In this way, the results of learning assessments can be reflected in an education reform. It goes without saying that the involvement of decision makers at the design stage of the learning assessment is indispensable.

Before embarking on any international or regional learning assessment, educational decision makers would do well to review the match between the policy concerns of the country and the proposed assessment design.

For example, if the policy concern states "after the universal free primary education policy, we have more boys and girls in school, but we don't know if pupils at the end of the primary cycle have the reading comprehension necessary to continue their education at the secondary level", the target population would be the upper end of the primary level, and the test framework would be based on the school curriculum of this level. Burning issues such as gender differences or school resource shortages also need to be incorporated into the background questionnaires so that the results can be connected to learning achievement. This was the approach of the first Zimbabwe national assessment for Grade 6 in 1991, as well as of the first Viet Nam national assessment for Grade 5 in 2001.

On the other hand, if the policy concern states "after children enter secondary education, we are not sure if the young men and women are receiving the relevant education that is necessary for the world of work", then the target population would rather be the final stage of secondary education, and the competencies expected may or may not be based on the school curriculum. In both cases, assessments could be administered with a carefully selected sample of students in school. However, if the target population is based on age (rather than grade), and if not all students of a given age are at the same education level, or if some are already out of school, this may complicate matters for both data collection and the use of data for policy purposes. That is, the data collection will go beyond schools, which requires additional time, cost, and data-collection capacity. In addition, since the policy suggestions generated from the assessment data tend to be linked to school-related policies, evidence associated with out-of-school children may not contribute to any policy actions.

Information brokers interpret the data for policy-makers

Even if the policy concerns of decision makers are reflected in the assessment design, this does not guarantee that the policy suggestions will be integrated into the policy debate and then lead to an education reform. In fact, the research team that was involved in the assessment implementation (often composed of educational planners, EMIS officers, and colleagues from the examination unit) will have to play the additional role of "information broker". Information brokers not only respond to the original policy concerns, but also inform decision makers about other critical aspects of the data, which may or may not have been among the original policy concerns.

An example of information brokerage is documented in the Seychelles case study. In 2000, the findings of the SACMEQ data indicated that there were very large differences among pupils within the same school, in all schools of Seychelles. This was because in Seychelles, a practice of streaming pupils into clusters based on ability was taking place, although it was not recommended in the policy. In addition, since this practice commenced at Grade 1, based on criteria associated with socially accepted norms of behaviour, the gender difference (in favour of girls) in learning achievement had widened by the time boys and girls reached Grade 6. That is, there were more girls in the elite classes and more boys in the lower-ability classes.

Following these findings, the Seychelles research team came up with the following policy suggestion: "The Ministry of Education should enlist the help of the head teachers and teachers to implement a policy against streaming and to develop strategies to promote mixed-ability teaching in the primary school". Here it is important that the policy suggestion be evidence-based, feasible, and hold 'someone accountable'.

However, the policy actions leading to the implementation of the de-streaming policy at Grade 1 were not linear. On the one hand, the research team was heavily involved in consultation and policy debate with stakeholders. This included consultation with the Minister, using cross-national comparison as an 'energizer' for wider research dissemination, and more dialogue with the teachers and school heads who were in the 'field'. At the same time, the research team was also involved in the policy reform itself. This included setting up a 'de-streaming' working committee, formulating a policy memorandum, and developing a plan to implement the policy. Strategies included the development of teacher training for mixed-ability teaching, the mobilization of parental support, and the strengthening of monitoring mechanisms. The implementation of the de-streaming policy commenced at Grade 1 in 2003. The impact of de-streaming in Seychelles is expected to be seen in the next SACMEQ report.
