Publications: Focus on: Does good evidence make good education policy?

From Eurydice


Date of publication: 23 June 2016

'Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted' – Albert Einstein

In recent decades there has been a strong move towards evidence-based policy making in the public sector. Few education policy-makers today would dare to introduce a reform that is not backed up by data and research evidence. But what is the nature of evidence being used, and can we rely on it to make better policy?

New Public Management (NPM) theory, bringing private sector business practices into the public sector, has been key to the development of evidence-based policy. Rather than bureaucratic, central-level decision-making, NPM argues that decisions affecting public services are best taken by professional managers as close to citizens as possible. The central level's role is then to build accountability systems – measuring performance and assuring quality – and to support competition among service providers to reduce costs and drive up quality.

At the international level, measuring and comparing educational outcomes has been most successfully developed by the OECD and IEA, through well-known surveys such as PISA (testing 15 year olds), TIMSS (eighth graders), and PIAAC (adult skills). These surveys measure the achievement of students in core areas, and have been used to assess education systems, highlighting comparative strengths and weaknesses. The purpose is to entice Member States – especially the low performers – to implement education reforms that hopefully take account of experience in better performing countries.

There is no doubt that these surveys have had positive effects on educational debate. Discussions backed up by information have become more scientific and less based on personal beliefs. Testing has also come to the forefront of national debates. The recent Eurydice report on national tests in languages, and the 2009 report on the use of national tests in general, both highlight that national testing has increased dramatically since the late 1990s.

But have education policies improved as a result? No doubt in some ways they have, as policy actions can be assessed through an analysis of the likely quantitative consequences of different actions. But there have also been unintended side effects. 

A good example can be found in the field of foreign languages, where there has been an explosion in testing since 2000. While most countries emphasise that all competences (writing, reading, listening and speaking) are of equal importance, assessing speaking competences poses specific challenges. Hence, in national tests, speaking is generally the least assessed competence. If policy is informed by the evidence of such test results, the danger is that communication competences may be overlooked. 

Similar problems, whereby taught content is greatly influenced by what will be tested (known as the washback effect) arise in other subject fields. There is a risk that teaching becomes too focused on helping students pass the assessment tasks. Another potential side effect is that students focus more on their own individual performance, becoming more competitive, and neglect cooperation and contributing to the success of groups. 

The OECD's Teaching and Learning International Survey (TALIS) appears to support this. Teachers in countries that perform well in PISA (such as Finland, Korea, Singapore, Poland and Belgium (Flanders)) report using active teaching practices less often than teachers in other countries. Their students work less often in small groups, spend less time on projects lasting longer than one week, and use ICT less for projects and class work than students in other countries.

So could it be that some forms of assessment are keeping school education in a straitjacket? Are competences such as active citizenship, communication, co-operation and entrepreneurship insufficiently developed because they are too difficult to test? Or because they are squeezed out of the curriculum by subjects whose competences can more easily be measured?

Eurydice's report on entrepreneurship education indicates exactly this. While there is broad recognition that entrepreneurship is an essential competence in the current economic context, countries report difficulty at school level in defining learning outcomes and building assessment systems.

Recently, however, some countries have taken action in a different direction. Finland, despite being one of the best performers in PISA, has decided to reform its entire compulsory education system, recognising that maths, reading and science are not enough in a fast-changing world. Curriculum planners want all children to re-discover the joy of learning, to take an active role in their own learning, and to co-operate in project work in a positive school climate.

Perhaps it is time to consider whether the explosion of assessment tests as a basis for quantitative evidence-based policy making and accountability comes at too high a cost. As we generate more and more educational data, are we forgetting that not everything that counts can be measured? Tests, indicators and monitoring systems should not come at the expense of the joy of learning. Developing the creativity and innovative capacity of students and teachers alike is essential if we are to be prepared for tomorrow's societal challenges. 

Authors: Lars Jakobsen and David Crosier