What are the different purposes of language tests, how can they help learners to develop language proficiency and how can fair assessment be assured? We spoke to Johanna Motteram, project manager in the British Council’s Assessment Solutions team, about these questions. 

  • What role has the British Council played in language test development (the history of IELTS and Aptis)?   

The British Council has been involved in the international language test development community since 1941, when it signed an agreement with the University of Cambridge Local Examinations Syndicate to develop and deliver English language tests in partnership. Since then, the British Council has employed senior academics and sponsored significant projects in language assessment. The British Council was central to the development of IELTS, and its Aptis test, a ground-breaking computer-based language test, celebrates its 10th birthday this year. 

In addition to language test development, the British Council has undertaken extensive research into how language tests influence teachers, learners, textbook publishers, and many other stakeholders in education systems. These investigations inform the advice we give to governments and institutions around the world on how best to bring about change in language education: by considering teaching and learning, assessment, curriculum, and standards, and by communicating with all stakeholders. 

  • What do IELTS and Aptis aim for?  How do they support English education and the development of practical English ability? 

Skills tests like IELTS and Aptis reflect the way that we use language in everyday life; we speak and listen, and we write and read. We know from research that introducing assessment in speaking and writing is important if we want students and teachers to spend time and energy developing speaking and writing skills because students and teachers pay attention to what is assessed. 

When the tasks in tests include the kinds of language functions we need to get things done with other people, students and teachers practise those functions inside and outside the classroom. In this sense, including speaking and writing tasks that reflect the ways we use language with other people can help teachers and students focus on authentic and practical use of language; in other words, on useful language learning. 

When young people have access to English as a useful tool for communication, they can then start making connections around the world. Online communication, international school and university exchanges, and eventually participating in the global economy are all more possible with confident English speaking and writing skills. 

One task we use in speaking tests is asking candidates to describe a picture of some familiar thing or experience. Describing things and experiences is useful in many scenarios outside the classroom, so learning how to do it well helps learners in many different circumstances: making friends by talking about themselves, helping other people find or choose things, or sharing examples to convince people of their ideas. 

  • How can we fairly assess 4 skills in a big group? 

For the productive skills of Speaking and Writing, we essentially ask each candidate to do very similar things, record them doing those things, and then evaluate how well they did against the same criteria. For Speaking, for example, we ask them all to describe an unseen picture of something familiar, listen carefully to how they complete the task, and then link their performance to a set of criteria. 

These criteria are based on many years of development and use, and describe different levels of spoken proficiency. We train our examiners in how to use them to evaluate performances, and then test the examiners' application of the criteria. Once examiners can demonstrate consistent and reliable application of the criteria, we let them start evaluating students' performances. We have many different ways to monitor examiners' performance, including using two or three examiners for each performance, randomly including performances that have already been marked by many examiners, periodic double checking, and statistical modelling. We use combinations of these methods depending on how the score from the test is going to be used: the more important the decision being made based on the score, the more careful we are with monitoring our examiners. 
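The seeded-anchor check described above can be sketched in a few lines. This is a hypothetical illustration only: it assumes a numeric band scale and simple gap thresholds, whereas the monitoring described in the interview combines several methods, including statistical modelling. The function name, scale, and thresholds are invented for the example.

```python
def check_examiner_reliability(examiner_scores, consensus_scores,
                               max_mean_gap=0.5, max_single_gap=1.0):
    """Compare an examiner's scores on seeded anchor performances
    (performances already marked by many examiners) against the
    consensus scores. Returns (is_reliable, mean_gap).

    Assumed thresholds: the examiner passes if their average deviation
    from consensus stays within max_mean_gap bands and no single score
    deviates by more than max_single_gap bands.
    """
    gaps = [abs(e - c) for e, c in zip(examiner_scores, consensus_scores)]
    mean_gap = sum(gaps) / len(gaps)
    is_reliable = mean_gap <= max_mean_gap and max(gaps) <= max_single_gap
    return is_reliable, mean_gap

# Example: an examiner scores five anchor performances on a 0-5 band scale.
ok, gap = check_examiner_reliability(
    examiner_scores=[3, 4, 2, 5, 3],
    consensus_scores=[3, 4, 3, 5, 3],
)
print(ok, gap)  # one score differs by a single band; mean gap is small
```

In practice an examiner who drifts outside the agreed tolerance would be retrained or removed from the marking queue rather than simply flagged, but the core idea, comparing scores on pre-marked performances against an agreed standard, is the same.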

Regarding large groups: at the British Council we've been focussed for quite some time on how to use technology to solve practical challenges in assessment, and with Aptis we have 10 years of experience using computer-based testing to assess speaking and writing quickly and reliably. We have a global network of trained examiners available 24/7/365 to work through the marking queues of test candidates from around the world. We use secure internet connectivity to connect candidate performances, examiners, and our certification systems, so we can return results within a couple of days. When we have a very large group of candidates it takes a little longer and we have more examiners working, but we use the same tried and tested principles. 

  • Some people in Japan say it is difficult to score speaking skills fairly.  Can we assess them? 

My first reaction to this question is to say that yes, it is difficult to score speaking skills fairly if you have not been trained to do it, and if you do not have clear guidelines about how to do it. That’s why it is important to involve experts in assessing language skills. 

At the British Council we have very strict training and monitoring for our examiners. We only employ experienced English language teachers with very high levels of proficiency to become examiners. We then train the new examiners and check their scoring reliability. When we confirm their consistency, we allow them to start scoring. We then monitor their reliability every time they score any test performances. In the Aptis test we randomly include performances that have already been marked by many examiners, to make sure each examiner is scoring reliably and in alignment with the criteria. 

The criteria we score speaking skills against are linked to international standards, which have been developed over many years. They describe how learners can use language at different levels, with no reference to native speakers or to first languages. They focus on what candidates can do with language, and reward attempts to communicate. 



Johanna Motteram works in the Global Assessments team in the British Council. Her role involves conceptualizing, delivering and evaluating assessment solutions for institutional and government clients. Recent projects she has been involved with include the Workplace Literacy and Numeracy (WPLN) assessments development for SkillsFuture Singapore and a bespoke IELTS preparation course for Health Education England (an agency of the UK NHS) in collaboration with British Council English and Exams in India. Johanna’s background is as an English language teacher and academic. She has taught English and Applied Linguistics, and researched aspects of teaching, learning and assessment in a variety of contexts. Her current passion is playing tennis, but when she has to work for a living, she is very happy to work on impactful projects which leverage the power of tests for positive educational and societal change.