Are you testing your students' true language proficiency? Knowledge vs performance-based testing



Students taking a language assessment

Following on from my recent post about language assessment, another distinction between types of testing emerges when we consider the difference between knowledge and performance. It is fair to say that most traditional language testing that goes on in schools around the world is biased towards knowledge of vocabulary, grammar and other features of language rather than the ability to use them appropriately in speech or writing. One reason for this is the amount of time it takes to mark student-produced work as compared to single-word, sentence or multiple choice answers.


However, the primary purpose of language is to communicate - to use the words and structures that we know to interact with others. This is the principle behind performance-based testing: assessment which asks test-takers to do something with the language they have learnt. Performance can be tested through longer stretches of speech or writing, or through purposeful reading and/or listening activity, performed under relatively free conditions. Once students are required to write or speak at length on set topics, or to fulfil planned communicative functions, their true proficiency with language can be assessed.


Types of performance-based testing


Most performance-based tests focus on the skills of speaking and writing, as these productive skills are where language is organised and put forward by test-takers. Written language exams commonly feature essays of different genres (argumentation, compare/contrast, analysis, agree/disagree…) and can be standardised around formulaic question types and easily marked at a distance by examiners.


Spoken examinations, however, often require an examiner to be present for the assessment. Speaking is an interactive skill which operates in real-time. We speak in response to others’ ideas, questions and comments, so it follows that a good spoken language exam should be based on the same principles of language use, and should incorporate some kind of listening activity to support speaking.


Examples of spoken exam format


The Cambridge IELTS speaking component sets a single candidate with a single examiner, who asks questions which are answered according to the expected framework of the test. Candidates typically know when a short answer is required, when the examiner will ask certain types of question, and when they are freer to develop their ideas more spontaneously. While this type of test assesses language accuracy and performance to some degree, it does not create a very authentic speaking situation - learners rarely sit and get interviewed on a topic in this way outside of testing and assessment.


Cambridge suite exams put candidates into groups with an interlocutor (examiner) who prompts discussion between them on specific topics, or aspects of topics. This is more realistic when we consider university study groups, work interaction or settings where groups of people come together to plan or organise something. This exam tests test-takers' ability to negotiate interaction, collaborate to reach conclusions and share the talking time between them. Again, these are valuable speaking skills and the resulting assessment can be communicatively broader than that of IELTS. However, the topic and interactive setting are still quite restricted to the exam situation itself (the test construct), and there is still an element of 'box-ticking' in the test-takers' speech, which can feel artificial, especially given that they may have prepared for the test using similar structures of talk in their classes leading up to the test itself.


Towards more holistic performance testing


An emerging trend in performance-based language testing is to remove some of these artificially imposed restrictions, allowing test-takers to demonstrate broader communicative competences and interaction skills which go beyond the types of language and behaviour designed into the assessed tasks.


An example of this is Trinity College London's GESE and ISE suite of exams. In all sections of these exams, the speaking component is designed around the test-taker's initiation of interaction. The candidate brings an item, picture or other stimulus to present to the examiner, who has no idea what will appear in front of them in that test. The effect of this is genuine questions, reactions and comments from the examiner, which prompt further, more authentic interaction from the test-taker. Assessment is still based on fixed criteria, but according to a framework which brings in social, cognitive, linguistic and strategic competences. These competences can be developed in preparation classes, but cannot be drilled in the same way as an IELTS, TOEFL or Cambridge examination.


Another example of more holistic assessment might be a group presentation, where several test-takers have to collaborate to produce a presentation on a given topic. If they are allowed the freedom to communicate their ideas in any way they like, drawing on the strengths of the different members of the group, and the presentation is followed by a genuine, authentic Q and A session with an examiner, assessment can be based on a greater package than simply an individual's speech. Assessment criteria can be designed for the collaborative preparation stage (including research skills, management of interaction, organisation and role-setting), the delivery stage (including presence, appropriacy of language, use of visual or other support, engagement, etc.) and the post-delivery stage (spontaneous response, initiative, reference to the presentation, etc.). This type of assessment, commonly used in university EAP courses, mirrors quite closely the expectations of any university student working towards a presentation in class.


In summary, performance-based testing goes beyond the typical language exam, with its multiple choice answers and notion of 'correct' or 'incorrect' responses designed by the assessors, and moves into a more context-specific, interpersonal and authentic form of testing. The closer the assessment mirrors real-life situations, the more reliable the result as a predictor of the test-takers' future use of English in their lives.

Tom Garside is Director of Teacher Training for Language Point Teacher Education. Language Point delivers the internationally recognised RQF level 5 Trinity CertTESOL in a totally online mode of study, and the new RQF level 6 Trinity College Certificate for Practising Teachers, a contextually-informed teacher development qualification with a focus on specific ESOL contexts, including exam preparation and assessment literacy.


If you are interested in knowing more about these new qualifications, or you want to take your teaching to a new level with our teacher education courses, contact us or visit our course pages for details.


