Tom Garside

Four ways to ensure ESOL students use AI ethically

Trinity CertTESOL

With ongoing developments in generative AI, the nature of English language study is changing. Institutions worldwide are working hard to find solutions to the potential risks to academic integrity, and ways to encourage students to harness this new tool as a resource for their study.

The balance between these two sides of the issue is fragile: students' AI use may be encouraged by forward-thinking schools and universities, but at the same time, that use needs to be ethical and to uphold the academic standards necessary for an effective demonstration of learning.

The changing role of language accuracy work

For language educators, there is an additional challenge: with students using a tool such as GenAI, which can produce highly accurate (if not always authentic or reliable) content, what role does language education play in their futures? If accurate content can be produced in seconds in answer to almost any question, how can we ensure that students use it productively and ethically, and retain authorship of their own work?

This depends not on whether students use AI in their work, but on how it is employed when they do. As a translation tool, it can be effective and save a lot of time for learners researching content in a second language; using it to correct language accuracy in student essays, however, may go beyond the traditional spelling and grammar checks that we all use every day and become an unacceptable use of AI.

AI - a tool or a substitute for learning?

To define the ways in which students use GenAI (or any other learning tool, for that matter), there is a useful distinction to draw: when the student uses this resource, are they using it as a substitute for learning that they would otherwise have to do themselves, or are they using it to aid their learning, working with it as a reference to produce their own authored work?

The first approach can be defined as substitutive use of AI. An example of this would be a student who feels that they can’t write a third paragraph in an essay, as they don’t have the ideas or language to do so effectively, so they ask ChatGPT to write it for them. They copy and paste the text into their essay and submit it without review. In this case, ChatGPT is being used as a substitute for the learning which the student needed in order to fulfil that task themselves.

The second approach can be defined as facilitative, where the GenAI system is used to facilitate the student's learning. An example of this would be if the same student, having trouble thinking of a third paragraph idea, turned to ChatGPT to ask about topics related to the essay prompt they are writing to. From the response, the student could choose one topic that fits the essay and ask ChatGPT to write a paragraph outline for that topic. Finally, the student could evaluate the AI-generated paragraph, decide that as it stands it does not fit the third-paragraph position in the essay, and, using the AI-generated idea and basic paragraph structure, rewrite it to fit the tone and flow of ideas in their essay. The AI-generated content may end up being unrecognisable as such, as it would fit into the student's essay well and develop its ideas in the same way as the rest of the essay, with appropriate examples produced by the student.

In the second example above, the student has employed much more than language skills in writing this paragraph. They have turned to a research resource to gather ideas, evaluated the ideas generated, and synthesised those ideas and the language produced in order to connect it all into the text they were writing. They may well have learnt some ways of developing ideas, and some language used to develop paragraph ideas, along the way. In this way, the AI has been used as a facilitative tool, not a short-cut or a substitute to which the work of thinking or learning is delegated.

Enforcing facilitative use of AI

Given the choice, you might say, any student is more likely to reduce their workload by using a new resource such as AI in a substitutive way. How can we stop them? Well, the best way to guide student behaviour in learning or assessment situations is through the design of the tasks that we set.

By instructing students to perform the language tasks we set in certain 'AI-proof' ways, we can encourage more productive and academically ethical use of the technology. Here are some ways of limiting AI to facilitative uses, rather than leaving the door open to any use, including substitutive study methods:

AI-independent planning and strategy work

One way of side-stepping AI is to alternate between the planning and producing stages of a task, leading any planning or preparation stages through traditional pen-and-paper work. When students draw mind maps, planning charts or tables, or draw up and select from lists of ideas, essay plans and so on, the strategic thinking about how the task will be performed is done by the learner, so the learner is more likely to follow it through with their own work.

If AI is employed in later steps of the process, the final result of the task can be compared with the original paper-based plan, prompting a discussion about the changes to the content and how the work was produced. This turns even unethical use of AI into a more ethical discussion about the use of new resources.

Portfolio assessment and process-based tasks

Another assessment method which is resistant to GenAI interference is the assessment of ongoing work built into a portfolio. Smaller chunks of work can be gathered and stored for regular assessment throughout a period of study, with a final grade given for progression in specific language or skills areas. If GenAI is used throughout the portfolio, the grade will naturally be low, as progress will be limited, even if the content of the portfolio is entirely accurate in terms of grammar and vocabulary.

Even in product-focused writing assessments, tasks can be worded to include evidence of the process followed to reach the final written piece. Asking students to include their notes, or comments on how and why they chose to write the work in this way, helps to focus on aspects of their work which cannot be generated by AI. After all, AI does not think; it just produces language to order. By asking students to justify the thinking that went into a piece of work, assessment goes beyond what AI is currently capable of.

Personal and reflective content

Another way of working beyond what is possible through GenAI is to cap tasks that learners perform in or out of class with AI-independent reflection. Asking students to prepare and present a spoken reflection, or an in-class written piece, on what they found easy, difficult, challenging or interesting about the task process will ensure a more facilitative approach. A genuine reflection on work produced entirely by GenAI is not possible, and even if it were attempted, it would not be convincing.

Tasks which focus on personal experience, or on issues local to the learner group, are another way of AI-proofing lesson content. AI tends to produce generalities and becomes less convincing the more it is focused on personal, specific and human issues. By contrast, these issues make good starting points for AI-independent discussion and project-based tasks, so they can be another way of preventing substitutive use of AI.

Focus on style, tone and organisation of message

Finally, while AI excels at creating texts with 'perfect' vocabulary and grammatical accuracy, it falls down in its use of tone, depth of argumentation, and the specific 'voice' which runs through a text. Focusing on these aspects of communication, and weighting them as assessment points, will discourage students from delegating their development to AI.

If AI becomes a regular fixture in the workplaces of the near future, the biggest successes will emerge from achievements made by humans. These are the achievements which will stand out: the personal touch, the empathy and generosity shown by employers and businesses to their clients, and the creative thinking which goes beyond the boxed responses that AI is capable of.

The thinking and communication skills which will bring this level of human touch to the workplace and the classroom come from effective learning in the ways suggested above. By developing a focus on personal, process-based, strategic and reflective thinking, we can train our students to succeed with or without support from Artificial Intelligence, whatever role it plays in their futures.

Language Point Teacher Education Ltd. delivers the internationally recognised RQF level 5 Trinity CertTESOL over 12 weeks, part-time in an entirely online mode of study, and level 6 Trinity College Certificate for Practising Teachers, a contextually-informed teacher development qualification with specific courses which focus on online language education or online methodology.

If you are interested in knowing more about these qualifications, or you want to take your teaching to a new level with our teacher development courses, contact us or see our course dates and fees for details.

Upcoming online training courses:

Level 5 Trinity CertTESOL (12 weeks online):  July 1st - September 20th, 2024

Level 6 Trinity CertPT (10 weeks online): September 2nd - November 8th, 2024
