Schools and universities are panicking about artificial intelligence (AI) and cheating. But AI poses a much bigger threat to equality in education.
Fear of cheating usually stems from a concern about fairness. How is it fair that one student spends weeks working on an essay while another asks ChatGPT to write the same thing in a few minutes? Worrying about giving every student a “fair start” is essential to preserving the idea of New Zealand as an egalitarian country.
But like the myth of the “American dream,” New Zealand’s egalitarian narrative masks more damaging inequalities, such as structural racism and the housing crisis – both of which have an outsized, and decidedly unfair, influence on today’s students.
These persistent inequalities dwarf the threat of AI cheating. Rather than hand-wringing over cheating, educators would do better to prepare for AI’s other inequalities, all of which are showcased in OpenAI’s latest large language model (LLM): GPT-4.
GPT-4 is here, for a price
GPT-4, which has refined guardrails and more parameters than ChatGPT, is touted as safer and more accurate than its predecessors. But there is a catch. GPT-4 costs US$20 per month.
For some, that price will be unimportant. But for those whose budgets are squeezed by skyrocketing inflation, it could be a deal breaker. AI technology has democratising potential – but only if you can afford it.
This digital divide places students and educational institutions in two camps: those with sufficient resources to reap the benefits of AI tools, and those without the same financial flexibility, who are left behind.
It may seem small now, but as the cost of AI tools rises, this digital divide could widen into an immense one. This should worry educators, who have long been concerned about how unequal access to learning technologies creates inequality among students.
Read more: Evolution, not revolution: Why GPT-4 is remarkable, but not groundbreaking
AI threatens indigenous languages and data
AI tools are also perpetuating the global dominance of English at the expense of other languages, especially oral and Indigenous languages. I recently spoke with a Microsoft executive who called these other languages “edge cases” – a term used to describe unusual inputs that cause problems for computer code.
But Indigenous languages are only a “problem” for AI tools because large language models learn from online datasets that contain little Indigenous content and an overwhelming amount of English content.
The dominance of English content online is no coincidence. English rules the internet because centuries of British colonisation and American cultural imperialism have made it the dominant lingua franca of global capitalism, education and internet discourse. From this perspective, other languages are not inferior to English; they just don’t make as much money as English language content.
But Māori speakers are rightly wary of attempts to commercialise their language. Too often the commercialisation of Indigenous knowledge does not benefit Indigenous people. It is therefore essential for Indigenous communities to maintain control over their own information, an idea known as Indigenous data sovereignty.
Read more: Growing numbers of non-Māori New Zealanders are embracing learning te reo – but there’s more to it than just language
Without Indigenous data sovereignty, billion-dollar tech companies could extract value from these so-called edge cases and later decide to stop investing in them.
For educators, these threats matter because AI tools will soon be embedded in Microsoft Office, search engines and other learning platforms.
At Massey University, where I teach, students can submit assignments in te reo Māori or in English. But if AI writing tools compose better in English than in te reo Māori, they put Māori language learners at a disadvantage. And if Māori language learners are forced to use tools that compromise Indigenous data sovereignty, that is also a problem.
Banning AI in education also creates inequality
While it is tempting to ban AI in education – as some schools, academic journals and even entire countries have already done – this too exacerbates existing inequalities. People with disabilities can benefit from communicating with AI tools. But like the laptop bans of earlier eras, AI bans deny students with disabilities access to key learning technologies.
Banning AI also penalises multilingual students who may struggle to write in English. AI tools can help multilingual learners master key English language genres, structures, prose styles and grammar – all skills that contribute to social mobility. Banning AI puts these students at a disadvantage too.
Read more: ChatGPT is the push higher education needs to rethink assessment
Instead of banning AI, teachers would be better off adapting their curricula, pedagogy and assessments for AI tools that will soon become ubiquitous. But revisions like this take more time and resources – something both school and university teachers have recently been striking over. Educational institutions must be willing to invest not only in AI tools, but also in the teachers who are essential to helping students think critically about how to use them.