Professor Balbir Dean, Academic Dean of the Faculty of Science and Technology, explains why academia should embrace rather than fight against the AI platform
For academics not yet up to speed with emerging AI platforms, ChatGPT is a conversational chatbot built on a generative pre-trained large language model – a model trained to predict the next token in a sequence of natural language. The scale is striking: the model was trained on a dataset of almost 1 trillion words.
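To make "trained to predict the next token" concrete, here is a deliberately tiny sketch – a bigram counter, not ChatGPT – that predicts the next word by counting which word most often follows the current one in a toy corpus. Large language models perform the same basic task, but with neural networks and vastly larger datasets; everything in this snippet is illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would train on hundreds of billions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often after `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

The point of the sketch is only that "prediction" here means statistical continuation of text, which is why every request can generate a plausible but unique response.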
Google searches for ChatGPT have shot up since the platform's release in November 2022, and it has been the subject of sustained media attention. Up and down the country, academic staff rooms (both virtual and real) have been in various stages of shock as faculty attempt to establish what it means for course design, student learning and quality assurance. There are already early cases of academic misconduct involving the use of ChatGPT for student assignments.
It presents us with real challenges, as it could make cheating and misconduct easier. Will the shift away from exams be halted in its tracks as universities aim to ensure quality control and stamp out dishonest behaviour?
I believe neither burying our heads in the sand nor yearning to go back to what we had before is the answer. It never has been when it comes to the opportunities and risks posed by new technology. I'm suggesting we embrace rather than fight.
For Higher Education (HE), two key stakeholders are affected – teachers and students. This note advances the notion that both need to invert their thinking and approach to ChatGPT, especially in the context of the broader digitalisation agenda in HE.
Faculty staff need to avoid entering an arms race they will lose. Every request to ChatGPT generates a unique response, making both detection and the provision of sufficient proof time-consuming. Detection mechanisms may emerge – there are already software apps claiming to detect ChatGPT outputs – but the underlying large language models will only get better, and detection may get harder. OpenAI (the owner of ChatGPT) may itself provide digital watermarks. We will have to wait and see: OpenAI's move from an open, "not for profit" model to one where investor profit is capped at 100 times the investment is a fundamental change of direction in the ethos of the company.
Other companies working in plagiarism detection, such as Turnitin, will also enter the arms race. Costs will inevitably rise for institutions as new "value-add" services such as AI detection are bolted on.
Academic departments will re-examine their assessment design. The recent, welcome shift away from unrealistic (and not terribly useful in real life) modes of assessment such as exams risks being reversed, with exams returning with a vengeance and on steroids. This exam artillery was never helpful, and now it may be actively damaging, especially to students from minority backgrounds.
Other strategies will include efforts at "designing out" opportunities for misconduct, and multiple assessment points will be embedded. All of this will increase costs and take away valuable time that could be used to support students properly.
Meanwhile, what of the students, who are potentially future consumers of enhanced ChatGPT 4.0 services? Students tempted towards academic misconduct will spot an opportunity to move away from essay mills and other contract services, engaging more directly with their course assignments – and for free.
Given the platform's range of capabilities (albeit with limitations), many more opportunities for misconduct will become available. Students will be able to use ChatGPT on their smartphones during in-class synchronous discussion, as well as seeking support in writing 3,000-word complex essays or even software programs.
We are safe in assuming that ChatGPT and similar platforms are only going to grow in availability and usage, so an alternative approach from both stakeholders is required.
That's why I'm proposing that, given ChatGPT offers a new sweet spot of opportunity for 21st-century HE, we should embrace it rather than fight it.
Here are some alternative ways of using ChatGPT:
A third stakeholder also stands to benefit from this new learning technology: new software tools can be imagined. Just think – an essay-authoring environment that integrates argumentation frameworks such as Toulmin's with ChatGPT capability, Google Scholar search and Mendeley citation management. Could that be a software package that students get bundled with the latest tablet device at the start of term?
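To give a flavour of what such an authoring environment might look like under the hood, here is a minimal sketch of how it could represent an argument using Toulmin's scheme (claim, grounds, warrant, rebuttal). All class and field names are my own illustrative assumptions, not the API of any existing product.

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """Hypothetical internal representation of one Toulmin-structured argument."""
    claim: str                                     # the position being argued
    grounds: list = field(default_factory=list)    # evidence supporting the claim
    warrant: str = ""                              # why the grounds support the claim
    rebuttal: str = ""                             # acknowledged counter-argument

    def outline(self):
        """Render the argument as a plain-text essay outline."""
        lines = [f"Claim: {self.claim}"]
        lines += [f"  Ground: {g}" for g in self.grounds]
        if self.warrant:
            lines.append(f"  Warrant: {self.warrant}")
        if self.rebuttal:
            lines.append(f"  Rebuttal: {self.rebuttal}")
        return "\n".join(lines)

arg = ToulminArgument(
    claim="Universities should embrace generative AI",
    grounds=["Detection is unreliable", "The tools will only improve"],
    warrant="Policy should follow technological reality",
)
print(arg.outline())
```

A tool built on a structure like this could then ask a language model to draft prose for each slot, and pull citations in from a search and reference-management service – the scaffolding keeps the student's argument, while the AI assists with expression.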
Pro-vice-chancellors for Education are, I hope, already commissioning local research to get to grips with the impact of ChatGPT. There are many pedagogic research questions worth exploring, such as protocol frameworks for using ChatGPT as an additional virtual teaching assistant, or prototyping example tool chains such as the one mentioned earlier. It's not all gloom. We can make ChatGPT work!