Using AI Tools Like ChatGPT to Help with Schoolwork: It's Already Happening
AI is on the rise, and with it has come a bevy of sophisticated tools to help you with your schoolwork. Some teachers and profs are embracing it.
You've probably heard about ChatGPT by now — maybe even used it yourself, just for fun, or even to give yourself a leg-up in an assignment. For the uninitiated, ChatGPT is a tool that responds to prompts using a form of artificial intelligence (AI). You ask the bot a question, and it gives you the answer. Or you could prompt it to write a song about blue whales, and bam, you get a set of lyrics.
And yes, ChatGPT can even write essays.
ChatGPT, and other AI tools like it, are poised to make bigger changes to the academic environment than calculators and spell checkers ever could. If a robot can write as well as a student, in much less time, with much less effort, where does that leave our education system? Some professors and teachers are fighting it — and others are embracing it. In any case, there's no turning back now: you can't unscramble an egg!
So, AI tools are here to stay, and they're only going to become "smarter" and more powerful with time. Where does that leave us as students and educators?
Critical thinking skills
ChatGPT gives us the opportunity to focus more on critical thinking skills than on memorizing facts or internalizing grammar rules. The quality of an AI's output depends heavily on the quality of your input: users need to know how to ask the right questions and frame the right prompts, and then critically evaluate whatever comes back for quality and accuracy. If you don't know how to think your way through a problem, you won't get very far, with or without AI.
Embracing AI: new assignments and approaches
Some teachers clearly see the writing on the wall, and have begun adapting their classroom practices to our brave new world.
The key to using AI effectively is understanding how your inputs are transformed into outputs. You can't ask an AI to think through a problem for you if you haven't identified the various factors that make up the problem in the first place.
Some teachers are asking students to experiment with chatbots and then reflect on how their "conversations" went and what they learned from them. Others are helping students identify patterns in AI-generated text: clues and giveaways that might indicate a given piece wasn't written by a human.
What's important is not taking the AI output as the final word, and instead interrogating the results, asking questions about why the answer turned out the way it did, and considering how other questions and prompts might lead to other answers. Analysis like this requires critical thinking — the kind of thinking that AI can't do for you.
In short, a new generation of teachers is trying to help students understand AI as just another tool in the toolbox, and see that with the right kind of critical-thinking mindset, humans can actually perform better than an AI — at least on some tasks. Other, more rudimentary tasks can be left to the machines.
The latest version of ChatGPT, currently available only to paid subscribers, can already score as well as or better than the average human on standardized tests like the SAT and the bar exam. It still struggles with English composition, though, so be careful if you're thinking about outsourcing your next essay!
Limits and potential problems with AI
AI tools have their limits, of course. A handful have been identified and discussed quite a bit in the space, and doubtless, others will be uncovered as more and more people use this technology. Here are a few things to watch out for when using AI tools:
Prejudice and bias
Because tools like ChatGPT are trained on an immense volume of text, conversations, fiction, and more, AI can pick up implicit biases from the source material and transmit them right back to users. Without careful moderation, some AIs may unwittingly reflect majority points of view, and obscure other ideas. Users who receive the output may not be aware that results are skewed in favour of a particular perspective.
This is an ongoing concern for AI tool developers. As training data improves and moderation practices mature, prejudicial or biased outputs should become less frequent, but users will still need to read AI output critically in the meantime.
Misinformation stated as fact
Getting the wrong answer to the right question is frustrating, and it's a common experience for anyone who has spent much time playing with AI tools: chatbots can "hallucinate," stating invented details with complete confidence. Misinformation comes in many forms, from innocent to malevolent, but if you get your facts wrong because an AI gave you bad info, the responsibility is still yours.
Growing sophistication in AI tools, and wider bodies of learning data to train the algorithms, should cut down on instances of misinformation, but there's no way to prevent it entirely. The only cure is a skeptical user who knows how to evaluate information. Again: critical thinking wins the day!
ChatGPT, which we've discussed quite a bit, is improving on this front, too. OpenAI claims the latest version, available only to paid subscribers, is 40% more likely to produce factual responses than its predecessor, though how meaningful that improvement is in practice remains to be seen.
Ownership of AI-generated material
This is a thorny issue you may not have considered. Who owns your chat log with ChatGPT? Or the memes you've generated with DALL-E? The answer may not be clear-cut, and will vary depending on the tools you use. This may not matter much to you when you're working on an essay, but as AI evolves, deciding exactly who owns what material, and what can be done with it, will be a big question, with lots of potential answers.
Is using AI tools in school plagiarism?
Whether using an AI for your schoolwork constitutes plagiarism is another big question. Plagiarism, you probably know, is taking someone's work and passing it off as your own, without attribution. In school, we're taught never to plagiarize. But do the same rules apply if text is generated by an AI? What if that AI has referenced others' work to pull together its answer? Who do you credit?
It's important for students to understand the difference between using AI tools as a helpful resource versus using them to produce work that is not their own. Students should always give credit where credit is due and seek guidance from their teachers or professors if they are unsure about whether their use of an AI tool would be considered plagiarism.
Teachers need to have conversations with students about plagiarism, how to avoid it, and how tools like ChatGPT and DALL-E interact with the concept. AI tools give us the opportunity to interrogate our own definitions and decide if they still serve us, or if we need a new standard for the new technology. As with so many things AI, these questions have lots of answers, and different people will have different perspectives.
(Some teachers have turned to ChatGPT itself to track down AI-generated work, asking the tool whether it wrote a particular text. Students and teachers should both be careful here: ChatGPT can't reliably recognize its own output, and it took credit for some original text in this very article!)
AI: like it or not, it's here to stay
One thing that's not up for debate is the fact that AI is here to stay. It's driving cars, suggesting follow-up purchases, and making art. Companies are using it to launch marketing strategies and develop new products. Resisting AI is like resisting the calculator, or the printing press; sooner or later, you'll get left behind.
So, learning to use AI constructively, without falling victim to the pitfalls, will be a big part of modern education going forward. You may be inclined to push back against the incoming AI deluge, but as the Borg so famously said, "resistance is futile."
Fun fact: one of the paragraphs in this article was written by ChatGPT. Can you guess which one?
Read more about using ChatGPT in the classroom