I'm going to add to this with advice for any teacher running into this situation.
Ask to borrow the kid's computer for a second and use the AI. Pick a word, then pick a letter that is not in that word. Ask ChatGPT how many times that letter appears in the word. (Avoid "how many Ns in mayonnaise," because that went viral and got trained out.) Hell, give ChatGPT multiple tries. Ask it to point out each place that letter appears in the word.
Let the entire class witness ChatGPT fail. Because it cannot count. It cannot spell. It cannot think. Please put your lesson plans aside for a class period and use it as a learning opportunity.
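For contrast, counting letters is deterministic string processing that even a one-line program gets right every time. A minimal sketch (the word and letter here are arbitrary examples, not taken from the post):

```python
# Ordinary code counts letters exactly -- no statistics, no guessing.
# This is what makes a chatbot's failure at the same task so stark.
word = "mayonnaise"

print(word.count("z"))  # 0 -- "z" never appears in "mayonnaise"
print(word.count("n"))  # 2 -- "n" appears twice
```

The point of the classroom demo is that the chatbot doesn't do anything like this: it has no procedure for counting, only patterns of what answers to counting questions tend to look like.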
To add to your arsenal for educating these kids, please look into the concept of AI hallucination. AI cannot perceive things and has no ability to think critically, which means it cannot tell what's real and what's not. Really drill into these kids that they are better off asking advice from a toddler.
I used the Character.AI bot instead of ChatGPT in this case, but ChatGPT has the same issues, because neither bot is capable of thinking about what it's saying.
Calling these things "artificial intelligence" is a core part of the problem. They are not intelligent in any sense of the word. They are less intelligent than the spell check and grammar check functions in Microsoft Word circa 2010.
Part of the problem is that ChatGPT has been advertised as being able to do all sorts of things, when none of those things is its core function. ChatGPT mimics human speech and responses.
That’s all it does. Even though it has been fed the entirety of the content available on the internet and public archives, it only uses that in order to mimic the speech of certain situations. If you ask it to write you a poem, it’s gonna crunch the numbers and determine the statistically most-likely next word in the series based on its collection of works labeled as ‘poetry’. If it happens to spit out something profound-sounding, that’s entirely human interpretation. Likewise when you ask it a question about facts. The machine has a directive to respond to questions with definitive-SOUNDING answers, and picks out the statistically most common ‘confident’ language based on its database of research papers.
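If it helps to make "statistically most-likely next word" concrete, here's a toy sketch of the idea: count which word follows which in some text, then always emit the most common follower. Real models use neural networks over subword tokens rather than simple counts, and the corpus below is made up for illustration, but the flavor of the objective is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a tiny
# made-up corpus, then always pick the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Note what the predictor does not have: any idea what a cat is, or whether the sentence it produces is true. It only knows which word tends to come next.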
If you ask ChatGPT to give you citations for its information, it will use the MLA format and fill each slot with the statistically most-likely piece of information.
You can ask ChatGPT for a citation of a doctoral dissertation on the soul of a toaster oven by a German Sikh in 300 BC, and you will get it, despite the fact that such a document doesn’t exist.
The program will spit out a German name for the author, a title that references Sikh beliefs, a journal that publishes dissertations as the publication source, and 300 BC as the date.
Because that’s all ChatGPT is. It’s longform autofill. If you’ve ever been irritated by the keyboard on your phone trying to automatically shove in the next word you’re typing, why would you want ChatGPT to do anything for you?