Cause of and solution to the problem of AI?
Channeling Homer Simpson.
New York City public schools have restricted access to ChatGPT, the AI system that can generate text on a range of subjects and in various styles, on school networks and devices. As widely reported this morning and confirmed to TechCrunch by a New York City Department of Education spokesperson, the restriction was implemented due to concerns about “[the] negative impacts on student learning” and “the safety and accuracy” of the content that ChatGPT produces.
I immediately thought of Homer Simpson's famous toast: "To alcohol! The cause of, and solution to, all of life's problems."
When reached for comment, an OpenAI spokesperson said the company is developing "mitigations" to help anyone spot text generated by ChatGPT. That's significant. While TechCrunch recently reported that OpenAI was experimenting with a watermarking technique for AI-generated text, this is the first time OpenAI has confirmed that it's working on tools specifically for identifying text that came from ChatGPT.
I’m curious to learn more about this “watermarking” feature. Would it embed a distinctive, detectable pattern in how words are assembled? And what stops a student from learning how the detection works and editing the text until the signal disappears? Doing the assignment is probably easier, but some will undoubtedly prefer this path.