Yes, another opinion piece about ChatGPT
It’s syllabus day, and your professor announces that all written assignments for the class will be timed, and you will use a pen and paper. Also, you are only allowed to talk to the professor using monosyllabic grunts and hand gestures. Just kidding—mostly.
Reverting to pen-and-paper, timed writing assessments is just one panicked response to the release of OpenAI’s ChatGPT late last year; the Guardian reported on January 9 that “leading universities” in Australia are returning to pen-and-paper exams. Seattle and NYC public schools have also issued bans on ChatGPT. That isn’t likely to happen here at Pacific, but it demonstrates the level of consternation among professors and school administrators.
When sources of information are banned, a question arises: what are we losing access to?
Just in case your social media and news feeds haven’t already been flooded with pundits discussing the new AI chatbot, here’s a description of ChatGPT, written by ChatGPT itself: “I am a machine learning model known as a language model, specifically a type of neural network called a transformer, which has been trained on a large dataset of text. I am capable of generating human-like text and answering questions based on the patterns and information present in the data I was trained on.”
Let’s be clear about what ChatGPT doesn’t do. It does not produce facts from concrete sources, and it does not reason. ChatGPT’s training data comprises hundreds of billions of words of text gathered from all over the internet, often without the authors’ express permission. One unfortunate result is the potential for ChatGPT to regurgitate whole chunks of plagiarized text. At other times it produces “hallucinations”: fallacies and untruths that merely look human-like to the model. Twitter abounds with users who’ve gotten ChatGPT to say, for example, that 10+10=25, or who manage to confuse it with simple word problems.
Most people seem to agree that “ethical use” does not include copying and pasting verbatim responses produced by AIs. In an interview with The Pacific Index, Pacific professor, Associate Dean, and Director of the School of Arts and Humanities Mike Geraci reported that Pacific’s administration will be changing the student code of conduct “to say that plagiarism can take place by copying a person, or a thing, or an artificially intelligent thing,” and that it will be up to academic programs and professors to decide how that is interpreted and implemented.
In the short term, this gives educators the tool they need to combat blatant academic dishonesty. But in the minutiae, things get weird. Traditionally, citations are required any time you use or summarize an idea that comes from someone else and is not general knowledge. However, generating ideas, building outlines, and summarizing concepts are some of ChatGPT’s strongest abilities. When ideas come from chatbots, we are going to need a way to cite them. Perhaps a good way to think about ChatGPT is as a friend with whom you are bouncing ideas around. We haven’t needed to cite our buddies or partners in the past, but real people also don’t have the entire internet behind their brainstorming.
Pacific University professor of politics and Director of the McCall Center Jim Moore said that in his experiments, ChatGPT never would have gotten a grade higher than a C. “It doesn’t do what I would expect to have happen in an intro course,” he said. “It can’t think critically.”
Geraci, who also has a graduate degree in information science, said, “We are evolving, in our heads, what valid, credible information looks like. And now, ChatGPT or AI generated content is simply the next step. We need to come to an understanding with our students on how to use it appropriately and ethically.” He added, “Jim Moore was telling me today, ‘I can use this in an intro class, because it basically preloads students with somewhat valid concepts for a paper they have to write.’”
The number of ways AI writing might be used in a teaching environment is almost limitless, but it’s easy to forget that many of us have been using AI writing aids for a long time. Programs like Grammarly and ProWritingAid are editing software that relies on very similar language-generation models. Geraci said, “In my many years here, I have never heard anybody mention the potential issues with those kinds of tools. . . . Nobody ever blinked an eye at Grammarly.”
AI chatbots and editors are only going to become more ubiquitous, advanced, and integrated. Microsoft has already announced plans to integrate a new version of ChatGPT into its Bing search engine in March. An expanding family of tools includes bots that specialize in outlines, creativity, editing, and even paraphrasing. Just as Grammarly eased pressure on writers to self-edit for grammar and spelling, so ChatGPT will ease pressure on professionals and students alike in brainstorming, research, and copywriting. These tools are coming soon to an industry near you, and students need to be up to speed to use them in their future careers.
As students, the best way to prepare ourselves for a world of internet-connected artificial intelligences is to explore their abilities and deficiencies now. It is our responsibility to join the conversation with our educators about what constitutes ethical and unethical use, lest we be unprepared for a world that has few such reservations.
The best response to a ban is always research and self-education. Who banned it, and why? If we look to the recent spate of book bans across the country, we see that people subjected to bans on sources of information generally fall into two camps. The first becomes far more interested in the source. The second goes on with their lives, unaware of, or apathetic to, the information and opportunities that have been taken away. — Lane Johnson