ChatGPT - What You Need to Know

From WSU Technology Knowledge Base
From the desk of
Dr. Liberty Kohn

WSU Professor of English

-Kohn, L., February 2023

About the author

Dr. Liberty Kohn is an English Professor at Winona State University (WSU) and the Director of the WSU Writing Center. Dr. Kohn is also the former Writing Across the Curriculum Director and former Chair of Faculty Development.

What is ChatGPT?

ChatGPT is an artificial intelligence that can generate original answers with flawless writing styles and often flawless information. How is this Bot new and different from traditional copy/paste plagiarism? ChatGPT doesn’t write essays, articles, or papers by finding and copying them verbatim from elsewhere on the internet. Rather, ChatGPT locates the many, many instances of a particular explanation, summary, or answer on the internet, then uses these multiple examples to predict what an original statement or essay would likely say. Thus, ChatGPT constructs a new essay by predicting what a new essay would say—a process very similar to your own cognition.
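
For readers who want to see this prediction idea in a concrete form, below is a minimal sketch in Python. It uses the small, open GPT-2 model (via the Hugging Face transformers and PyTorch packages, which must be installed) as a stand-in for ChatGPT’s far larger model; the prompt and the choice to show five candidate next words are illustrative assumptions, not a description of ChatGPT’s actual system.

  # A toy illustration of next-word prediction. The model scores every possible
  # next word piece; chaining such predictions together is how an "original"
  # essay gets generated. GPT-2 here is only a small stand-in for ChatGPT.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  prompt = "The main causes of the French Revolution were"
  inputs = tokenizer(prompt, return_tensors="pt")

  with torch.no_grad():
      scores = model(**inputs).logits[0, -1]    # scores for the next word piece

  top_ids = torch.topk(scores, k=5).indices     # five most likely continuations
  print([tokenizer.decode(int(i)) for i in top_ids])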

ChatGPT has already passed graduate-level and professional-level high-stakes testing, although with only slightly above-average marks. ChatGPT can be asked to write in different styles and genres. It can write in the voice of a journalist, a scientist, a poet, or a 10-year-old writing a letter to her grandmother. Its ability to write in different voices is one feature that makes its work hard to identify. Because ChatGPT writes original text, its writing isn’t technically plagiarism, making it harder for plagiarism checkers to catch.

Many good articles exist that will help you understand ChatGPT. The current version of ChatGPT will continually be improved to take on new tasks. Thus, any advice to prevent students from using ChatGPT is temporary, and future versions of ChatGPT will get better at writing college papers, answering questions, and taking exams. However, industry will produce equivalent AI to help detect papers written by a Bot, and you should update your knowledge of these Bot Stoppers periodically.

What can I do?

Based upon initial analysis of ChatGPT by international experts and those in the WSU community, I might suggest the following: First, ChatGPT can write many types of papers, but this AI writes some types of papers better than others. For instance, ChatGPT has a very easy time summarizing or writing an opinion paper with very little use of sources. If you are asking your students to interpret a single book in your field, to write a book report, or to write upon only one reading, ChatGPT and your students will have an easy time cheating on your assignment. Critics have pointed out that ChatGPT doesn’t write scientific literature as well as common humanities assignments. ChatGPT can easily write a general essay on a particular scientific topic, but it may struggle more to write a lab report or IMRD paper with a user’s own original data. ChatGPT most easily writes the Introduction and Abstract sections of a scientific paper.

If we think of information or communication as existing on a continuum, with summary, description, and explanation on the left and analysis, criticism, evaluation, and synthesis of ideas on the right, ChatGPT will have an easier time completing tasks from the left side of the continuum. ChatGPT can also complete tasks from the right side of the continuum, although the artificial intelligence is more prone to mistakes in the selection of quotations, statistics, and other elements that are integral to analysis, criticism, evaluation, and synthesis. ChatGPT can write compare and contrast papers, a high-level cognitive skill; this ability is likely because the high volume of comparison and contrast writing on the web (from all fields and topics) allows ChatGPT to draw upon examples from across the web to predict an original, strong comparison and contrast essay. However, in my own research, I found that ChatGPT did not automatically perform similar high-level tasks, such as comparing, contrasting, or synthesizing findings in a Literature Review, when asked to write a Literature Review. Keep in mind that all experimentation and advice are partial, as ChatGPT produces original work and is constantly being improved.

How can I design assignments so that ChatGPT is hard or impossible to use?

The following sections will point to methods to potentially identify ChatGPT writing and design assignments around ChatGPT’s potential weaknesses. However, I’d like to first discuss some traditional methods of stopping plagiarism and the purchasing of essays, and of helping students produce their own original work. Some of these methods may make it harder for your students to use ChatGPT, while others may not. All are good practices for designing writing assignments and for increasing critical thinking in your courses, regardless of your concern about ChatGPT.

  • Have students summarize and select their sources ahead of the date the paper is due.
  • Do not let students write a paper the night before. Structure your students’ paper planning and writing in small doses over the several weeks before the paper is due.
  • Have students write a rough draft; have students slowly write their paper in chunks; track students’ production on an original paper; if the paper suddenly changes drastically in topic, background knowledge, or voice, you’ll have evidence to be suspicious.
  • Have students visit the WSU Writing Center to help relieve their anxiety about writing a paper and to help them plan a paper and timeline. Relieving anxiety about major projects and developing a timeline can decrease plagiarism, ChatGPT use, and other cheating. The WSU Writing Center can help you with this.

The above bullets are good advice to help students write better. These strategies motivate and help students produce original writing of their own. However, asking for an outline or summary of sources may not be enough: ChatGPT can produce these items along with a paper. Asking for all student work (outline, annotated bibliography, etc.) on a due date, without structuring the work into your course, will still allow students to potentially derive all assignment materials from ChatGPT.

You may wish to have a simple baseline of your students’ writing that you can later compare to any writing produced by AI. You can have your students write in class, particularly on D2L, so you get to know their writing, their voice, and their capabilities. Later you can compare writing samples if you believe a student used ChatGPT.
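
If you would like something slightly more systematic than memory and intuition when making that comparison, a few rough stylometric numbers can put the two samples side by side. The short Python sketch below is purely illustrative and is not an established detection tool; the sample texts, the two statistics chosen (average sentence length and vocabulary richness), and the idea that a large shift merits a closer look are all assumptions made for the example.

  # A rough, illustrative comparison of two writing samples. Large shifts in
  # simple statistics are not proof of AI use, only a prompt to look closer.
  import re

  def style_stats(text):
      sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
      words = re.findall(r"[A-Za-z']+", text.lower())
      return {
          "avg_sentence_length": len(words) / max(len(sentences), 1),
          "vocabulary_richness": len(set(words)) / max(len(words), 1),
      }

  baseline = "Paste the student's in-class writing sample here."
  suspect = "Paste the later, questioned paper here."

  print("baseline:", style_stats(baseline))
  print("suspect: ", style_stats(suspect))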

Later sections of this wiki article will discuss some potential signs of an essay written by ChatGPT, but for now, we might look for traditional signs of cheating or plagiarism to detect ChatGPT:

  • Be wary of papers with perfect grammar and organization; most students do not write perfectly on the first try, or even later, especially if assignments are challenging.
  • Be wary of papers with advanced vocabulary, excellent “flow” in the information, and other linguistic signs of advanced understanding of a field. Students can periodically achieve these, but even the best students have some hiccups in thought or word.
  • Have a policy: Is it OK to feed original work into ChatGPT to clean up the grammar and organization? What about the content? What’s the line between help from artificial intelligence and cheating? Have a course policy.
  • Eventually, AI may appear that can flag these flawless, original papers through counter-algorithms. For now, your experience and intuition are good guards against ChatGPT.

After working with faculty across all of WSU’s disciplines on writing instruction over my career, I might close this section with some final advice: What do you, as a faculty member, want? Do you just want a paper that’s nice and easy to read? Many faculty have suggested to me this is the case. ChatGPT will do that.

However, if you are willing to read, without annoyance, perhaps with pride, student writing that has mistakes in critical thinking as well as in the writing (organization, paragraphing, grammar, and readability), then you are helping your students to avoid the Chatbot. The signals you send about expecting and valuing mistakes in thought and writing may encourage students to submit flawed papers of their own rather than turn to ChatGPT out of fear of being downgraded for errors. Tell students you expect to see typical student mistakes, such as a student writer wandering away from your assignment’s task, selecting evidence poorly, losing their self-chosen thesis, and similar missteps. These mistakes, and a chance to revise them, are signs they are growing as thinkers and writers in their field. And these mistakes are proof that they, not AI, are earning a college degree. Also, if reading pages of student writing sounds tedious, shorter classroom assignments, graded on a few select, achievable critical thinking goals (see final section), may make students more willing to write a paper for you because they are not anxious and the writing goals seem achievable.

Lastly, I suspect that educators can use ChatGPT in limited ways, just as employers certainly will. ChatGPT will write an initial draft or statement, but humans will have to look for mistakes in information or thinking, as well as identify highly general statements that are imprecise in information, critical thinking, or answering a question directly. Again, you will have to decide how to use ChatGPT, but requiring students to point out ChatGPT’s poor selection of information, poorly explained or too-general concepts, and missing ideas will help your students read better, critique a Bot’s writing, and perhaps guide them toward doing their own work. This option is a different way to help students think critically, should ChatGPT become an unstoppable feature of academic life.

For more information on designing assignments or identifying work produced by ChatGPT, keep reading.

What are ChatGPT’s limitations?

This section will approach the answer to the above question in two ways: 1) the current potential limitations of ChatGPT’s artificial intelligence; 2) the limitations of what students can easily, in unconvoluted language, ask ChatGPT to do. Either limitation may decrease the use of AI in your course.

While ChatGPT is capable of writing in a large number of genres with flawless grammar and strong organization, there are a few considerations or potential limitations in the current version, although these limitations depend on the original task and improvements in AI:

  1. ChatGPT seems much better at explaining than at complex arguing or position taking, although ChatGPT can also produce complex position papers. Essentially, if you are going to assign writing, and you only ask students to “explain” or summarize a topic, you are playing into ChatGPT’s strengths. Expect cheating.
  2. ChatGPT sometimes shows its limits when asked to connect multiple perspectives, to have doubts about a position it supports, or to include counterarguments in a paper. These additional, simple requirements add cognitive complexity to your assignments, and these requirements will be harder for students to request of ChatGPT. Why? Because taking a simple “position” or writing a five-paragraph essay with three simple points is not much different from explaining or summarizing, at least in terms of Artificial Intelligence scraping the web for examples to predict what to say next in a (position) paper. However, asking students to include a counterargument, to include several different perspectives, to include a section on what is known versus what is currently unsettled or debated on a topic, or to include a short review of literature to begin a position paper seems to be harder for ChatGPT. Why? These brief forays into academic thinking are not common in most of the web writing ChatGPT uses to predict its original statements. Students will have a much tougher time telling ChatGPT to do these specific, highly isolated writing “moves” (counterargument, unsettled issues, a short literature review in the intro, etc.) in ChatGPT’s command window. Lastly, these specific, isolated writing moves to defuse ChatGPT are excellent for students’ critical thinking and academic writing. (See final section.)
  3. Similar to #2, ChatGPT seems to use sources in one predictable way: providing a claim, topic, or main idea, then providing a general quote that loosely matches or evidences the main idea. However, critics have noted that ChatGPT’s selected quotes or statistics do not always match the main idea, a weakness in ChatGPT’s writing. Thus, you may ask your students to better explain how their evidence, especially a supporting quote, proves their main idea. Making students deliberately explain their evidence in their essay improves their writing and critical thinking. Also, requiring students to have impeccable quotes or evidence to support a claim will help your students read better, and this requirement, even for a cheating student, may result in the student having to re-read a source, then modify a paper written by a Chatbot. Thus, even in a cheating scenario, students are forced to re-read and revise a draft from ChatGPT. (This is a likely workplace skill in an AI future.)
  4. As per #3, you could ask students to use a source in a specific way that is harder for ChatGPT to integrate. For instance, you might require students to include a source that disagrees with a part of their argument or position. You might ask students to find a source that nearly agrees with their own position, but then explain both this source and how the student’s own opinion is slightly different. This use of sources is common in advanced academic thought. You may have to teach your students these critical thinking skills during class time, as well as where these moves sensibly appear in a paper. Moreover, these specific uses of sources create more cognitive dissonance and learning in your students. Lastly, this use of sources will be harder for students to request from ChatGPT, even if ChatGPT can accomplish the task. For those in the social sciences and sciences, having students place their findings in a Discussion section (which requires synthesis of information) may reveal the inadequacies of ChatGPT’s ability, although critics have noted that ChatGPT can produce a basic scientific paper in some disciplines but excels mainly in Introductions and Abstracts.
  5. I’d like to focus on incorporation of research into writing to highlight how to design assignments that may limit ChatGPT’s usefulness to students. In my own analysis of a Literature Review on employment trends written by ChatGPT, the program did not include specific citations (author, year, etc.) when setting up or concluding a summary of study findings in the body of the paper. Rather, ChatGPT simply said “One study found…” repeatedly without any attribution of the article it summarized. This overly general setup of a source is one sign that you may have a paper written by ChatGPT. Other critics have noted that they had to search the web to find the source summarized by ChatGPT as “One study…” Thus, if you allow your students to say “according to one study,” as opposed to requiring students to say “According to Smith and Johnson” while citing in APA, MLA, IEEE, or a similar style, you are making it easier to use ChatGPT. That is, if you allow your students to be general in their summaries and citations, you are making it easier for ChatGPT to complete your assignments.

Critics have also noted that ChatGPT seems to select sources based on the number of scholarly citations. You might have students find their own research sources and explain in writing, before the paper is due, why they have chosen this research. ChatGPT will be challenged to synthesize the article exactly as the student described, although ChatGPT can have materials fed into it that must be included in a paper. Or you may require students to cite articles new and important to the field, as ChatGPT struggles to use information from the last 24 months. Mainly, the issue is how and where AI can make these sources appear, and how you ask students to make them appear elsewhere as part of writing in your field.

Also, concerning a Literature Review, the summaries in the Review I found were very basic, likely too general, at least for my tastes and requirements as a faculty member of the English Department. Asking your students to point out the unique finding of each study would likely force students to read a source closely and write a better description than ChatGPT does. My own guideline for summarizing is that a summary in a Literature Review must distinguish and highlight the unique finding or idea of each study or source. If a summary is so general that it could function as a summary of many different articles, the summary is poor. I also stipulate that this major finding must appear in the first sentence of the summary or annotation, another simple requirement to defeat a Bot, which doesn’t automatically do this. Asking for uniqueness in summary and annotation may help your students shift away from ChatGPT-generated summaries.

I’d like to note that this problem of simple versus complex summary is really a “reading instruction” issue, yet reading instruction is a task most faculty don’t feel they should have to teach, as students should already “know how to read.” However, teaching your students to read for, then write, both the general and the unique findings of each source in a Literature Review will force your students to do their own work, or to read and heavily modify the initial writing of ChatGPT. In the end, each faculty member will need to decide how much generality they are willing to accept in a summary. The greater the generality in a summary a faculty member accepts, the easier it will be for ChatGPT to produce.

One way to potentially foil ChatGPT: in a Literature Review, ChatGPT did not compare or contrast the findings of the different sources reviewed, nor did ChatGPT do what most human-written Literature Reviews do: slowly synthesize the findings of the sources to draw larger conclusions across multiple studies. Again, if faculty members are willing to accept summaries of readings without basic cognitive work such as periodic comparison, contrast, or synthesis of the sources, then your students will have an easier time using ChatGPT in your course. All faculty members would benefit from looking for what and where higher-level thinking (comparison, contrast, evaluation, synthesis, etc.) happens in a genre they assign, then mandating and teaching these skills for the assignment. This will shift focus away from the easy summary, explanation, and single position taking that ChatGPT does easily and flawlessly.

How can I potentially recognize papers or writing produced by ChatGPT, not my students?

As previously mentioned, ChatGPT can write convincing explanations and arguments. However, ChatGPT’s incorporation of sources or synthesis of texts seems to be a potential point of weakness in a task. Sources are sometimes summarized too generally or decontextualized, do not support a claim, or provide faulty information. These are also common mistakes that students make when using research sources; thus, exploring how to use a source in different ways in your discipline creates cognitive dissonance in students, helps them grow as critical thinkers and writers, and highlights an area that artificial intelligence has not yet fully conquered. Perhaps revealing these high-level skills to your students, and practicing them in class, will highlight that students have more to learn than artificial intelligence has learned.

What is critical thinking at the college level, and how can I use it to defeat ChatGPT?

I’d like to close with a basic continuum of critical thinking that can be applied to any discipline. Many studies attempt to answer the question “What is critical thinking?” or “What is critical thinking at the college level?” My favorite answer relies heavily on Harvard educational psychologist William Perry, although it synthesizes others’ research, including traditional models such as Bloom’s Taxonomy. Perry outlines four basic stages that we wish to lead students through, regardless of our discipline. I prefer to explain them as follows:

Stage One Thinking

Stage One Thinking is where students believe that all questions have specific answers. The world and its issues, problems, and solutions are fairly “black and white.” The world of knowledge is divided into right and wrong answers, and students must simply memorize the “correct” answers; this, in their view, is what college is for: to memorize all the correct answers in the gauntlet of classes they take. Many students arrive at college with this mindset. Faculty spend countless hours moving students beyond this stage into later stages.

Stage Two Thinking

Stage Two Thinking is where students are starting to realize, as their professors keep telling them, that knowledge is contingent, answers are limited to contexts, and that multiple perspectives exist and no one perspective is the “correct” answer while all others are the “wrong” answer. However, in this stage, students may now see all answers as equally good—after all, if there is no single correct answer, then all answers are equally good, right? In this stage, students are seeing that education and critical thinking are more than just memorizing correct answers. However, students don’t yet have the capacity to consistently, successfully generate and apply their own criteria to evaluate ideas as “better” or “worse.” Faculty must supply and model criteria and critical thinking.

Stage Three Thinking

Stage Three Thinking is where students now understand that all knowledge is contingent and limited, knowledge and claims are to be tested in idea or methodology (or ethical implications), and that multiple perspectives not only exist on any given issue, but that some perspectives have stronger evidence, better meet contextual criteria, and/or provide a better real-world ethical foundation. Students can now approach a project with this cognitive mindset, and students will begin to evaluate strengths and weaknesses based upon criteria they generate. They are not reading and writing to merely memorize, but to evaluate.

Stage Four Thinking

Stage Four Thinking is where students can perform Stage Three Thinking, but do it from multiple, acute perspectives. That is, they can approach a problem from multiple learned perspectives, theories, or methodologies. If they are a double major, they can evaluate a problem according to both fields, or multiple theories or methods from a single field, recognizing the strengths and weaknesses of multiple perspectives. They can consider a problem or task as both a geologist and a physicist. Or as both a social worker and a cognitive psychologist focused on childhood development. Or as both an environmental historian and a public health professional. Stage Four lets students develop multiple approaches to a problem and weigh the costs and benefits of these approaches toward real-world problems.

As faculty, we hope students will rehearse their way to these final two stages during their time in college. Many students do achieve these stages routinely and develop this intellectual capacity. Research shows that not all students reach these later capacities consistently. But we can help them.

Let Students Make Mistakes

While ChatGPT can produce original work related to all four stages, assignments asking students to complete tasks related to the third and fourth stages are more rewarding for students, increase critical thinking, prepare students for our complicated world, and are the types of thought and assignments that ChatGPT struggles with the most. While many technologies will be developed to help flag writing performed by ChatGPT, designing assignments toward skills that AI performs only moderately well is a good start to avoid the Bot and give your students real-world critical thinking experiences.

Lastly, note that the best way to discourage ChatGPT often requires faculty to allow students to make mistakes, revise work, or re-read homework to revise ChatGPT’s mistakes. All of these tasks require a “second chance” for students to revisit previous work. Many educators offer students only one opportunity to succeed on a task before the course moves on to new material. With only one opportunity to succeed, no wonder students turn to AI to complete their homework for them. Thus, faculty and course design, as well as student anxiety, are implicated in motivations for cheating, not just low student motivation. Designing a course where students can make mistakes and have opportunities to fix mistakes without penalty may help defuse the use of ChatGPT. Allowing your students to make human mistakes is, technically, required to move them into the higher stages of cognition, thereby increasing human learning in our classrooms and, perhaps, relegating Bots’ usefulness to life after college.

 

Except where otherwise noted, text is available under the Creative Commons Attribution 4.0 International License (CC BY 4.0).