ChatGPT

REVISION IN PROGRESS: This article contains useful information, but is being revised to reflect recent updates. Direct questions to TLT (tlt@winona.edu).

ChatGPT is a web-based chatbot released in November 2022 by OpenAI that uses artificial intelligence (AI) to respond in natural language to user prompts. It has been trained on an enormous amount of data, retains the context of earlier questions and commands for conversational-style interaction, and can revise its initial response based on user suggestions (e.g., "Now give me the answer in the style of a 5-year-old"). An earlier version of ChatGPT is available for free online, and the current version is available for a monthly fee.
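
For readers curious about what "retaining context" means under the hood, here is a minimal sketch using OpenAI's Python library (the 2023-era openai package; newer releases use different method names). The model name and prompts are examples only, and the ChatGPT website does the equivalent bookkeeping for you behind the scenes: the full message history is sent back to the model on every turn, which is what lets it "remember" earlier prompts and revise its answers.

    # Sketch only: assumes the 2023-era `openai` package and an API key in the
    # OPENAI_API_KEY environment variable.
    import openai

    messages = [
        {"role": "user", "content": "Explain how vaccines work."},
    ]

    first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    # Keep the model's reply in the running history.
    messages.append(first["choices"][0]["message"])

    # This follow-up only makes sense because the prior exchange is resent with it.
    messages.append({"role": "user",
                     "content": "Now give me the answer in the style of a 5-year-old."})

    second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(second["choices"][0]["message"]["content"])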

How ChatGPT compares with other AI tools

We have all been using AI-enabled tools for years (e.g., Google Maps), we interact with chatbots online all the time, and many of us already talk to Apple's Siri and Google Assistant on a daily basis. What's so different about ChatGPT? Isn't it just another way to interact with online information? Once you start using ChatGPT, the differences are immediately tangible. You will likely be struck by how quickly it generates unique, reasonable answers to fairly complex questions after some minor cajoling and guidance from you. This often leads to a "wait a minute, this is something different" moment.

Large versus specific

Unlike AI trained on specific topics (e.g., plants) to do specific things (e.g., identify the plant in your phone's camera viewfinder), ChatGPT is trained on an enormous amount of online information collected up to 2021 and uses models with billions of parameters to synthesize responses to all sorts of questions and prompts. These so-called large language models (LLMs), in use since around 2018, are artificial neural networks loosely inspired by the brain. Rather than picking and choosing stored facts, they build a response by repeatedly predicting a likely next word based on patterns learned during training.
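
To make "predicting the next word" concrete, here is a deliberately tiny, self-contained Python sketch. The word probabilities are made up for illustration; a real LLM computes them with billions of learned parameters over a vocabulary of tens of thousands of word fragments.

    import random

    # Toy "language model": for each word, made-up probabilities for the word
    # that comes next. A real LLM computes these probabilities with a neural
    # network instead of a lookup table.
    NEXT_WORD_PROBS = {
        "the":     {"swallow": 0.5, "student": 0.3, "syllabus": 0.2},
        "swallow": {"flies": 0.7, "carries": 0.3},
        "student": {"writes": 0.6, "asks": 0.4},
        "flies":   {"quickly.": 1.0},
        "carries": {"coconuts.": 1.0},
        "writes":  {"essays.": 1.0},
        "asks":    {"questions.": 1.0},
    }

    def generate(start: str, max_words: int = 5) -> str:
        """Build a sentence by repeatedly sampling a likely next word."""
        words = [start]
        for _ in range(max_words):
            options = NEXT_WORD_PROBS.get(words[-1])
            if not options:  # no known continuation; stop
                break
            next_word = random.choices(list(options), weights=options.values())[0]
            words.append(next_word)
        return " ".join(words)

    print(generate("the"))  # e.g., "the swallow flies quickly."

The point is only that output is assembled word by word from learned probabilities, which is why LLM responses read fluently but are not guaranteed to be true.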

Synthesis versus indexing

Currently, when you ask Google Assistant or Siri a question, it searches an index of the information it can access and presents the results of that search, giving you a canned response that already exists somewhere (e.g., "According to Wikipedia, the airspeed velocity of an unladen swallow is..."). ChatGPT instead synthesizes a unique response word by word from patterns learned across many sources during training, rather than retrieving a stored answer. Google and Microsoft are racing to build LLM response synthesis into their search engines, so expect ChatGPT-like interaction to come soon to a browser near you.

Fine-tuning, styles, and personas

You can ask ChatGPT to modify its original response in various ways (e.g., emphasizing or deemphasizing certain elements, adding information). You can also ask it to answer a question or respond to a command in a specific style or from the perspective of a specific persona: write a poem in the style of Yeats, behave like a Linux terminal, rewrite a response from the perspective of an 8th grader who has experienced childhood trauma, or simply make a response funnier.
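
Styles and personas can also be set programmatically. The short sketch below again assumes the 2023-era openai package from the earlier example; the "system" message is simply a standing instruction the model follows for the rest of the conversation, and the persona text is an illustration, not a recommendation.

    # Sketch: steering style with a "system" message (2023-era `openai` package;
    # newer library versions use different method names).
    import openai

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # Standing instruction that sets the persona/style for the reply.
            {"role": "system",
             "content": "You are W. B. Yeats. Answer every question as a short poem."},
            {"role": "user",
             "content": "Why is the sky blue?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])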

Text versus voice

Currently, you interact with ChatGPT by entering text and images into a chatbot interface, but voice-based interaction is probably not too far off.

What's the problem?

Hallucination

One problem with ChatGPT is that the responses it generates can include errors. These may be hard to spot given the "confident" tone of the response and the absence of citations to the sources used to generate it. To an untrained eye, most ChatGPT responses seem perfectly reasonable. This tendency for AI to confidently present inaccurate information is called hallucination. Some suggest the ChatGPT hallucination rate is as high as 20%. The LLM behind ChatGPT was recently upgraded, and OpenAI claims the new model hallucinates less, but this is difficult to verify because OpenAI does not share its code or evaluation methods. One positive step is that the new version (GPT-4) can cite sources in its responses if asked, although those citations can themselves be fabricated and should be checked. Unfortunately, this version requires a $20/month subscription. Most students will likely use the free, more error-prone version, which runs on the earlier GPT-3.5 model. OpenAI sees this problem as solvable with future improvements, but others argue that hallucination is inherent to LLMs, much as confabulation is in humans.

Exploitation

ChatGPT reached more than 100 million users within two months of its November 2022 launch, making it the fastest-growing consumer application to date, and it averages about 13 million unique visitors per day. Our WSU students are among them. Analysis of Warrior network traffic in March 2023 revealed between 500 and 1,000 users accessing ChatGPT per day. The concern among educators is that students will use ChatGPT to generate responses to course assignments and try to pass them off as their own. Many instructors have added language about ChatGPT to their syllabi and have talked with their students about such plagiarism during class meetings.

Disruption

Rapidly emerging generative AI writing applications like ChatGPT could certainly be considered a disruptive innovation, and we should expect some turmoil in traditional, expensive, previously difficult-to-access systems that rely on writing of all kinds, including computer code. ChatGPT allows anyone with access to it to generate well-formed written work regardless of their skill, training, or understanding of the work produced. This will likely affect professions such as journalism, advertising, and software development. One traditional system that relies heavily on writing is higher education. If the written work students produce is no longer a valid and reliable indication of what they know, it is no longer useful as a means of academic assessment. Some faculty may need to redesign their writing assignments, which will take time, effort, and support.

ClosedAI

These problems are compounded by OpenAI's recent shift from an open non-profit committed to sharing its innovations with the world to a closed, for-profit company focused on commercialization. Although this is common in the tech industry, OpenAI has closed off access to its code and methods, and we now have very little insight into how ChatGPT is trained, what it does, and how it does it. We are completely dependent on OpenAI to do the right thing. Even more concerning is the company's recent argument that AI research and development should be done in secret for the good of humanity.

What instructors should do now

There are things that instructors can do in the short term to address ChatGPT usage in their upcoming summer and fall courses.

Learn about it

Take some time to learn about ChatGPT to understand better how it might impact your courses. The best way is to see it in action. You can do this by trying it yourself, watching any of the hundreds of YouTube videos on the subject, attending a TLT workshop, or contacting TLT (tlt@winona.edu). You can easily bring yourself up to speed on the basic capabilities of ChatGPT in under 30 minutes. Make sure to familiarize yourself with its strengths and weaknesses.

Develop a short-term plan

Once you know more about ChatGPT, develop a short-term plan for addressing its use in your courses. Will you prohibit it? Will you allow or encourage its use under certain conditions (e.g., proper attribution)?

Adjust your syllabus as needed

Add instructions to your syllabi as needed. There are many examples of such language that you can use as templates.

Talk with your students

Have a conversation with your students about ChatGPT. Address it during a class meeting, in a Brightspace discussion, or through other communication channels. Make sure they are clear on its weaknesses and limitations and on your position and policies regarding its use. Besides opening a dialog with your students about this resource, the conversation signals that you know what ChatGPT is and how it works.

Start working on a long-term plan

While you might be able to deter cheating in the short term, if it's easy for students to generate reasonable responses to your assignments using ChatGPT, you will probably want to redesign them. This may take a significant amount of time. Which assignments are most in need of revision? When will you have time to do this? Do you need help?

What administrators should do now

The responsibility for managing the impact of generative AI on teaching and learning does not rest exclusively on the shoulders of instructors. It affects the entire higher education enterprise, and academic leadership and administration must also respond.

Learn about it

Just like faculty, the first step for academic leaders and administrators is to learn about ChatGPT.

Review academic integrity policies

Generative AI may need to be addressed in campus academic integrity policies directly. Whether and how to do this should be taken up by the committees and stakeholders responsible for those policies. Even if the institution decides not to make revisions, this discussion is necessary and the resulting clarity will benefit faculty and students.

Review plagiarism deterrence and detection services

What campus-wide services does the institution provide to help instructors deter and detect plagiarism? Commercial plagiarism detection services like Turnitin claim to be adding ChatGPT detection to their systems. Are these systems effective? Does the institution have a plan to continue supporting such tools as they inevitably become more costly? What about free web-based ChatGPT detectors like GPTZero? Should faculty submit student work to these free services without students' permission? Faculty and students need clear guidance on these issues.

Develop a faculty support plan

If instructors need to revise their course assignments to adapt to the widespread use of generative AI tools like ChatGPT, what support will the institution provide? This will take time, effort, and multiple avenues of support.
