Written by Nate Brown
With the rise of powerful artificial intelligence (AI) technologies like ChatGPT, Perplexity, Jasper, YouChat, and Chatsonic, instructors in higher education must consider how students and teachers will (or will not) use AI tools in the classroom and beyond.
There is already ample evidence that AI technologies built on large language models (LLMs) can produce text, images, and computer code essentially indistinguishable from work produced by humans. Ethan Mollick of the Wharton School of the University of Pennsylvania has signaled cautious optimism about AI’s potential to save time on iterative technical and compositional tasks in the workplace. AI skeptics, on the other hand, express concern about the use of the technology to produce false or misleading information or to misrepresent authorship.
As of this writing, there is a clear imperative to help students navigate these new tools so that their original intellectual efforts and products are presented responsibly and assessed fairly.
In what follows, we lay out four starting assumptions about AI that guide our recommendations. These first principles are drawn from the current literature and may be helpful to discuss with students to ensure that everyone is on the same page regarding AI’s capabilities. Then, we suggest five best practices for using AI in the classroom.
Starting Principles
1. AI is here to stay.
Generative AI is already widely available to populations around the globe, which means that it’s incumbent upon professors and teaching assistants to familiarize themselves with the current AI landscape and to develop clear AI policies for their own courses.
2. Generative AI cannot create novel material in the traditional sense.
As Mollick points out, the terms associated with various generative AI technologies are still ill-defined but describe algorithmic systems that use large data sets to “[predict] what the next word in a sentence should be so it can write a paragraph for you [and] what an image should look like based on a prompt.”
In other words, generative AI creates new combinations of extant language, images, and computer code when prompted to do so.
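For instructors who want to make this prediction step concrete for students, the sketch below is a minimal, hypothetical illustration in Python. The bigram table and words are invented for demonstration; real LLMs learn probabilities over enormous vocabularies and much longer contexts from massive datasets, but the core move, picking a likely next word given what came before, is the same.

```python
import random

# A toy "language model": for each word, the observed
# probabilities of the word that follows it. (These values are
# invented for illustration; real models learn distributions
# like this from enormous training datasets.)
bigram_probs = {
    "the":     {"student": 0.5, "essay": 0.3, "class": 0.2},
    "student": {"writes": 0.6, "reads": 0.4},
    "writes":  {"the": 0.7, "an": 0.3},
    "essay":   {"argues": 1.0},
}

def next_word(word):
    """Sample the next word from the learned distribution."""
    candidates = bigram_probs.get(word)
    if not candidates:
        return None  # no learned continuation for this word
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, max_words=8):
    """Repeatedly predict the next word to 'write' a phrase."""
    words = [start]
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g., "the student writes the essay argues"
```

The output can look fluent, but it is only a probabilistic recombination of what the table already contains; nothing in the program understands what it is saying.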
The ethical considerations here are vast, but of particular importance in the classroom is a reminder to students that their original thoughts, analyses, scholarship, and labor are central to their educational development and to the advancement of intellectual and academic pursuits. AI cannot produce novel thought.
3. While many AI technologies are described as “generative,” current iterations are not sentient or self-aware.
The current conversation about AI is muddied by occasional claims of sentience, which make for surprising and enticing headlines. In the summer of 2022, Google engineer Blake Lemoine made the news when he claimed that Google’s AI-driven chatbot, LaMDA, was alive.
Then, in early 2023, New York Times technology writer Kevin Roose published a column entitled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” alongside a transcript of his chat with Bing’s GPT-4-powered search engine.
Most technologists working in the AI space are quick to point out that enormous datasets and advances in artificial neural networks (ANNs), which are inspired by the structure of the human brain, have honed the most advanced chatbots’ ability to mimic natural language.
In discussing AI technologies with students, instructors should take care to make this distinction between mimicry and sentience clear, both so that the next buzzy headline doesn’t mislead students and so that the class has an opportunity to discuss the potential uses, advantages, and pitfalls of producing text that is virtually indistinguishable from organic human language.
4. Most generative AI chat tools cannot access paywalled research.
This limitation means that what any LLM-based AI tool can generate will not necessarily include the best, most recent, or most relevant peer-reviewed data in a given field. For research purposes, then, AI-generated text is particularly limited.
While this may change over time, current chatbots are not well suited for creating reliable and legitimate university-level research. Instead, they tend to present general information culled from a massive dataset of popularly available information, like news articles, blog posts, and trade publications.
Best Practices
Here’s our best advice for how to address the use (or prohibition) of AI technologies in the classroom:
1. Have frank discussions with students about the potential uses and limitations of AI technologies in the classroom and beyond. Guiding questions might include:
- Under what circumstances might a student, instructor, researcher, journalist, public official, private citizen, employee, or public-sector worker helpfully leverage AI tools?
- Under what circumstances would you consider the use of AI tools unfair, unethical, or careless?
- Under what circumstances would the use of AI tools represent an abrogation of public trust (e.g., a politician using AI to craft an emotionally charged speech; a journalist using AI to write a news article; a think tank using AI to draft legislation or a white paper)?
2. Write and make available your classroom policy regarding the use of AI tools. For instance:
- Can a student in your course use AI to generate initial ideas to get started on an assignment?
- Can students use AI to draft text for major assignments?
- Can students use AI tools to help revise their own original draft text?
- Can they use AI tools to find primary sources when doing research?
- Can they use AI tools in a limited way (for example, to edit a specific word or phrase) as they would a thesaurus or dictionary or a citation-formatting tool?
3. If you have a prohibition on using AI-based technologies in the classroom, make your reasoning clear and provide alternative approaches.
- Explain why you will not allow the use of AI tools in the classroom.
- Discuss and have a written policy outlining the specific prohibitions on using AI technologies for the creation and completion of assignments.
- Both in the syllabus and on individual assignments, include a note about the prohibition of AI-based technologies in the creation and completion of assignments.
- Engage students in in-class writing activities such as brainstorming, ideation, drafting, peer review, and revision to support them in the writing process.
4. If you allow students to use AI tools, experiment with them in the classroom space, and give students an opportunity to compare the structure, diction, and rhetorical features of human and AI-generated text.
- Look for structural deficiencies and proficiencies in the text: Is it legible? Is it specific? Is it informative or authoritative?
- Look for tone and style: Does the text present information in a creative or engaging way? What, if any, textual flourishes are present in the work?
- Note how the text uses information: Does it cite specific sources? Does it generalize or paraphrase information? What authority does the text appeal to, if any? Does the text employ verifiable information or facts in a credible manner?
- Give students a short, low-stakes writing assignment (a 250-word reflection on their day, for instance) and have them complete it in class. Then have them prompt an AI chat tool to write a reflection for them. Compare the texts, looking for differences between them and noting any similarities.
5. Reinforce that these tools are evolving, and that your course policies and broader university, governmental, and corporate policies regarding the use of AI tools will necessarily change over time, too.
This includes the information and suggestions provided in this toolkit.
View sample AI activities and policies from musicology in the Model Library.
Cited and Recommended Sources
- Chiang, Ted. “ChatGPT Is a Blurry JPEG of the Web.” The New Yorker, 9 Feb. 2023, https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web.
- Chui, Michael. “Forward Thinking on the Brave New World of Generative AI with Ethan Mollick.” McKinsey & Company, 31 May 2023, https://www.mckinsey.com/mgi/forward-thinking/forward-thinking-on-the-brave-new-world-of-generative-ai-with-ethan-mollick.
- Crompton, Helen, and Diane Burke. “Artificial Intelligence in Higher Education: The State of the Field.” International Journal of Educational Technology in Higher Education, vol. 20, no. 1, Apr. 2023, pp. 1–22. EBSCOhost, https://doi.org/10.1186/s41239-023-00392-8.
- “Hard Fork: GPT-4 Is Here, and the Silicon Valley Bank Fallout.” The New York Times, 17 Mar. 2023, https://www.nytimes.com/2023/03/17/podcasts/hard-fork-gpt-4.html.
- “Hard Fork: AI Extinction Risk and Nvidia’s Trillion-Dollar Valuation.” The New York Times, 2 June 2023, https://www.nytimes.com/2023/06/02/podcasts/hard-fork-chatgpt-nvidia.html.
- “How Will Artificial Intelligence Change Higher Ed?” The Chronicle of Higher Education, 25 May 2023, https://www.chronicle.com/article/how-will-artificial-intelligence-change-higher-ed.
- Huang, Kalley. “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach.” The New York Times, 16 Jan. 2023, https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html.
- Metz, Cade. “Meet GPT-3. It Has Learned to Code (and Blog and Argue).” The New York Times, 24 Nov. 2020, https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html.
- McMurtrie, Beth. “How Artificial Intelligence Is Changing Teaching.” The Chronicle of Higher Education, 12 Aug. 2018, https://www.chronicle.com/article/how-artificial-intelligence-is-changing-teaching.
- MLA-CCCC Joint Task Force on Writing and AI. Working Paper 1, 1 July 2023, https://hcommons.org/app/uploads/sites/1003160/2023/07/MLA-CCCC-Joint-Task-Force-on-Writing-and-AI-Working-Paper-1.pdf.
- Mollick, Ethan. “ChatGPT Is a Tipping Point for AI.” Harvard Business Review, 14 Dec. 2022, https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai.
- Perkins, Mike. “Academic Integrity Considerations of AI Large Language Models in the Post-Pandemic Era: ChatGPT and Beyond.” Journal of University Teaching & Learning Practice, vol. 20, no. 2, Mar. 2023, pp. 1–24. EBSCOhost, https://doi.org/10.53761/1.20.02.07.
- Reiss, Michael J. “The Use of AI in Education: Practicalities and Ethical Considerations.” London Review of Education, vol. 19, no. 1, Mar. 2021, pp. 1–14. EBSCOhost, https://doi.org/10.14324/LRE.19.1.05.
- Roose, Kevin. “Don’t Ban ChatGPT in Schools. Teach With It.” The New York Times, 13 Jan. 2023, https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html.
- Schatten, Jeff. “Will Artificial Intelligence Kill College Writing?” The Chronicle of Higher Education, 14 Sept. 2022, https://www.chronicle.com/article/will-artificial-intelligence-kill-college-writing.
- Wilhelm, Ian. “Nobody Wins in an Academic Integrity Arms Race.” The Chronicle of Higher Education, 12 June 2023, https://www.chronicle.com/article/nobody-wins-in-an-academic-integrity-arms-race.
- Wooldridge, Michael. “Artificial Intelligence Is a House Divided.” The Chronicle of Higher Education, 20 Jan. 2021, https://www.chronicle.com/article/artificial-intelligence-is-a-house-divided.