Written by Nate Brown

With the rise of powerful artificial intelligence (AI) technologies like ChatGPT, Perplexity, Jasper, YouChat, and Chatsonic, instructors in higher education must consider how students and teachers will (or will not) use AI tools in the classroom and beyond. 

There is already ample evidence that AI technologies built on large language models (LLMs) can produce text, images, and computer code essentially indistinguishable from work produced by humans. Ethan Mollick of the Wharton School at the University of Pennsylvania has signaled cautious optimism about AI’s potential to save time on iterative technical and compositional tasks in the workplace. AI skeptics, on the other hand, express concern that the technology can be used to produce false or misleading information or to misrepresent authorship. 

As of this writing, there is a clear imperative to help students navigate these new tools so that their original intellectual efforts and products are presented responsibly and assessed fairly.  

In what follows, we lay out four starting assumptions about AI that guide our recommendations. These first principles are drawn from the current literature and may be helpful to discuss with students to ensure that everyone is on the same page regarding AI’s capabilities. Then, we suggest five best practices for using AI in the classroom.  

Starting Principles

1. AI is here to stay.  

Generative AI is already widely available to populations around the globe, which means that it’s incumbent upon professors and teaching assistants to familiarize themselves with the current AI landscape and to develop clear AI policies for their own courses.

2. Generative AI cannot create novel material in a traditional sense.

As Mollick points out, the terms associated with various generative AI technologies are still ill-defined but describe algorithmic systems that use large data sets to “[predict] what the next word in a sentence should be so it can write a paragraph for you [and] what an image should look like based on a prompt.” 

In other words, generative AI creates new combinations of extant language, images, and computer code when prompted to do so. 
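
To make this prediction mechanism concrete for students, here is a toy sketch in Python. It is emphatically not how production LLMs work (they rely on neural networks trained on enormous corpora and learn statistical weights rather than raw counts); it simply tallies which word most often follows each word in a small invented sample text and “predicts” accordingly. The sample text and function name are made up for illustration.

    # A toy next-word predictor. Real chatbots use neural networks trained
    # on vast corpora; this bigram model just counts which word most often
    # follows each word in a tiny invented sample text.
    from collections import Counter, defaultdict

    sample_text = "the model predicts the next word the model writes the next sentence"
    words = sample_text.split()

    # Tally how often each word follows each other word.
    followers = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent follower of `word`, or None if unseen."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("word"))   # -> "the" (the only word ever seen after "word")
    print(predict_next("model"))  # -> "predicts" ("predicts" and "writes" tie; the first one counted wins)

Scaled up by many orders of magnitude, this family of technique, recombining patterns already present in the training data, is what produces the fluent paragraphs students will encounter.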

The ethical considerations here are vast, but of particular importance in the classroom is a reminder to students that their original thoughts, analyses, scholarship, and labor are central to their educational development and to the advancement of intellectual and academic pursuits. AI cannot produce novel thought.  

3. While many AI technologies are described as “generative,” current iterations are not sentient or self-aware.  

The current conversation about AI is muddied by occasional claims of sentience, which make for surprising and enticing headlines. In the summer of 2022, Google engineer Blake Lemoine made the news when he claimed that Google’s AI-driven chatbot, LaMDA, was sentient.  

Then, in early 2023, New York Times technology writer Kevin Roose published a column entitled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” alongside a transcript of his chat with Bing’s GPT-4-powered search engine.  

Most technologists working in the AI space are quick to point out that enormous datasets and advances in artificial neural networks (ANNs), which are inspired by the structure of the human brain, have honed the most advanced chatbots’ ability to mimic natural language; the systems imitate understanding rather than possess it.  
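
One way to make that point tangible for students: at bottom, each unit in such a network performs simple arithmetic. The sketch below shows a single artificial “neuron” in Python, with invented example weights and inputs; modern chatbots chain billions of such units together, and nothing in the computation involves awareness.

    import math

    # A single artificial "neuron": a weighted sum of inputs passed through
    # a squashing function. The inputs, weights, and bias here are invented.
    def neuron(inputs, weights, bias):
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-activation))  # sigmoid maps the result into (0, 1)

    print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))  # ~0.557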
 
In discussing AI technologies with students, instructors should take care to make this distinction clear, both so that the next buzzy headline doesn’t mislead them and so that the class has an opportunity to discuss the potential uses, advantages, and pitfalls of producing text that is virtually indistinguishable from organic human language.  

4. Most generative AI chat tools cannot access paywalled research.  

This limitation means that what any LLM-based AI tool can generate will not necessarily include the best, most recent, or most relevant peer-reviewed data in a given field. For research purposes, then, AI-generated text is particularly limited.  

While this may change over time, current chatbots are not well suited for creating reliable and legitimate university-level research. Instead, they tend to present general information culled from a massive dataset of popularly available information, like news articles, blog posts, and trade publications.  

Best Practices

Here’s our best advice for how to address the use (or prohibition) of AI technologies in the classroom:

1. Have frank discussions with students about the potential uses and limitations of AI technologies in the classroom and beyond. Guiding questions might include:  

  • Under what circumstances might a student, instructor, researcher, journalist, public official, private citizen, employee, or public-sector worker helpfully leverage AI tools?  
  • Under what circumstances would you consider the use of AI tools unfair, unethical, or careless?  
  • Under what circumstances would the use of AI tools represent a breach of public trust (e.g., a politician using AI to craft an emotionally charged speech; a journalist using AI to write a news article; a think tank using AI to draft legislation or a white paper)? 

2. Write and make available your classroom policy regarding the use of AI tools. For instance:

  • Can a student in your course use AI to generate initial ideas to get started on an assignment?  
  • Can students use AI to draft text for major assignments?   
  • Can students use AI tools to help revise their own original draft text?  
  • Can they use AI tools to find primary sources when doing research?  
  • Can they use AI tools in a limited way (for example, to edit a specific word or phrase), as they would a thesaurus, dictionary, or citation-formatting tool?

3. If you prohibit the use of AI-based technologies in your course, make your reasoning clear and provide alternative approaches. 

  • Explain why you will not allow the use of AI tools in the classroom.  
  • Discuss and have a written policy outlining the specific prohibitions on using AI technologies for the creation and completion of assignments.  
  • Both in the syllabus and on individual assignments, include a note about the prohibition of AI-based technologies in the creation and completion of assignments.  
  • Engage students in in-class writing activities such as brainstorming, ideation, drafting, peer review, and revision to support them through the writing process. 

4. If you allow students to use AI tools, experiment with them in class, and give students an opportunity to compare the structure, diction, and rhetorical features of human- and AI-generated text.

  • Look for structural strengths and weaknesses in the text: Is it coherent? Is it specific? Is it informative or authoritative?  
  • Look for tone and style: Does the text present information in a creative or engaging way? What, if any, textual flourishes are present in the work?   
  • Note how the text uses information: Does it cite specific sources? Does it generalize or paraphrase information? What authority does the text appeal to, if any? Does the text employ verifiable information or facts in a credible manner?  
  • Give students a short, low-stakes writing assignment (a 250-word reflection on their day, for instance) and have them complete it in class. Then have them prompt an AI chat tool to write the same reflection. Compare the texts, noting the differences and similarities between the human and AI versions.  

5. Reinforce that these tools are evolving, and that your course policies and broader university, governmental, and corporate policies regarding the use of AI tools will necessarily change over time, too.

This includes the information and suggestions provided in this toolkit.  

View sample AI activities and policies from musicology in the Model Library.

Cited and Recommended Sources