What is GPT?
September 14, 2023 ChatGPT for Teachers II
View the September 14 presentation slides.
If you are interested in seeing some example syllabus language addressing these tools, please see:
- University of Delaware "Considerations for Using and Addressing Advanced Automated Tools"
- Lance Eaton's roundup of actual syllabus examples
January 27, 2023 Presentation & Workshop Recording
Download the January 27, 2023 presentation slides
January 25 Intro to ChatGPT and How to sign up
Summary and FAQ
ChatGPT is a technology from OpenAI that can generate text based on a prompt given to it. It has been trained on gigabytes of text and can put together sentences, paragraphs, and even entire essays. It’s similar to an auto-complete feature on a phone, but magnified to a much higher degree. It can write error-free text, and given a specific prompt it can do an even better job. It is not running Google searches; it has read and processed a large amount of content to find patterns. It can answer questions if the answer is general knowledge and not disputed.
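The "autocomplete magnified" analogy can be made concrete with a toy next-word predictor. The sketch below is purely illustrative (ChatGPT uses a large neural network, not word counts like this): it records which word most often follows each word in a tiny hand-made corpus, then greedily extends a prompt.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words followed it in the corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def complete(model: dict, prompt: str, length: int = 5) -> str:
    """Greedily extend a prompt with the most frequent next word."""
    out = prompt.lower().split()
    for _ in range(length):
        counts = model.get(out[-1])
        if not counts:  # no known continuation
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat and the cat sat on the floor")
print(complete(model, "the", length=4))  # → "the cat sat on the"
```

Phone autocomplete works at roughly this level of sophistication; GPT-style models learn far richer patterns over billions of words, which is why their output reads like coherent essays rather than word salad.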
What is ChatGPT used for?
ChatGPT is used for AI-generated text. Similar technologies can generate art, music, and other creative output. It is also used for creating or explaining code.
What are some potential risks of using GPT-3?
The potential risks of using GPT-3 include its misuse as a source of plagiarism or unintended “assistance”. More widely, there are fears of it being used for propaganda or other malicious purposes.
Why is ChatGPT free?
GPT is free because the company is searching for a business model and because it is learning from us.
Is GPT-3 able to imitate a particular literary style?
GPT-3 may not be able to imitate nuanced literary styles like noir or pulp fiction, but it can write poems and haiku quickly.
What should teachers do when considering GPT-3 for their classes?
Teachers should try GPT-3 out, talk to younger people about it, and address it in the syllabus by outlining what is acceptable and not acceptable. They should also attempt to remove the incentive for cheating by evaluating students in different ways.
What are some limitations of it?
GPT cannot answer essay prompts longer than about three thousand tokens, the limit of its context window.
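A "token" is a word fragment, not a word; exact counts require the model's own tokenizer (OpenAI publishes one called tiktoken), but a common rule of thumb is that English averages about four characters per token. This rough estimator is only a heuristic, not the real tokenizer:

```python
def rough_token_estimate(text: str) -> int:
    """Very rough heuristic: English text averages ~4 characters per
    token in GPT-style tokenizers. Use a real tokenizer (e.g. OpenAI's
    tiktoken library) when the exact count matters."""
    return max(1, round(len(text) / 4))

essay = "word " * 600            # about 3,000 characters
print(rough_token_estimate(essay))  # roughly 750 tokens
```

By this estimate, a three-thousand-token limit corresponds to roughly 2,000–2,500 English words of combined prompt and response.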
Can we check a given text for ChatGPT usage?
Using Turnitin to identify this kind of plagiarism is not always possible, but teachers can question students about the words and concepts used in their work to gauge whether they wrote it. It is hard to prove somebody used GPT, because the output is probabilistic and the answer is slightly different each time; detecting AI text is not the same as plagiarism checking. Scaffolding writing assignments can also help surface work a student did not produce.
How could teachers leverage this tool?
Teachers can use ChatGPT to focus on higher-order thinking and critical-thinking skills, rather than on low-hanging fruit like definitions.
ChatGPT can help summarize long texts and can be used to write notes in bullet points.
However, ChatGPT is not a good writer; it tends to use passive language and is not very clear.
The technology could be used in classrooms to explain concepts and, in the context of learning, act as a substitute for Google for answering questions.
The technology needs some tweaking, but it could help deter cheating by requiring students to cite their sources and present their work.
How is ChatGPT useful for coding?
We tried using GPT-3 for a coding class and it worked immediately. ChatGPT is a great tool for writing code, and it can help students build coding skills that make them more employable.
What can we do about ChatGPT and Computer Science?
Computer science instructors may need to return to older methods such as exams done by hand.
Coding is tested in job interviews using pseudocode.
More Questions to ponder…
- How do teachers feel about using AI tools to assess their students?
- What incentives can be put in place to reduce cheating?
- What are we teaching when we are teaching writing?
- What opportunities does this tool provide for teachers to explore larger issues?
Advice for Teachers
The Sentient Syllabus Project (Boris Steipe)
Three principles: AI should not be able to pass a course; AI contributions must be attributed and true; and the use of AI should be open and documented.
Tools such as ChatGPT threaten transparent science; here are our ground rules for their use (Nature.com)
It is high time researchers and publishers laid down ground rules about using LLMs ethically. Nature, along with all Springer Nature journals, has formulated the following two principles, which have been added to our existing guide to authors.
How we’re approaching AI-generated writing on Medium (medium.com)
We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that any story created with AI assistance be clearly labeled as such.
What You Need to Know About GPT-3 And Why It Matters (Fahri Karakas, Medium.com)
GPT-3 stands for “Generative Pre-trained Transformer 3” (whatever that means). In this article, I aim to make sense of GPT-3 as a layperson and explain it to non-technical people like me.
Teaching Actual Student Writing in an AI World (Kevin Jacob Kelley, Inside Higher Ed.)
Students may be tempted to use AI to automatically complete assignments because these machines are free, quick and relatively good at mimicking an academic style. Whether artificial intelligence will advance education or destroy it, faculty members need effective methods for teaching in a world with easy access to these powerful machines.
Designing Assignments in the ChatGPT Era (Susan D’Agostino, Inside Higher Ed.)
“I’d used the questions for five years because they were fun questions,” Joyner said. “But ChatGPT’s answer was so precise that I’m pretty sure it was learning from my own best students,” whom he suspected had posted their work online. Joyner replaced several of the sandwich options with avocado toast, shawarma, pigs in a blanket, Klondike bar and Monte Cristo. He also updated the academic misconduct statement on his syllabus to “basically say that copying from ChatGPT isn’t different from copying from other people.” Such efforts, Joyner acknowledges, may be a temporary fix.
ChatGPT Advice Academics Can Use Now (Susan D’Agostino, Inside Higher Ed.)
Faculty members and administrators are now reckoning in real time with how—not if—ChatGPT will impact teaching and learning. Inside Higher Ed caught up with 11 academics to ask how to harness the potential and avert the risks of this game-changing technology. The following edited, condensed advice suggests that higher ed professionals should think a few years out, invite students into the conversation and—most of all—experiment, not panic.
Update Your Course Syllabus for chatGPT (Ryan Watkins, medium.com)
Below are some easy-to-implement suggestions that will help you prepare for the upcoming semester.
Below are some easy to implement suggestions that will help you prepare for the upcoming semester.
Why does chatGPT make up fake academic papers? (David Smerdon, Twitter post)
By now, we know that the chatbot notoriously invents fake academic references. E.g. its answer to the most cited economics paper is completely made-up (see image). But why? And how does it make them?
Promises and Pitfalls of ChatGPT (Sharon Aschaiek, Inside Higher Ed.)
Ultimately, bots will not replace your school’s marcomm team, but they may augment it. ChatGPT can help with conducting research, generating content ideas and cleaning up and optimizing copy. But it’s your human marketers and communicators who know how to define your strategy and tactics, write your institution’s engaging stories about complex topics, bring color and flair to narratives…
You’re Not Going to Like How Colleges Respond to ChatGPT (Chris Gilliard and Pete Rorabaugh, slate.com)
Teachers at many levels of our educational structure are going to be adapting to what A.I. text generation will do for, with, and to students in the coming years. Some of them will embrace the tool as a writing aid; others will bunker in and interrogate students whose papers feel auto-generated. ChatGPT has given us all interesting things to imagine and worry about. However, one thing we can be sure of is this: OpenAI is not thinking about educators very much. It has decided to “disrupt” and walk away, with no afterthought about what schools should do with the program.
ChatGPT Is a Plague Upon Education (Jeremy Weissman, Inside Higher Ed.)
What winter 2020 was for Covid, winter 2023 is for ChatGPT. These are like the early days, when we thought we could stave off a pandemic through an abundance of hand sanitizer and toilet paper. We realize there is potentially a calamity about to wash upon our shores, but we still have our heads in the sand. We think this won’t really affect us, that we can avoid having to make any major changes to the way we’ve always done things. But soon the first crop of assessments will come back…
ChatGPT: Educational friend or foe? (Kathy Hirsh-Pasek and Elias Blinkoff, Brookings.edu)
In the same way that calculators became an important tool for students in math classes, ChatGPT has potential to become an important tool for writers who want to hone their critical thinking skills along with their communication skills. How might this happen? Educators are responding with valuable approaches. Adam Stevens, a high school history teacher in New York City who opposes his district’s decision to block ChatGPT, sees it as a valuable tool to promote—not limit—critical thinking.
With ChatGPT, Teachers Can Plan Lessons, Write Emails, and More. What’s the Catch? (Madeline Will, Education Week)
Thornley recently used the AI bot to help him plan a lesson about analyzing tone in written documents. He wanted to present students with 10 separate paragraphs arguing that school should start at a later time, each using a different tone. In the past, he would have written each paragraph himself. But this year, he asked ChatGPT to write the paragraphs—one that was upbeat and funny, one that was angry, one that was professional, and so on. It saved him more than an hour of time. “I was completely blown away by how much it’s capable of doing,” Thornley said of the chat bot.
ChatGPT is changing education, AI experts say — but how? (Lucas Stock, Deutsche Welle)
So, as with other AI technologies, humans are still required to review and correct AI-generated texts. That editing is often complicated and requires real knowledge of a subject, and that could be graded at universities in the future.
The College Essay Is Dead (Stephen Marche, The Atlantic)
Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it.
7 Ways Chat GPT Will Impact Education Positively (Amanda Write Now)
To put it simply, artificial intelligence tools like Chat GPT can shift our focus in English classrooms from mastering formulas for teaching personal narrative, informative, and argumentative writing, to teaching critical thinking, revision, research, discussion, and organizing ideas creatively with websites, interactive presentations, video, infographics, podcasts, blog posts, digital art, and other new and changing media forms.
ChatGPT: A Threat To Higher Education? (Dr. Jason Wingard, Forbes)
A multiplier of ability sounds like exactly what we want from new technologies. It may come with the potential to generate fake information and aid and abet cheating, but that’s nothing new, and nothing we can’t work around. Yes, as I’ve written before, higher education must change or die. But responding to the so-called threat that Chatbot software represents is not a concern.
ChatGPT Is Dumber Than You Think (Ian Bogost, The Atlantic)
But you may find comfort in knowing that the bot’s output, while fluent and persuasive as text, is consistently uninteresting as prose. It’s formulaic in structure, style, and content. John Warner, the author of the book Why They Can’t Write, has been railing against the five-paragraph essay for years and wrote a Twitter thread about how ChatGPT reflects this rules-based, standardized form of writing: “Students were essentially trained to produce imitations of writing,” he tweeted. The AI can generate credible writing, but only because writing, and our expectations for it, has become so unaspiring.
AI Tools Like ChatGPT May Reshape Teaching Materials — And Possibly Substitute Teachers (Jeffrey R. Young, EdSurge)
“This tech makes the familiar claim that it is not looking to replace the teacher—that it will free teachers up to concentrate on high-level work with individual students. We know that this rarely turns out to be the case,” Selwyn wrote. “This tech is being primarily pitched as a money-saving device—so it will be taken up by school authorities that are looking to save money. As soon as a cash-strapped administrator has decided that they’re happy to let technology drive a whole lesson, then they no longer need a highly-paid professional teacher in the room—they just need someone to trouble-shoot any glitches and keep an eye on the students.”
How can we design for learning in an AI world? (Lucila Carvalho, Roberto Martinez-Maldonado, Yi-Shan Tsai, Lina Markauskaite, Maarten De Laat, Computers and Education: Artificial Intelligence)
In tackling the unpredictability of the future, we re-conceptualize the problem space of educational design in an AI world, where we would like to see educators and learners deeply reflecting on the role of AI and the design structures that will shape learning activity. These ideas are embedded in our thinking about all the elements of the pedagogical framework and learning situation
Language Models are Few-Shot Learners (Tom B. Brown et al., arXiv)
Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.
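"Few-shot" means the worked examples go directly into the prompt, with no retraining. A minimal sketch of how such a prompt might be assembled (the sentiment-labeling task and format here are hypothetical, chosen only to show the pattern):

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: a handful of worked examples followed
    by the new case. The model is expected to continue the pattern."""
    blocks = [f"Review: {text}\nSentiment: {label}\n" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n".join(blocks)

examples = [
    ("I loved this film.", "positive"),
    ("A dull, plodding mess.", "negative"),
]
print(few_shot_prompt(examples, "Surprisingly moving."))
```

The prompt ends mid-pattern ("Sentiment:"), so a capable language model completes it with a label in the same style; no gradient updates or fine-tuning are involved.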
How you can use GPT-J (Vincent Mueller, Towards Data Science)
GPT-J is a 6 billion parameter model released by a group called Eleuther AI. The goal of the group is to democratize huge language models, so they released GPT-J and it is currently publicly available. GPT-3, on the other hand, which was released by OpenAI, has 175 billion parameters and is not openly available at this time. But don’t let that discrepancy in the number of parameters fool you: GPT-J actually outperforms GPT-3 in code generation tasks.
Get a holistic score for how much of the document is written by AI. Each sentence written by AI is highlighted. Upload multiple files at once, for your entire classroom.