Talking with my family this week, I heard about the idea of asking ChatGPT to write your biography. It sounded interesting, so I asked “Write a biography of Jon Ericson” and got this response:
Jon Ericson is a software engineer and developer, known for his contributions to the Stack Exchange network and his work in the field of computational linguistics. He was born in the United States in the early 1980s and grew up in a family that valued education and intellectual pursuits.
Ericson was drawn to computers from an early age, and he taught himself how to code while still in middle school. He went on to study computer science in college, earning a bachelor’s degree from the Massachusetts Institute of Technology (MIT) in 2003.
After college, Ericson began his career in the tech industry, working for a number of companies as a software engineer and developer. He eventually landed at Stack Exchange, a network of question-and-answer websites, in 2011. At Stack Exchange, Ericson worked as a developer and project manager, overseeing the development of several key features of the platform.
In addition to his work at Stack Exchange, Ericson has also been involved in the field of computational linguistics, which involves using computers to analyze and understand human language. He has published several papers on the topic, including a widely cited paper on the use of machine learning to classify text.
Ericson is known for his thoughtful and engaging writing style, which he has demonstrated in his blog posts and contributions to the Stack Exchange network. He is also a frequent speaker at technology conferences, where he shares his insights on software development, computational linguistics, and other related topics.
Outside of his professional pursuits, Ericson is a devoted family man and enjoys spending time with his wife and children. He is also an avid reader and enjoys learning about a wide range of subjects, from history and politics to science and technology.
I put all the wrong statements in italics. Most of the errors would be obvious if you compare this text to my resume. All three dates turn out to be wrong. I’ve never even been to MIT[1] and I know virtually nothing about computational linguistics. The last point is important because the people behind ChatGPT do know a lot about computational linguistics.
The GPT in ChatGPT stands for Generative Pre-trained Transformer, which is an application of computational linguistics. It’s just one example of Natural Language Processing (NLP). Google search has been using similar techniques to understand search requests, and websites have been using NLP to generate content.[2] What’s unusual about ChatGPT is that it’s easy to use and doesn’t attempt to hide its artificiality.
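If you’re curious what a generative pre-trained transformer looks like in practice, here’s a minimal sketch using Hugging Face’s `transformers` library and the small, public GPT-2 model. (GPT-2 is my stand-in here; ChatGPT itself isn’t publicly downloadable, so treat this as an illustration of the technique rather than the real thing.)

```python
# Minimal sketch of text generation with a pre-trained transformer.
# GPT-2 is used as a publicly available stand-in for ChatGPT's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Write a biography of Jon Ericson."
result = generator(prompt, max_new_tokens=60, do_sample=True)

# The model continues the prompt one token at a time, choosing words that
# are statistically likely to follow -- not words verified to be true.
print(result[0]["generated_text"])
```

Note what the model is optimizing for: plausible next words, not accurate facts. That distinction explains both the fluent prose and the invented details in the biography above.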
Several years ago the moderators of Biblical Hermeneutics Stack Exchange began to suspect a prolific contributor was using GPT-3 to create posts. They verified their suspicion with an AI content detector. (I don’t remember which one, but the same organization that provides ChatGPT also offers an AI Text Classifier. It classified my ChatGPT-generated biography as “possibly AI-generated”.[3]) In the end, they deleted those contributions because the posts seemed plausible but had some obvious factual errors.
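Detectors like that are themselves just classifiers. I don’t know which tool the moderators used, so here’s a hedged sketch using a public RoBERTa-based detector that OpenAI released for spotting GPT-2 output, purely as an illustration of the approach:

```python
# Sketch of automated AI-text detection. The specific model here is an
# assumption -- a public detector trained on GPT-2 output, not the tool
# the moderators actually used.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "Jon Ericson is a software engineer and developer, known for..."
print(detector(sample))
# e.g. [{'label': 'Fake', 'score': 0.98}] -- 'Fake' meaning model-generated.
```

The output is a probability, not a verdict, which is why detectors work better as confirmation of an existing suspicion than as proof on their own.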
That’s the problem with using this technology, at least at the moment. It’s not in any way intelligent. Instead, it has mastered the ability to arrange words in a grammatically correct manner and has found some accurate information about my career on the internet. But when it needs to fill in information it can’t find online, it just makes up “facts” such as my birth decade and college.
In important ways, this is nothing new. My children have been watching old episodes of Mythbusters. Obviously most people can’t test ideas like whether a stick of dynamite will clean the inside of a cement mixer or whether it’s possible to survive a falling elevator by jumping at just the right moment. But the approach to testing these myths is something anyone can use to check potentially false information on the internet.
I don’t think there are easy answers here. People will use AI to create plausible but incorrect articles faster than ever. Automated AI detectors can confirm your suspicion that something isn’t right, but there’s always going to be an arms race. So I think the ultimate answer is a combination of critical thinking, checking sources, and being careful about sharing too-good-to-be-true content. At least at the moment, AI content generators are prone to easy-to-debunk errors.
I actually went to UCLA. ↩︎
Disclosure: at College Confidential we use NLP to generate prose descriptions of colleges (such as MIT) from our database of college data. I think there are ethical concerns with NLP for this purpose, but we do manually edit entries that are misleading when we (or our users) notice. ↩︎
The rest of this post is “very unlikely AI-generated”, which is good since I wrote it without assistance. ↩︎