Chatbots

ChatGPT landed like a ten-ton bomb upon its release in November 2022. Virtually overnight, the AI-powered chatbot infiltrated every aspect of our working lives. ChatGPT probably isn’t going to be as socially transformative as the steam engine, the telegraph, or the printing press. But back in 2022 it felt just as disruptive, jolting us into a future none of us could have imagined and (I’m willing to bet) few of us particularly wanted.  

Since then, other AI-assisted apps have risen to prominence in the university. Tools like Jenni.ai and PaperPal can produce text that resembles a literature review, one that’s been exhaustively referenced and carefully structured. Tools like Julius.ai and Scinapse can analyze research data, testing for significance and interpreting the results. And tools like Writefull and Writewise.io can produce a full draft that’s submission-ready – and even select an appropriate journal for you. So just put your feet up, relax, and wait for those autogenerated editorial letters to roll in.

These apps are based on large language models, or LLMs. Trained on vast amounts of data sourced from the internet, an LLM is a pattern-recognition engine that has learned the statistical connections between units of text called ‘tokens’ (roughly, words and word fragments). LLMs generate human-like responses to our prompts because they predict what someone might say next – had that person read the contents of every single website.

That’s what makes chatbots like ChatGPT, Copilot, and Claude so useful, but also so uncanny. They seem to know more than we do, even though technically they ‘know’ nothing. For all the talk of artificial intelligence, LLMs are designed to do one thing: model the structure of natural language. That’s it. But what a magic trick!  
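To make the ‘prediction’ idea concrete, here’s a toy sketch of my own (not anything resembling a real LLM’s architecture or scale): a bigram model that counts which word follows which in a tiny corpus, then ‘predicts’ the most frequent successor. Real LLMs do something analogous with neural networks over billions of tokens, but the underlying task is the same: guess what comes next.

```python
from collections import Counter, defaultdict

# A two-sentence 'corpus', echoing the food-systems passage discussed below.
corpus = (
    "food production is a complex process "
    "food production is organized in complex systems"
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("production"))  # 'is' follows 'production' in both sentences
```

The model ‘knows’ nothing about food or economics; it only knows counts. Scale that up by many orders of magnitude and you get something that sounds like it has read everything.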

There has been a lot of hand-wringing in academia about the proliferation of AI-assisted tools, with good reason. Lindebaum and Fleming believe that LLMs undermine human reflexivity, eroding our ability to think critically and act ethically. An LLM calculates and predicts within strictly quantitative parameters, so its output reflects the highest statistical probability. But humans – and especially academics – ought to question and reflect; we ought to make counterintuitive leaps and arrive at unanticipated conclusions. Our being-in-the-world is fundamentally qualitative. Chatbots do a good job of imitating our patterns of speech, but in the end they’re just algorithmic bullshit generators.

Yet LLMs are undoubtedly useful for improving academic writing. Like a glorified spell checker, an AI-assisted tool can help us fix basic grammatical and stylistic glitches in our work. Ask it to do more than a quick proofread, though, and you’ll immediately encounter limitations.

Take the following passage, from an article published in Agricultural and Food Economics:  

Since the contributions by Davis and Goldberg and by Malassis (Davis and Goldberg 1958; Goldberg 1968; Malassis 1969), it is widely recognized that in industrial economies the production of food is organized in complex systems where diverse specialized firms interact intensively and progressively modify raw materials, adding intermediate input-ingredient services and anything that increase value to the final consumer. (Carbone, 2017: 2)

Clunky, right? Prompting ChatGPT (version GPT-4o) to rewrite this passage is instructive. The chatbot cleans up the text while keeping its essential components intact:

Since the contributions of Davis and Goldberg (1958, 1968) and Malassis (1969), it has been widely recognized that food production in industrial economies operates within complex systems. In these systems, various specialized firms interact extensively, continuously transforming raw materials by incorporating intermediate inputs, ingredient services, and other value-adding processes to enhance the final product for consumers.

ChatGPT has split a long sentence (59 words) into two shorter ones (27 words, 29 words), which makes the passage easier to read. The chatbot has also ironed out other wrinkles in the text, such as replacing the line anything that increase value to the final consumer with other value-adding processes to enhance the final product for consumers.

But that’s as far as it goes. ChatGPT doesn’t have the ability to pause, to take stock, to ask whether the author is saying what they really want to say.

Here’s how a human might rewrite the passage:  

In industrial economies, food production is a complex process (Davis and Goldberg, 1958; Goldberg, 1968; Malassis, 1969). It involves a range of specialized firms that interact with each other and modify raw materials before the product is sold to consumers.

We don’t need a lengthy subclause to introduce the sentence (since the contributions of Davis and Goldberg and Malassis), or a main clause in the passive voice (it has been widely recognized), because the real point of the passage is this: food production is a complex process. And what does this process involve? The answer comes into focus when we cut adverbs (intensively/extensively, progressively/continuously), adjectives (diverse/various, intermediate, final), and other superfluous phrases (adding intermediate input-ingredient services – whatever this means, it’s likely covered by modifying raw materials). 

Chatbots excel at detecting mistakes or missteps in any text. But what they can’t do is care about your writing – care about the power of your argument, care about the veracity of your claims, care about the stakes of your research. Chatbots are utterly indifferent to you and your ideas, even if they are adept at emulating a kind of sterile affability. And that’s why robots aren’t coming for our jobs anytime soon, not unless we invite them into our offices and into our classrooms and let them remake our work in their image.

I’ve never used AI tools to write or edit this blog. Part of this is my own pride and stubbornness. But another part is practical. I learn more by putting the words on the page myself – slowly, sometimes painfully – than I ever would by getting a chatbot to do it for me in just a few seconds. If writing is thinking, why would I want to outsource this task to a machine?

Feel free to experiment with any AI tool at your disposal. Just remember that LLMs operate on the basis of billions and trillions of data points produced by humans. We train them – don’t let them train you.
