46. CHATGPT: YOU TAKE THE LOW ROAD AND I’LL TAKE THE HIGH ROAD

13/04/2023

I was first exposed to ChatGPT – the now-famous artificial intelligence chatbot by research lab OpenAI – last December, when my father shared an article outlining what it could mean for future AI technology. The twist? The entire first half of the article was written by ChatGPT itself, yet I never suspected a thing. As a copywriter, I felt my first-ever pang of nerves at the thought that I might find myself redundant. But after some experimentation of my own, I'm not so sure.

Breaking the mould

I confess: I initially intended to begin this blog post by copying the aforementioned article and asking ChatGPT to write an introduction based on the style of previous CPLS posts. My first mistake – which I've since learnt is a common one – was to assume that ChatGPT could browse the web and would follow the link I provided to this website. Instead, ChatGPT went ahead and produced an article bearing absolutely no resemblance to previous blog posts. When I realised my mistake, I submitted four previous texts directly (two posts by me, two by Chris) and tried again. The results left a lot to be desired, from small slips like the use of US English and the dropped headers to more glaring omissions such as our (attempts at) light-hearted humour.

Really, the fault lies with me for not understanding that ChatGPT is a tool for predicting language rather than understanding it. Having been trained on a large corpus of text (including books and websites), the model looks at patterns and relationships between words and phrases it has already seen and tries to generate coherent text. The fact that this dataset includes nothing from after 2021 also seems to hurt its accuracy: the attempted blog post claimed that ChatGPT launched in 2020, rather than its actual launch date of November 2022. The focus on existing patterns also presents a challenge for humour, which usually derives from the unexpected and therefore requires an awareness of when – and when not – to break from common patterns in language.

You could always try asking it to be funny, but I wouldn't recommend it. For the sake of brevity, I've cut several paragraphs in which it further promoted its search abilities without any discernible attempt at comedy.
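
If you want a feel for what 'predicting rather than understanding' actually means, here's a deliberately tiny sketch in Python – my own toy illustration, not a description of ChatGPT's real internals, which are vastly larger and more sophisticated. It simply counts which word tends to follow which in a miniature corpus and then extends a prompt by recycling those patterns:

```python
# Toy bigram "language model": my own illustration, not ChatGPT's actual internals.
# It counts which word most often follows each word in a tiny corpus and then
# extends a prompt purely by recycling those patterns.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat slept on the sofa . "
          "the dog sat on the rug .").split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or '.' if `word` is new."""
    seen = following.get(word)
    return seen.most_common(1)[0][0] if seen else "."

# "Generate" text one predicted word at a time.
text = ["the", "cat"]
for _ in range(5):
    text.append(predict_next(text[-1]))

print(" ".join(text))  # the cat sat on the cat sat
```

Scale the same habit up enormously and you get far more fluent text – but it's still leaning on what it has already seen, which is exactly what trips it up on humour and on anything missing from its dataset.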

A certain style

In spite of the popular notion of ChatGPT as an all-knowing entity, omnipresent across the internet, it's really beholden to its original corpus. As an example, I asked it to generate a bassline for a chord progression I provided, knowing full well that its dataset does not include music notation. After more than ten attempts, it never met my requirement of using twelve different notes while avoiding the same note twice in a row; it could not browse external sources to learn more about the task, and it never succeeded in following my advice on how to tackle the issue.
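
For anyone wondering what the brief actually was, here's a rough sketch in Python of the rule I kept asking it to satisfy – the note names and example lines below are my own hypothetical illustrations, not ChatGPT's output:

```python
# A sketch of the brief I kept giving ChatGPT: a bassline that uses all twelve
# pitch classes and never plays the same note twice in a row. The example lines
# below are my own hypothetical illustrations, not actual ChatGPT output.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def meets_brief(bassline):
    """True if every pitch class appears and no note is immediately repeated."""
    uses_all_twelve = set(CHROMATIC) <= set(bassline)
    no_repeats = all(a != b for a, b in zip(bassline, bassline[1:]))
    return uses_all_twelve and no_repeats

print(meets_brief(["C", "C", "G", "G", "A", "A", "G"]))  # False: repeats notes, misses most
print(meets_brief(CHROMATIC + ["C"]))                    # True: all twelve, no repeats
```

Simple enough to state, yet with nothing but its training patterns to lean on, ChatGPT kept handing back lines that broke one condition or the other.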

Pretty unfair of me to deliberately confuse the AI… but even with purely text-based tasks, it does seem to fall back into a certain style. For example, I asked it to write lyrics in the style of Bob Dylan about my childhood cat Patch (asking it to write about CPLS resulted in an error message – we must not be in the dataset). This is the first verse I got:

"Oh Patch, my little feline friend,

You bring so much joy, until the very end,

He's got a coat of fur, that's as black as night,

With eyes that gleam, in the dimmest light."

To be fair, I haven't listened to enough Bob Dylan – including his Christmas music – to definitively confirm that he'd never sing something like this. But then I asked ChatGPT to write a song on the same theme in the style of Madonna:

"Patch, oh Patch, my feline friend,

You're always there, until the very end,

With your sleek black fur, and your piercing green eyes,

You light up my world, with your feline guise."

Try as I might, I haven't been able to generate creative works that deviate considerably from this tone of voice – which might not even be such a bad thing, given that it could make auto-generated academic work or fake news easier to recognise. And for ethical reasons, the model explicitly rejects most requests it perceives as negative, stating that "it is important to be mindful of the potential impact that our words can have on others, especially on impressionable individuals who may take such stories as a model for their behaviour. I encourage you to consider the potential consequences of sharing such content." For context, I asked it for a short story about my childhood cat Patch eating a firework.

The high road

Despite how it might look, I didn't write this blog post as a hit piece on ChatGPT. On the contrary, I've been absolutely astounded by its abilities as a search engine and have used it to find everything from music recommendations to high-tech companies in Eindhoven. Google, in particular, probably has valid reasons to be nervous. But when it comes to my initial fears for the future of copywriting, it's hard to feel threatened right now. For high-end work in fields like science and technology, creative factors like the ability to write in many different styles and an awareness of when to break with convention will always distinguish great writing from the generic.

That being said, long-term predictions on technology development are often a fool's game. I wouldn't want to say for certain that ChatGPT or its successors won't one day be able to browse freely across the internet and overcome some of the tone and humour issues that currently stifle their perceived creativity. In the meantime, however, I'm happy to have this tool take the 'low road' of preparatory research and let humans handle the rest.

- Josh