How ChatGPT won't change the future
Some writers may lose their jobs. But possibly not in the way we expect.
This newsletter doesn’t contain any trends, but probably touches on one of the biggest buzzes of 2022. It also has nothing to do with Christmas. I’d be happy if somebody pledged me a Terry’s dark chocolate orange though: you can’t get them in Australia.
Onwards.
The AI revolution is remarkably similar to other work revolutions (or how I learned to stop worrying about ChatGPT).
It’s tradition that whenever a new technology is released, whatever comes before must be dead.
OpenAI's ChatGPT has, depending on who you follow on social media, made copywriters, SEO agencies, engineers, and developers redundant, and killed off Google for good measure.
Part of the excitement around ChatGPT is that - for a machine - it is genuinely impressive. It constructs sentences and responds to prompts in a way that is more human and coherent than any machine has managed before.
It's shiny, new and easily understood - not to mention fun to play with. It's undoubtedly better than what came before. But is it actually good? What constitutes good? And what exactly is the problem it's trying to solve?
These are all relative terms. Writing good enough to score a high mark in a university essay may contain enough inaccuracies to earn a financial services organisation a regulatory fine.
Similarly, a piece of ChatGPT-written code may be absolutely fine at 90% correct or may cause a critical error at 99% accuracy. In the (understandable) rush to show what AI can do, we don't really ask what it should do.
A ChatGPT-written marketing strategy may help organise your thoughts, but it hasn't actually undertaken any company- or category-specific research, for example. And what happens if your competitor has asked the same question? That's not unlikely, given many categories are essentially homogeneous.
Or take every SEO agency asking ChatGPT to write copy that ranks for the same keyword. The prompts may differ, but if the outputs are largely similar, you're employing robots to speak to other robots in an effort to sell a product to humans.
Essentially, this is a long way of saying ChatGPT is both new and exciting, but probably not quite as exciting as our brains are wired to think. It's what Rory Sutherland calls the "new-is-better" bias.
We simply pay disproportionate attention to them, and consequently rate them more important than they really are. But then something else happens, a trick of the mind. We start to pay far more attention to the specific ways in which the new is better than the old, to a point where we drown out any thought of how the old might have certain advantages that the new does not possess.
Useful robots
This doesn't mean that AI isn't useful or won't change certain industries. CAD engineering doesn't remove the need for tunnels to be dug, but it does help us dig better tunnels, and it still requires human input. In the same way, modern tunnelling technology is much better than somebody with a spade, but humans are still needed to dig the tunnels.
ChatGPT is different from what came before insofar as it links words together in a way that is far more natural, making it harder to tell human from computer. But we tend to look only at the human aspect, rather than the machine, or the complementary elements between human and machine.
In this respect, it's not hugely different from other AI, it just presents itself in a different way.
At its most basic, AI has two main useful functions. The first is automating or speeding up tasks that would take a human a long time, time that could be better spent elsewhere.
If you want to A/B test a number of headlines, then AI that takes its cues from natural language can think up more options more quickly than a human can. If you want to undertake qualitative research, then AI can't ask the questions, but it can speed up transcribing the notes.
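To make the headline example concrete, here's a minimal and entirely illustrative sketch of using a language model as a headline brainstorm, assuming the (pre-2023) OpenAI Python client's completions API, a placeholder API key, and a model name that may since have been superseded - none of which the newsletter itself relies on:

```python
import openai

# Placeholder credential - an assumption for illustration only
openai.api_key = "YOUR_API_KEY"

# Ask the model for several headline variants to feed into an A/B test
response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; swap for whatever is current
    prompt=(
        "Write five alternative headlines for an article about "
        "private health insurance in Australia. One per line."
    ),
    max_tokens=150,
    temperature=0.9,  # higher temperature = more varied options
)

# Split the completion into a list of candidate headlines
headlines = [h.strip() for h in response.choices[0].text.splitlines() if h.strip()]
print(headlines)
```

The human's job is unchanged: pick, edit and actually run the test. The machine only speeds up the brainstorm.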
The second function is analysing large data sets to find patterns and details that humans may not notice or may take a very long time to find. This can take many forms, but it's best viewed in the same way as AlphaGo's famous Move 37, a move so unexpected that it flummoxed Lee Sedol, then the world's best Go player.
Many people take AlphaGo's victory over Lee Sedol as proof that machines will assert dominance over humans, but it's often forgotten that Lee Sedol also surprised AlphaGo with Move 78, making a decision the machine couldn't predict.
The same scenario can be played out in other applications of AI: machine spots something a human has never seen, human builds on it.
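To make that second function less abstract, here's a minimal sketch of the machine-flags, human-investigates loop, using scikit-learn's IsolationForest on toy data. The tool choice and the data are my assumptions; the newsletter names neither:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy data standing in for a large business data set (an assumption)
rng = np.random.default_rng(0)
normal = rng.normal(loc=0, scale=1, size=(1000, 5))
odd = rng.normal(loc=6, scale=1, size=(5, 5))  # the "Move 37" rows
data = np.vstack([normal, odd])

# The model flags rows that don't fit the overall pattern (-1 = anomaly)...
flags = IsolationForest(random_state=0).fit_predict(data)
print(np.where(flags == -1)[0])

# ...and a human then decides whether each flag is insight or noise
```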
Less useful robots
Enthusiasts of ChatGPT would also do well to note its limitations. Plenty of people have called out how ChatGPT gets a lot of information wrong. It particularly seems to struggle with biographies, for example.
It's why, at this stage, ChatGPT is unlikely to displace Google - although it doesn't mean that a form of AI won't disrupt the world's premier search engine.
Google search can be wrong but it presents plenty of options and leaves the user to decide how trustworthy the information is. Somebody searching for health insurance in Australia with some knowledge of the topic quickly learns to discount results from American websites as the product is structured differently.
The output of ChatGPT, though, is a black box. How did it arrive at this conclusion? What sources did it use? Just because a machine can present "facts" coherently, does that mean we should take them at face value?
If it uses the same sources as Google, then it may give information that is simultaneously true (about the US system of health insurance) and false (if the user was asking about the Australian system). Both answers can be factually correct, but one is wrong for the user.
This is where human input comes in, even if only at the sense-checking or fact-checking stage, or at the query-writing stage. As anybody who has written a boolean search query will know, you often need several rounds of revision before landing on a good set of results.
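A hypothetical example of that revision loop, reusing the health insurance search from above. The terms and exclusions are illustrative, not a query I've actually run:

```
# Round 1 - too broad, mostly US results
"health insurance" AND comparison

# Round 2 - anchor to the market
"health insurance" AND (australia OR "private health cover")

# Round 3 - exclude the noise the earlier rounds surfaced
("private health insurance" OR "hospital cover") AND australia
    NOT (usa OR medicaid OR "affordable care act")
```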
Then there are also the more concerning elements of AI, which is only as good as both the data it's trained on and the bias - both conscious and unconscious - of the humans who help train it.
It's why Amazon's AI recruitment tool was biased against women: nobody considered that men were more likely to already be employed in tech, and how that would skew the data set.
And it's why AI can be fooled with misinformation in the same way that humans can. Bad actors will find better ways to use AI for wrong than making a machine confuse a panda for a gibbon.
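For the curious, the panda-to-gibbon confusion comes from adversarial examples, most famously Goodfellow et al.'s fast gradient sign method. Here's a minimal sketch in PyTorch, assuming a pretrained classifier and using random placeholder data where the famous panda photo would go:

```python
import torch
import torchvision.models as models

# Assumed setup: a pretrained ImageNet classifier and a preprocessed image
# tensor. Random data stands in for the panda photo.
model = models.resnet18(pretrained=True).eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)

# Whatever the model currently predicts plays the role of "panda"
original_label = model(x).argmax(dim=1)

# Take the gradient of the loss with respect to the input, not the weights
loss = torch.nn.functional.cross_entropy(model(x), original_label)
loss.backward()

# FGSM: nudge every pixel one tiny step in the direction that increases loss
epsilon = 0.007
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

# The change is imperceptible to a human but can flip the prediction
print(model(x_adv).argmax(dim=1).item(), "vs", original_label.item())
```

Defending against exactly this arms race is what the adversarial machine learning expert in the next paragraph gets paid for.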
If you were to bet on two careers of the future, becoming an AI ethicist or an adversarial machine learning expert might be a useful way to futureproof your employability, as you help machines dig better tunnels.
Yes, but what about creativity?
All of which is well and good, and it's often on the individual if they choose to trust the accuracy of a machine without checking the output. But plenty of the focus on ChatGPT has been on the creative industries. Why would you need a writer and a designer if you can use ChatGPT and DALL-E together?
Again, the answer is the same. There are some things AI can do more easily and quickly than a human, and some it struggles with. With the right input, you can get ChatGPT to combine two unlikely elements to comedic effect (Gordon Ramsay's guide to tax returns was quite amusing).
But the AI wouldn’t necessarily think to write a tax return guide in the style of Gordon Ramsay without prompting.
More prosaically, I've managed to get ChatGPT to create some very well-written disclaimers that our compliance unit have been impressed with. And schools and universities seem concerned about AI in essay writing, although perhaps this is an opportunity to rethink approaches to learning and assessment.
But when it comes to actual copywriting... I've tried giving ChatGPT the same brief I would give a freelance writer, and what comes back is OK. It's not dreadful, but if a human copywriter submitted it, I'd assume they were either very junior with minimal training or priced at the lower end of the market.
In other words, it may do what many other new technologies have done: put some people at risk of losing work while leaving others untouched. Does quality win out? Maybe?
I asked ChatGPT to write a match report of Germany's exit from the 2018 World Cup at the hands of South Korea. This is what it came back with.
On June 27, 2018, Germany suffered a stunning defeat to South Korea in the 2018 FIFA World Cup, losing by a score of 2-0. The result was a shock to many, as Germany was one of the favorites to win the tournament and had won the previous World Cup in 2014.
The match was played at the Kazan Arena in Kazan, Russia, and was a must-win for both teams, as they were both fighting for a spot in the knockout rounds. South Korea came out strong in the first half and took the lead in the first minute of injury time, when Kim Young-gwon scored after a VAR review.
Germany pressed hard in the second half, but couldn't find a way to break through the South Korean defense. In the closing minutes of the match, South Korea scored again, this time through a counter attack, to secure the 2-0 victory.
The result marked the first time in 80 years that Germany had been eliminated in the group stage of a World Cup, and was a major disappointment for the team and its fans. It also marked the end of an era for Germany, as several key players, including captain and goalkeeper Manuel Neuer, announced their retirements from international soccer after the tournament.
Despite the defeat, South Korea can be proud of their performance in the World Cup, as they fought hard and earned a place in the knockout rounds for the first time in their history.
In contrast, here's Jonathan Wilson's match report from the same game:
This, then, is how the world ends, not with a bang but with a whimper. There are certain events so apocalyptic that it feels they cannot just happen. They should be signalled beneath thunderous skies as owls catch falcons and horses turn and eat themselves. At the very least there should be a sense of fury, of thwarted effort, of energies exhausted. And yet Germany went out of the World Cup in the first round for the first time in 80 years on a pleasantly sunny afternoon with barely a flicker of resistance. There was no Sturm. There was no Drang.
Germany had come to this World Cup not merely as defending champions but also as winners of the last summer’s Confederations Cup – with what was in effect a second string. The overhaul of their academy system was the envy of the world, talent production on an industrial scale. And yet, called upon to score a goal against a disappointing South Korea side who had already lost to Sweden and Mexico, that had lost in qualifying to Qatar and China, they struggled to create a chance.
In the end it was VAR that undid them, Kim Young-gwon’s shot that had initially been ruled out for offside given as a goal when it was realised the ball had been played to him by Toni Kroos. And then, even after that, there was a beautiful farce of a goal, Manuel Neuer caught in possession miles upfield as Son Heung-min chased on to Ju Se-jong’s long ball and rolled the ball into an empty net. It was as though football itself was having its joke, the sweeper-keeper who had been such an asset four years ago turned into a liability.
Germany are the fourth of the last five world champions to go out in the group phase but this was as limp a defence as any side had managed. There was no defining defeat, as Spain had suffered to the Netherlands, just a whole load of baffling mundanity. There was a chance, three minutes from time, to steal a goal as they had stolen a late winner against Sweden in the second game but, presented with a free header eight yards out, Mats Hummels somehow misjudged his effort to the extent that the ball looped wide off his shoulder.
The first may suffice for anybody who just wants to find out the information. That in itself may be enough. But I know which report and writing I'd pay for if it were behind a paywall. ChatGPT may save companies money on their copywriting budget. Whether it will make them money is another question entirely.
A bonus thought I couldn't fit in, from Will Jordan, which makes a similar point to the above, but slightly more pithily.
Additional recommended Christmas reading
Because we all have a little bit more time on our hands, right?
US Treasury Market Black Swans
Concoda is one of the more interesting economics newsletters out there. Here’s what they think we should be worried about in the US economy. LINK.
My life as a chatbot: working as an AI backup
Laura Preston's account of working as the person who mops up the requests a chatbot doesn't understand is a must read. It touches on a problem that's been around as long as businesses have been experimenting with chatbots: business processes are usually logical; humans aren't. As a side question, I'd be very curious to know whether the combination of human + machine is more effective against a number of metrics (performance, cost, repeat purchase) than human or machine alone. LINK.
The limits of branded content
Branded content and media plays, done well, can add a lot of value. The trouble is that they're rarely done well and rarely fit a market need. Companies that are good at making entertaining commercials may not have a market for an entertainment streaming play. Probably one of the few brands (if not the only one) that's really perfected brand-as-media is Red Bull, and that comes from an incredibly clear strategy. LINK.
Coffee in the metaverse
Starbucks has launched a Web3 loyalty play. Given that Starbucks sells around 4m cups of coffee a day in the USA alone, I'm not quite sure how a limited-audience, NFT-focused play adds any value. Some companies' metaverse plays, like Nike's and Gucci's, make a degree of strategic sense. I'm not quite so convinced about virtual tours of coffee farms and "immersive coffee experiences", or why invites to events couldn't be done via existing loyalty programmes. After all, these people probably don't need convincing to make their next coffee a Starbucks. LINK.
How successful are branded Roblox experiences?
As a partner piece to virtual coffee, an interesting dive into which brands seem to be attracting repeat visitors in Roblox and a bit of thought as to why (credit: found in Roberto Kusabbi’s newsletter). LINK.
The robot cleaner, a picture of a toilet, and a very murky world of data privacy
What classifies as data and who owns it? Internet of Things (IoT) devices send a lot of data back to be reviewed by humans, and that data can then be used in ways it was never intended for. Case in point: a Roomba robot vacuum took a picture of a woman on the toilet, which was sent back to an AI firm for contract (human) workers to label to help train the Roomba. Those pictures then ended up in a Facebook group. When it comes to things like this, my general rule of thumb is to think of the worst thing somebody could do and then go a little bit further. Then work backwards to find ways to stop it happening. LINK.
The theme of this week's newsletter seems to be people meeting machines, so I've chosen Jennifer by Everything Everything to play us out this week. Singer Jonathan Higgs used AI to co-write some of the lyrics on the recent Raw Data Feel album, although in places it's not really distinguishable from the band's usual output (and makes far more sense than, say, Michael Stipe at his most opaque). And continuing this newsletter's theme, the music probably lets Raw Data Feel down more than the lyrics. Jennifer is a standout track though, with a Peter Hook-influenced bassline and Higgs' trademark falsetto combining to good effect.