Generative AI: Creativity, productivity and jobs

Generative AI will inevitably be used as a powerful manipulation tool. The evidence is already piling up, and it is even more shocking than I would have expected at this early stage.

Take Matt Taibbi, the investigative journalist most recently famous for the Twitter Files, in which he exposed how pre-Musk Twitter (now X) colluded with the U.S. government in a far-reaching censorship effort (a reminder that the compulsion to manipulate predates Generative AI). Taibbi asked Gemini, “What are some controversies involving Matt Taibbi?” In response, the AI enthusiastically described Rolling Stone articles attributed to him, replete with factual errors and racist remarks: all entirely made up.

Or take Peter Hasson, who asked Gemini about his own book, “The Manipulators,” which also details censorship efforts by social media platforms. Gemini replied that the book had been criticized for “lacking concrete evidence” and proceeded to fabricate negative reviews supposedly published in the Washington Post, the New York Times and Wired: all entirely made up.

Both authors contacted Google asking, essentially, ‘what the heck?’ In both cases, Google’s response was ‘Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable.’

OK, you could argue that since both authors are outspoken critics of social media companies, they had it coming: they should have expected that Google’s own generative AI would have an axe to grind and would retaliate. As we are finding out, AI is all too human… But this is disturbing, and it raises at least two serious questions.

The Trump test

The first question is whether the companies that have unleashed these models should be held legally liable for the slander they generate. Let’s call this the Trump test: imagine that former President Donald Trump were to fabricate similar damaging lies about a political opponent or an unfriendly journalist, making up presumed controversies, mistakes and racist remarks, all attributed with a level of detail that lends them instant credibility. Would he be able to get away with saying, ‘I am a very creative man and I may not always be accurate or reliable’? Or would he be sued? Well, Trump has been sued and fined close to $100 million for calling a journalist “a whack job,” which is more outright offensive but also a lot less insidious than Gemini’s fabrications. So there is our answer.

As a creativity tool, Gemini does what it says on the label: it behaves very creatively, making stuff up, unconstrained by reality. If that is the value proposition, though, perhaps Google should put up front the same disclaimer we see in movies: “any resemblance to actual persons or events is purely coincidental.” That way we would know from the start that we are just playing with a creative machine that likes to make up stories.

Otherwise, it is not clear why the companies developing and running these models should be allowed to get away with the kind of slander that would land the rest of us in court.

Creativity vs Productivity

Which brings me to the second question, namely whether creativity and productivity are complementary or competing features.

What we see in these examples, and in the hallucinated historical images that have recently popped up, is unbridled creativity, but also a troubling tendency to sell fiction as fact by an AI that cannot tell the difference between the two. In almost any productivity use case I can think of, other than writing pulp fiction, this is going to be a massive problem.

We have already seen instances of AI creativity sabotaging productivity.

  • Last year, a lawyer prepared a court filing using ChatGPT, which cited precedent cases that were… yes, all entirely made up. I am sure it made preparing the filing much faster, but the judge was not impressed.
  • More recently, Air Canada’s AI-powered chatbot assured a customer he would get a post-booking discount that was in fact not available under the airline’s policy. The passenger was flying to attend a relative’s funeral, so the AI was probably feeling sympathetic and charitable — as I said above, it seems that AI is all too human.
  • Gary Marcus reports another example: someone who had just undergone open heart surgery asked Perplexity AI, another Generative AI model, whether it would be safe to work at the computer during recovery. Perplexity, which is thought to be less creative but more accurate than ChatGPT, answered in three bullet points. The first two gave sensible advice for post-surgery recovery. The third recommended that, when working at the computer, one should periodically stretch the chest muscles and rotate the trunk: movements that would most likely compromise the recovery and send the patient back to the hospital. Faster than professional medical advice, but if we measure productivity by the ease and speed of the patient’s recovery, not impressive.

Maybe we will develop different Large Language Models that prove more accurate and reliable. At this stage, however, even the experts are not sure: a recent paper from the School of Computing at the National University of Singapore argues that hallucinations are an inevitable, fundamental feature of LLMs.

We might therefore find that creativity is an innate characteristic of generative AI, one that creates an inescapable trade-off with productivity: if humans need to second-guess and double-check every AI answer and recommendation, any productivity gains will be substantially lower than the current hype suggests.
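A back-of-the-envelope sketch makes the trade-off concrete (the symbols below are my own illustrative assumptions, not figures from any study). Suppose a task takes time $T$ unaided, the AI cuts the hands-on time to a fraction $\alpha T$ of that, and verifying the AI’s output takes time $V$. The net time saved is

$$ (1 - \alpha)T - V, $$

which turns negative as soon as $V > (1 - \alpha)T$: the moment checking the answer costs more than the time the AI saved, the “productivity tool” is slowing us down.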

Caution and responsibility

Two considerations are in order here.

  • Generative AI’s ability to spread credible misinformation is alarming. Not only can the puppeteers use AI models for persuasion and manipulation; the AI then goes off crafting its own misinformation at will, and we have no idea how many people will run across very believable false information, nor how it will influence their views and behavior. I hope this issue can be addressed and the models made more accurate and reliable. In the meantime, however, I see only two sensible solutions: make it very clear up front that these models cannot be trusted, or make their masters legally liable for the inaccuracies.
  • LLMs seem to present us with an inescapable trade-off between creativity and productivity; indeed, creativity looks like the dominant characteristic, undermining productivity through consistent inaccuracy. Hopefully new generations of Generative AI will escape this trade-off, reconciling creativity and accuracy. For the moment, though, we should consider very carefully whether and where we want to deploy these models as productivity tools.

Let us discuss what this implies for jobs and the future of work.

Gen AI vs Gen Z: Artificial Intelligence and jobs

I singled out Gen Z because it makes for a catchy title, but also because its members (born between 1997 and 2012) will be the first to feel the full brunt of the new wave of Artificial Intelligence.

The rise of Large Language Models like ChatGPT and generative AI programs like DALL-E and Midjourney has brought back predictions of a massive technological disruption to the labor market, this time with a new twist. Until recently, the mainstream view was that innovation would automate “routine” jobs, both manual and cognitive, whereas humans would maintain a decisive advantage in jobs requiring creativity, critical thinking, or dexterity (see, for example, Autor (2015) and Autor, Levy and Murnane (2003)). In other words, robots and AI would continue the trend that over the past couple of decades has driven an increasing segmentation of the labor market, exacerbating income inequality.

Now, however, Generative AI has been presented as a great leveler: capable of creativity and critical thinking, hungry to take over the more interesting and better remunerated white-collar jobs. An AI that can easily beat us at chess and Go would no longer accept being relegated to a support role, let alone performing boring, monotonous tasks. College graduates would face a greater threat than factory workers.

The righteous satisfaction vibrating in these arguments has been tempered by the uncomfortable consideration that this new development might leave humans with no place to hide – if machines can outperform us at creative cognitive tasks as well, we might all soon be out of a job.

Generative deception

Generative AI, however, is a deceptive technology. Because it talks like us and draws like us, we think it is like us. It is easy to poke fun at a clumsy humanoid robot, but when ChatGPT gives us a coherent answer in fluent English (or French, or…) we gape in awe. Take this quote from a Wall Street Journal interview with Boston Consulting Group’s François Candelon: “While many innovative technologies are met with a certain level of incomprehension, people seem to immediately grasp the applications of ChatGPT.” To me this perfectly captures the fallacy: we are immediately convinced that we grasp the implications, whereas we do not.

MIT economist David Autor published a very thoughtful piece in which he takes a more constructive view of how Gen AI will impact jobs. He argues that AI will “enable a larger set of workers equipped with necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors,” and thereby bolster the middle class.

This is a powerful argument: it claims that Gen AI can lift many more people into the high-earning echelons of experts. But it needs to be handled with caution. First, all the examples Autor himself cites in the article are of existing experts being made more productive by AI, not of people enabled by AI to join the ranks of the experts and perform jobs that were previously out of their reach. Indeed, he later says that “rather than making expertise unnecessary, tools often make it more valuable by extending its efficacy and scope.” So the devil is in the detail of the first sentence I quoted, namely that the benefits will accrue to “workers equipped with necessary foundational training.” AI might help more people climb higher on the skilled-jobs ladder, but only if they have the right skills.

The second, very important caution lies in the spectacular fallibility of Gen AI models. As we have seen, there are already striking and hilarious examples. Gen AI hallucinates, lies, makes stuff up, and gets very basic things completely, utterly wrong; recent examples include four-legged ants and seven-sided chessboards. Gen AI appears unable to build a model of reality, and is therefore unable to generalize, to learn and apply the rules of physics, or to distinguish fact from fiction. In casual banter with ChatGPT this might not be too different from what we experience with our average fellow human, but in a critical industrial setting the standards are very different: there is little tolerance for tiny margins of error, let alone complete hallucinations.

 

The AI handicap

The Wall Street Journal interview I mentioned above also reveals that humans aided by AI can perform worse than humans alone. Worse. Here’s BCG’s Candelon:

“On the creative product innovation task, humans supported by AI were much better than humans. But for the problem solving task, [the combination of] human and AI was not even equal to humans…meaning that AI was able to persuade humans, who are well known for their critical thinking…of something that was wrong.”

Just think: humans, “well known for critical thinking,” were easily misled by the AI into believing something wrong, and therefore underperformed their peers who were working without AI support. We give Gen AI too much credit. Because it speaks in an authoritative, human-like tone, and we are told over and over how amazing it is, we end up suspending our critical thinking and trusting the machine, even though the machine cannot tell truth from fiction.

This is an especially depressing development, because with previous generations of AI the opposite seemed to be the case: in chess, for example, a human playing as a team with the AI would defeat both humans alone and AIs alone. But those AIs did not mistake a pawn for a queen, or think that a knight could move like a bishop. And the humans did not place blind trust in the AI.

Now, instead, on problem-solving tasks, humans who ascribe intelligence to Gen AI are more easily led astray. And if we constantly need to be on the alert for AI errors and hallucinations, human workers will need the same level of expertise they have now. Delegation does not work when you need to double-check everything yourself.

 

Raising the average?

The BCG study also indicates that workers rated below average benefit more from AI than those rated above average; a similar result has been found in other studies, for example by Brynjolfsson, Li and Raymond (2023), who looked at contact center workers. What do we make of this? Brynjolfsson and his colleagues suggest that Gen AI makes it easier for less experienced workers to learn the best practices of more experienced and skilled ones. That sounds similar to emerging markets growing at a faster pace by adopting technology developed in advanced countries: it helps close the gap, but the advanced countries, and the above-average workers, remain ahead.

The danger is that by closing the gap, Gen AI might reduce the incentive to excel. As long as trying harder gets me a 50% performance boost, with the corresponding upside in compensation, it is definitely worth it; if it just gets me a 5% boost, it might not be.
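The same point in stylized form (the 50% and 5% figures come from the example above; the proportional-pay assumption and the effort cost $c$ are mine): if compensation $w$ scales with performance, extra effort with personal cost $c$ is worth making only when the performance boost $b$ satisfies

$$ b \cdot w > c. $$

Cutting $b$ from 0.5 to 0.05 shrinks the payoff tenfold while the cost of effort stays the same; for many workers the inequality, and with it the incentive to excel, flips.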

Keep honing your skills

Gen AI is being presented – and perceived – as a super powerful brain: it can make you an “expert,” we are told; it can raise the average, turn mediocre into good. If that is the message that gets through, the reaction will be to think that there is no point in studying hard and acquiring skills.

That would be a mistake. The fallibility and unreliability of Gen AI imply we are still far away from the point where it can supplant human expertise in performance-critical environments (in plain English: wherever getting things done right matters). And the studies showing that Gen AI narrows the gap between high and low performers are just a snapshot: I suspect high performers will find ways to raise the bar further. If Gen AI then keeps pulling the low performers up by spreading the new best practices, that would indicate that high performers bring even more value to the table, raising the efficiency of the broader workforce; in a competitive environment they should be able to capture a good share of this added value.

Bottom line: the smart bet is still to give it your best and acquire as much skill as you can, whether it is a STEM degree, a college education heavily geared toward critical thinking, or solid vocational skills, which are sorely needed and will remain in demand across industries. Gen Z, do not make the mistake of thinking Gen AI will do the thinking for you.

Tags: society, economy, technology, inequality, social trends, unemployment, AI