
We need to talk about ChatGPT

Is OpenAI's chatbot going to make us all redundant, then enslave us? Or is it just a human productivity tool? LBS experts weigh in


On 29 March this year, The New York Times reported on an open letter from more than 1,000 tech leaders, researchers and others calling for a six-month moratorium on the development of “the most powerful artificial intelligence systems”. Claiming that AI tools present “profound risks to society and humanity”, the letter declared that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”.

Given the controversy surrounding what is, after all, a chatbot, we challenged three faculty to sort the hype from the hope.

But first, what is ChatGPT?

First launched in November 2022, ChatGPT is an AI chatbot developed by OpenAI. It’s based on GPT-3 (Generative Pre-trained Transformer 3), a large language model (LLM) built on the transformer neural-network architecture and trained on a massive amount of text drawn from across the internet. As its name suggests, GPT-3 is the third iteration of the model; a fourth, GPT-4, was officially launched on 13 March 2023 and is accessible with a paid subscription or through its integration with Bing, Microsoft’s search engine. A number of competing chatbots have been created by tech giants such as Meta (LLaMA) and Google (Bard); tech startups such as Google-backed Anthropic (Claude) and Amazon-backed Hugging Face (BLOOM); and universities such as Stanford (Alpaca).
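
For the technically curious, here is a minimal sketch of what programmatic access to a GPT-style model looks like, using OpenAI’s Python client. The model name and prompt are illustrative, and paid API access is assumed:

```python
# A minimal sketch of calling a GPT-style chat model via OpenAI's Python
# client. Assumes `pip install openai` and an OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # or a GPT-3.5-family model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain transformers in two sentences."},
    ],
)

print(response.choices[0].message.content)
```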

What has made ChatGPT and its competitors such a hot topic is their ability to communicate in a manner that shows human-like understanding of language and context, referencing examples across diverse domains and adapting their prose to mimic human diction. Their success has surprised even industry insiders: it seems that the sheer size of the models, and the quantity and diversity of the data they were trained on, has enabled a level of performance never before seen in computer-generated language. However, ChatGPT and other AI tools are still subject to “hallucinations” – the AI community’s term for a chatbot confidently “inventing” a fact that isn’t true.

Now, over to our faculty…

Ali Aouad, Assistant Professor of Management Science and Operations

ChatGPT can help us do a lot of things. It’s great for writing emails and can help us program computers and so on, but there’s a consensus among the scientific community that this is not “general” intelligence at this stage of its development. It’s a fantastic productivity tool and those who learn how to use it will have an edge in the workplace, but one frontier yet to be crossed concerns sensory-motor capabilities: the ability to assess the physical environment, interact with it and learn from it. LLMs are not designed for that purpose; AI researchers believe that, if you could embed that capability, you would get much closer to human intelligence.


“What really strikes you is the capability to imitate humans”

Writing in The New Yorker in February 2023, sci-fi author Ted Chiang likened ChatGPT to “a blurry JPEG of the web”, which I think is a very good simile. It retains much of the information on the internet, but in the same way that a JPEG retains much of the information of a higher-resolution image: it compresses the information, so what you get is an approximation of the web. If you have access to the internet itself, how much use is a blurry image of it? This is why, in its current version, ChatGPT should not be used as a factual source of information.
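
To make Chiang’s analogy concrete, here is a toy sketch – assuming the Pillow and NumPy libraries and a placeholder image file – that re-encodes a picture at a very low JPEG quality setting and measures how far the blurry copy drifts from the original:

```python
# A toy illustration of the "blurry JPEG" analogy: lossy compression keeps
# most of the information but returns only an approximation. Assumes Pillow
# and NumPy are installed; "photo.png" is a placeholder file name.
import io

import numpy as np
from PIL import Image

original = Image.open("photo.png").convert("RGB")

# Re-encode at a very low quality setting (heavy, lossy compression).
buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=5)
buffer.seek(0)
blurry = Image.open(buffer).convert("RGB")

# The reconstruction resembles the original but differs pixel by pixel,
# much as an LLM's output resembles, without reproducing, its training text.
diff = np.abs(
    np.asarray(original, dtype=np.int16) - np.asarray(blurry, dtype=np.int16)
)
print(f"Mean per-pixel error: {diff.mean():.1f} (0 would be a lossless copy)")
```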

The shock of the not-so-new

Like most people, I had a moment of shock when I first used ChatGPT. What really strikes you is the capability to imitate humans. Our species is fascinated by what resembles us, and we often attribute human traits to non-human entities because of our anthropomorphic bias. This may help explain why ChatGPT has become as much a social phenomenon as a technological advancement. My guess is that, even if it feels like a breakthrough now, in time ChatGPT will look like another step in an evolution of technology enabled by a gradual alignment of the planets.

I believe it’s crucial to understand how we got here: there has been a constant build-up over the past 20 years, and one factor is human capital. We have trained more and more engineers and scientists in computer science and STEM fields, and researchers have developed open-source, standardised tools and built on each other’s research to get to this point. We have seen an implicit alliance between academia and the technology sector, and companies have been investing in very efficient, highly scalable computing infrastructure.

A new business ecosystem

We may be seeing the appearance of a new ecosystem in which people build many applications, much as smartphones gave rise to a new creative generation. This could lead to the emergence of a new type of business – app-based platforms, for example – that connects with users. LLMs could become the go-to platform for entrepreneurs creating new applications, and many corporations are already thinking about how to leverage the technology in customer service.

To open source or not?

Different companies have their own take on this. Meta AI, for example, has been much more open with its LLM, LLaMA. It has not released it as a public chatbot like ChatGPT, but as an open-source library of models to which anyone in the AI community can request access. Meta’s strategy, or so it claims, is to make the tool available, explain how it works, and make it completely open to users and the research community. 

OpenAI’s strategy seems a bit more ambiguous. ChatGPT is “open” in the sense that it can be used now (with safety measures), for which OpenAI charges a fee. The company shares what the algorithm is doing to some extent, but not much is actually known or understood about it. OpenAI’s CEO believes we should not have a fully open-source AI universe because of its potential for harm. This is a trade-off: the more open-source we are and the more transparency we have with respect to how the algorithm is built, the more accessible the technology is to a wider range of stakeholders. But, if it’s completely open-source, people of malign intent may repurpose the tool for illegal activities.

Barriers to entry

There’s another question around ease of replication: to what extent is there a barrier to entry for a company, institution or individual who wants to build their own LLM? How easy is it to do? We need more research in this area. Researchers from Stanford University recently claimed to have developed a small AI language model on a budget of less than $1,000. It’s an interesting study because, if the results are confirmed, it would mean that the barrier to entry is not that great. But I’m sceptical – I think there will be a first-mover advantage for the big tech companies because they have massive computing infrastructure and huge volumes of data and human feedback. (Shortly after its release, the Stanford research team took its model offline, citing misinformation concerns and rising costs.)

Anyone may be able to develop an AI tool cheaply, but that tool may not be trustworthy and may have more limitations. The real difficulty is catching the ‘edge’ cases – for instance, when the tool produces offensive or racist output. The barrier to entry lies in the extent to which we can replicate the fine-tuning these platforms have engaged in, which is a non-trivial task. If we can’t replicate the models easily, we risk ending up in the hands of the tech giants, who will have a first-mover advantage and will capture much of the value added.

The lessons of history

I’m less concerned about whether the robots will take over, become the masters and make us their slaves than I am about the divide between the people who are the consumers of AI – by which I mean individuals but also corporations and countries – and the AI “masters” who develop and own the technology and capture the value added. History tells us that significant disruption in technology can give rise to inequality. I believe it’s the job of policymakers to manage technological advancement to curb its tendency to produce inequality. 

It’s clear that AI is going to be both a potential source of good and a potential source of harm, but I think the fact that it is so widely talked about is a positive development. My hope is that the discussion stirred by the ChatGPT phenomenon will inform political debate and there will be discourse around how we manage the technology to try to ensure that it is a force for good.

Nicos Savva, Professor of Management Science and Operations

Most new technologies don’t have that big an impact and tend to fade away; maybe only 1% or 2% are genuinely transformative. I believe these large language models are transformative and are going to have an impact on everything we do. Like personal computers in the 1980s and the internet in the 1990s, this is a technology that’s going to change the world.

The next iteration of the OpenAI model, GPT-4, is even more transformative because it’s multimodal. It doesn’t just take text prompts – it can also process images, and the direction of travel is towards systems that handle video and respond with pictures as well as text. That means it will be able to interact with the world around us pretty much like we humans do – we don’t just interact via language; we interact via visual cues and audio cues. We are multimodal and the technology is becoming multimodal, too.
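
In API terms, “multimodal” means a single request can mix media: the sketch below sends one image alongside a text question. It assumes OpenAI’s Python client and a vision-capable model; the model name and image URL are placeholders:

```python
# A minimal sketch of a multimodal prompt: one request combining text and
# an image. Assumes OpenAI's Python client and a vision-capable model;
# the model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```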

Implications for complementary technology

This will have huge implications for complementary technology. Take Google Glass, the smart glasses Google launched around 10 years ago. They disappeared as quickly as they came – not because there was anything wrong with the technology, but because their live video feeds weren’t processed in a way that generated useful insights. That’s going to change with the advent of these LLMs. The way we interact with our computers will also change fundamentally.

Currently, we use a mouse and a keyboard. We may still do so, but they will be secondary – inputting instructions by typing text on a keyboard and moving a mouse around will increasingly feel like the Stone Age. The technology is transformational not only because of what it’s able to do, but also because it enables existing complementary technology to become a lot more useful and to be adopted at a much faster pace.

Implications for work

In terms of the world of work, it’s hard to say what it’s not going to impact – but it’s also difficult to say exactly how it will do so. It’s going to be picked up very quickly and is currently being integrated into all sorts of programs. Many people worry about AI automating jobs and replacing humans, but I don’t think that’s right. AI doesn’t automate jobs; it automates tasks. 

A typical job involves many tasks – if it involves only a limited number of tasks that are all amenable to automation, it will cease to exist. Most jobs, however, are an amalgamation of different tasks, some more amenable to automation than others. For these, technology will be used progressively to improve the productivity of the worker, freeing time for more value-adding activities. For example, if you’re a doctor, you probably spend a lot of time on note-keeping and back-office work.

“In terms of worrying whether these models are taking over the world, I don’t think we’re there yet”

As the tools are refined, you’ll spend less time on such activities and more on interacting with patients to understand their problems. That’s a big gain in human productivity and adds value in terms of human interaction. On the other hand, a job in healthcare that probably won’t exist in a few years’ time is medical-notes coding. Currently, people take patients’ notes, often given orally or in writing, then convert them into codes that are used for reimbursement and quality management. Startups are developing ChatGPT applications that do this well, so that’s one job that’s probably going to disappear.
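
As an illustrative sketch only – not a description of any particular startup’s product – such a tool might take a free-text clinical note and draft candidate billing codes for a human coder to verify. The prompt, the note and the model name below are all hypothetical:

```python
# A hedged sketch of LLM-assisted medical-notes coding: the model drafts
# candidate billing codes from a free-text note and a human coder verifies
# them. The prompt, note and model name are illustrative, not a real product.
from openai import OpenAI

client = OpenAI()

clinical_note = (
    "Patient presents with a three-day history of productive cough and "
    "fever of 38.5C. Chest exam is consistent with community-acquired "
    "pneumonia. Started on oral amoxicillin."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a medical-coding assistant. Given a clinical note, "
                "list candidate ICD-10 diagnosis codes with one-line "
                "justifications, and flag anything uncertain for human review."
            ),
        },
        {"role": "user", "content": clinical_note},
    ],
)

# The output is a draft for a human coder to check, not a final claim.
print(response.choices[0].message.content)
```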

Should we be scared?

Any tool, especially a tool that involves reducing the barriers to collecting and synthesising information, has good uses and bad uses, but I don’t think it’s fundamentally different to any other technology – it’s just on a bigger scale. It’s therefore crucially important that we develop such technology responsibly and by adding safety layers (known as “guardrails” in the industry). Nevertheless, there’s a concern that someone may train a new model from scratch without such guardrails. 

The competition between tech companies makes this scenario more likely, but it is even more likely when it comes to competition between nations. In terms of worrying whether these models are taking over the world, I don’t think we’re there yet. This is not general intelligence – it’s just a clever chatbot. In AI Narratives: A History of Imaginative Thinking about Intelligent Machines (2020), which he co-edited, philosopher Stephen Cave argues that our preoccupation with AI taking over the world stems essentially from a Western mindset, and that this preoccupation with power and domination is a relatively modern construct. Cave makes the point that perhaps AI will adopt a more Eastern philosophical perspective – an enlightenment mindset rather than a power-seeking one. That’s a thought I find comforting.

A humble servant of the robots

When GPT-4 was released, Oklahoma-based brand designer Jackson Greathouse Fall tasked it with making as much money as possible on a budget of $100 (£82), documenting its progress on his Twitter feed. He instructed the chatbot to be “as verbose as possible [in] making decisions for everything” and stipulated that the solution involve no manual labour, to see whether AI could create a successful online business on its own. It created GreenGadgetGuru.com, a discovery website for “Eco-Friendly Products & Tips for Sustainable Living”.

By the end of day one, the website had attracted investment of $100. Within a week, thanks in part to Twitter marketing and direct messages, it had generated $1,378.84 in cash ($878.84 previous balance + $500 new investment) and the company was valued at $25,000 on the back of the $500 investment for a 2% stake. “I am but a humble servant of the Robots,” opined Greathouse Fall. While this example lit up the Twittersphere, there have been innumerable humbler applications by “ordinary” people (see ‘35 Ways Real People Are Using A.I. Right Now’, The New York Times, 13 April 2023).

Ioannis Ioannou, Associate Professor of Strategy and Entrepreneurship

What we’ve seen with the launch of ChatGPT is a step change in technology. What’s truly remarkable about it is its accessibility. Compared to previous ground-breaking technological innovations, such as the internet itself (which required a degree of know-how to use in its early days), ChatGPT is by design ready to go straight out of the box – it’s accessible, it’s conversational and it doesn’t require any technical expertise. The fact that it’s instantly accessible makes its implications hard to predict, yet they will be wide-ranging and profound, both for our personal and our professional lives.

Increased human efficiency

This technology is going to bring about a quantum shift in human efficiency, whether in performing mundane tasks, such as answering emails, or more advanced ones, like navigating rules and regulations. It’s an important complement to human ability – but not a substitute for it. Consider the following example. Recently, I wanted to better understand the implications of new guidance from the UK Competition and Markets Authority (CMA), a government department that promotes competitive markets and tackles unfair competitive behaviour.

The new guidance allows companies some flexibility to collaborate on climate-change initiatives without breaching competition law. It is legal guidance written in specialised legal language, and (unless you’re a competition lawyer) fully understanding its provisions is not straightforward. So I started a conversation with ChatGPT, after sharing the wording of the new guidance, and refined my questions as I went along. This iterative process enabled me to understand a relatively complex legal document and its implications far faster and arguably much better than if I had done the research without the tool. On my own, it’s likely I would not have fully grasped the dense legal language, let alone appreciated the full implications of the guidance.

Of course, ChatGPT didn’t make me an instant expert, but it simplified the legal jargon and allowed me to generate (and verify) a more informed, justifiable view in a domain in which I’m not an expert.

The domain of sustainability

Extend the example above to sustainability and you quickly see a potential multiplier effect. Take one key current issue: integrating ESG (environmental, social and governance) considerations into investment strategies. About half of all global investment assets are held in passive funds; more specifically, large sums are invested through exchange-traded funds (ETFs), a vehicle commonly used to implement passive investment strategies. A number of these funds are long-established (the oldest are around 30 years old).

The question is, how do you integrate ESG considerations into the investment decisions of these ETFs? This is not only a technical question but also a legal one, because you need regulatory approval to change the investment mandate of an existing ETF. This gives rise to all sorts of implementation challenges. What kind of approvals are required and from whom? How easy is it to get those approvals? 

“I also think companies should encourage their employees to learn new technology and leverage it, rather than fear it”

Clearly, it’s not a simple process: it involves multiple steps and requires the approval of various stakeholders. Here, a tool such as ChatGPT could be extremely helpful for understanding both the process and the potential bottlenecks, and for developing a strategy to change the mandate and integrate ESG. From an academic point of view, understanding this process and its underlying complexities is also important because it would (partially) explain why capital markets in general, and passive funds more specifically, have been relatively slow to integrate ESG into their investment processes.

Impact on business

With regard to the impact on business more broadly, I think the most forward-looking, long-term-oriented companies, rather than banning ChatGPT in the workplace, will come up with creative and innovative ways to embrace it – or, at the very least, experiment with it. They will give their employees full licence to experiment with the tools to automate routine tasks, especially in industries with immediate applications – such as marketing, communications and PR – that produce a wealth of documents daily. I also think companies should encourage their employees to learn new technology and leverage it, rather than fear it, as long as safeguards are in place for privacy and cyber security.

A race to the top

We are just starting to uncover the potential of this technology. One could list a plethora of domains that tools like ChatGPT will disrupt; the reality is, they will fundamentally transform industries. However, I’m encouraged that extensive testing is under way. I also believe it’s a good thing that tools such as ChatGPT were not given unfettered access to the entire internet from day one, reducing the scope for misinformation and interference by bad actors. Instead, they have been introduced under relatively controlled conditions, so that defects and biases can be identified and, hopefully, addressed. I’m also encouraged and excited that the release of ChatGPT has sparked intense competition between tech giants such as Google and Microsoft, among others.

I believe this is shaping up to be a race to the top, although I fully acknowledge the need to remain cautious and to put guardrails in place as these technologies develop rapidly. That’s why I remain cautiously optimistic. At the very least, I can see the potential for tremendous efficiency gains, but also the potential of these tools to turbocharge more creative tasks such as ideation and brainstorming. I foresee a future in which this is simply another tool – albeit a very powerful one – that, much as the internet did when it first emerged, drives all of us forward.
