
Are there hidden trade-offs in AI-enabled consumer experiences?
AI has the potential to make our lives exponentially easier. From robots that do the work of teams to fitness trackers that tell us how much weight to lose; from apps that monitor our circadian rhythms to dating algorithms that can assess the compatibility of a potential partner, artificial intelligence has the precision, speed, accuracy and personalisation to dramatically enhance the way we work, live and play. Little wonder it has become so ubiquitous. Effectively the stuff of science fiction only a few decades ago, AI today is reshaping organisational culture – as well as the consumer experiences organisations deliver.
We know about its advantages. We’ve also heard about the risks. Plenty has been written about the potential for huge job loss as the machine replaces the man, the increasing threat of sophisticated cyber-attacks and even the possibility of super-systems going rogue. But how much do we understand about the more day-to-day risks or costs that the use of AI imposes on us as consumers? What are the trade-offs we experience when machine intelligence is embedded in the products and services we use every day?
A team of researchers from Erasmus University in the Netherlands, Ohio State University, Canada’s York University and London Business School analysed a comprehensive body of relevant research that explores AI-consumer interactions through two lenses, psychological and sociological, to get a better sense of what we gain and lose from AI. We identified four experiences that emerge from these interactions and, for each experience, we examined the cost-benefit trade-offs that should be on the radar of managers and developers alike. We list these below, together with suggestions and recommendations on how to mitigate negative outcomes.
Artificial intelligence uses algorithms to process large amounts of past data and identify patterns or features in that data. It learns from these patterns, using them to make predictions about future behaviour that are generally accurate and incredibly quick.
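To make that mechanism concrete, here is a minimal, purely illustrative sketch (not drawn from the research itself) of what “learning patterns from past data to predict future behaviour” can look like in practice. The dataset, features and scenario are invented, and scikit-learn’s logistic regression simply stands in for whatever model a real system might use.

```python
# Illustrative sketch only: a toy model that "learns" from past consumer data
# and uses the patterns it finds to predict future behaviour.
from sklearn.linear_model import LogisticRegression

# Invented past data: [age, monthly visits, past purchases] for six consumers
past_behaviour = [
    [25, 12, 3],
    [41, 2, 0],
    [33, 8, 5],
    [57, 1, 1],
    [29, 15, 7],
    [48, 3, 0],
]
# Whether each consumer responded to a personalised offer (1 = yes, 0 = no)
responded = [1, 0, 1, 0, 1, 0]

# The model identifies patterns in the historical data...
model = LogisticRegression().fit(past_behaviour, responded)

# ...and uses them to predict how a new consumer is likely to behave.
new_consumer = [[35, 10, 4]]
print(model.predict(new_consumer))        # predicted response (0 or 1)
print(model.predict_proba(new_consumer))  # predicted probability of each outcome
```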
But the data AI captures belongs to us, to consumers. The information parsed is ours – our personal choices, our preferences and our decisions. This is where the tensions and trade-offs emerge.
AI captures our data all the time. And it uses this information about us and our environment to create pleasing experiences – personalised or customised services, information or entertainment. Google’s Photos app, for example, allows Google to capture our memories, but in return offers to take all of the cognitive legwork out of related decision-making: how we manage, store or look for our photos and albums. We get a personalised service without incurring any mental or affective fatigue. But the research shows that data capture can also drive feelings of exploitation – a sense that we are somehow monitored or controlled by systems we don’t understand, and that we have lost ownership of our personal information. This is down to the intrusiveness of such data capture and, at the same time, the lack of transparency and accountability surrounding the ways AI can aggregate our data.
So what can managers do to mitigate this effect?
“The potential of AI is undeniable. But so too are the dangers of oversimplification”
Anyone with a Netflix or Amazon account will routinely receive recommendations about which films to watch or products to buy. To produce these ultra-customised recommendations, AI uses individual and contextual data to classify individuals into specific consumer types. But the danger here is that classification walks a very thin line between a customer feeling understood and feeling misunderstood. Consumers’ perception of being classified can reduce the value of recommendations that are supposed to be highly personal. Classification can also lump users together in ways that are incorrect and/or discriminatory. When algorithms lean too heavily on certain traits or features to classify consumers, the fallout can be catastrophic. Apple discovered this to its cost when skewed algorithms started systematically offering women lower credit limits on the Apple Card.
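For illustration only – this is not the method behind any of the systems mentioned above – the sketch below shows how consumers might be grouped into “types” from behavioural data, and how a simple audit could surface the kind of skewed outcome seen in the Apple Card case. All data, segment counts and group labels are invented.

```python
# Illustrative sketch only: cluster consumers into "types" from behavioural
# data, then check whether an automated decision (e.g. a credit offer) falls
# unevenly across a demographic group within each type. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Invented behavioural features for 200 consumers (e.g. spend, frequency, tenure)
features = rng.normal(size=(200, 3))
# Invented binary demographic attribute and automated decision
group = rng.integers(0, 2, size=200)     # e.g. 0 = men, 1 = women
offered = rng.integers(0, 2, size=200)   # e.g. 1 = credit offered

# Classify consumers into four "types" purely from their behaviour
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Simple audit: within each segment, compare the offer rate across groups.
# Large gaps between otherwise similar consumers would warrant closer inspection.
for seg in range(4):
    mask = segments == seg
    rate_0 = offered[mask & (group == 0)].mean()
    rate_1 = offered[mask & (group == 1)].mean()
    print(f"segment {seg}: offer rate group 0 = {rate_0:.2f}, group 1 = {rate_1:.2f}")
```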
So what can managers do?
"Organisations should not assume that their own algorithms and processes are bias-free”
Apps such as Alexa, Siri and Google Assistant use AI to perform simple tasks that eat up time – booking a hair appointment, writing out an email, or consulting a map. But delegating even routine tasks can come at a cost to users. Delegation can feel threatening for various reasons. Firstly, individuals like feeling that a positive outcome, however mundane, is a result of their own action, skill, ability or creativity. Secondly, delegating a choice or decision can leave individuals feeling unsatisfied. And lastly, outsourcing a task can lead to actual or perceived loss of control and mastery. Three unfortunate students vacationing in Australia made the headlines when they drove their car into the Pacific Ocean attempting to reach North Stradbroke Island. Photos of the car fully submerged in the ocean were accompanied by interviews in which the students explained that their GPS had “told us we could drive down there.”
So what can managers do?
"AI social interaction again treads a fine line between users feeling engaged or feeling unsettled or alienated”
The movie Her gave a fictionalised glimpse into the curious area of AI-human social interaction. Apps such as Siri and Alexa integrate certain anthropomorphic or humanised features that lend a social dimension to how we use them. And this social dynamic can enhance our feelings of engagement with the product or service and the organisation behind it. Or not. AI social interaction again treads a fine line between users feeling engaged and feeling unsettled or alienated.
Take this discombobulating exchange, reported by BusinessNewsDaily in 2020:
Bot: “how would you describe the term ‘bot’ to your grandma?”
User: “My grandma is dead”.
Bot: “Alright! Thanks for your feedback (Thumbs up emoji)”.
So what can managers do?
AI-enabled products and services promise to make consumers happier, healthier and more efficient. They are often heralded as forces for good – tools to tackle not only common problems but even the biggest ones facing humanity. And the potential of AI is undeniable. But so too are the dangers of oversimplification – and the inherent tendency to efface the intersectional complexities tied to human psychology and sociology: issues like gender, race, class, orientation and more.
The challenge for managers and developers is to design and deploy AI critically and with care: to be aware, informed and vigilant, so that artificial intelligence is not impaired by our own biases and flaws.
Simona Botti is Professor of Marketing at London Business School. Her research focuses on consumer behaviour and decision making, with particular emphasis on the psychological processes underlying perceived personal control and how exercising control (freedom of choice, power, information) influences consumers’ satisfaction and wellbeing.
Stefano Puntoni is the Sebastian S. Kresge Professor of Marketing at The Wharton School of the University of Pennsylvania. He was Professor of Marketing at the Rotterdam School of Management, Erasmus University, at the time the referenced paper was published.
Rebecca Walker Reczek is Professor of Marketing and Berry Chair of New Technologies in Marketing at Fisher College of Business at The Ohio State University.
Markus Giesler is Professor of Marketing at the Schulich School of Business at York University, Toronto, Canada.