
How to mitigate AI risk and ensure positive gain

Are there hidden trade-offs in AI-enabled consumer experiences?

  • AI products and services are the result of technological developments, but they are experienced by consumers
  • The ubiquitous integration of AI in products and services creates four distinct consumer experiences: data capture, classification, delegation and social interaction
  • These experiences deliver real value, but they also impose costs on consumers
  • Managers and developers should keep these costs, and ways to mitigate them, on their radar

AI has the potential to make our lives exponentially easier. From robots that do the work of teams to fitness trackers that tell us how much weight to lose; from apps that monitor our circadian rhythms to dating algorithms that assess the compatibility of a potential partner, artificial intelligence has the precision, speed, accuracy and personalisation to dramatically enhance the way we work, live and play. Little wonder it has become so ubiquitous. Effectively the stuff of science fiction only a few decades ago, AI today is reshaping organisational culture – as well as the consumer experiences organisations deliver.


We know about its advantages. We’ve also heard about the risks. Plenty has been written about the potential for massive job losses as machines replace humans, the growing threat of sophisticated cyber-attacks and even the possibility of super-systems going rogue. But how much do we understand about the more day-to-day risks and costs that the use of AI imposes on us as consumers? What are the trade-offs we experience when machine intelligence is embedded in the products and services we use every day?

Our team of researchers from Erasmus University in the Netherlands, Ohio State University, Canada’s York University and London Business School analysed a comprehensive body of research exploring AI-consumer interactions through two lenses – psychological and sociological – to get a better sense of what we gain and lose from AI. We identified four experiences that emerge from these interactions and, for each experience, examined the cost-benefit trade-offs that should be on the radar of managers and developers alike. We list these below, together with suggestions and recommendations on how to mitigate negative outcomes.

1. Data capture

Artificial intelligence uses algorithms to process large amounts of past data and identify patterns or features in that data. It learns from these patterns, using them to make predictions about future behaviour that are generally accurate and incredibly quick. 
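
The mechanics are easy to see in miniature. The sketch below (Python with scikit-learn; the features, figures and the repeat-purchase task are all invented for illustration, not taken from the research) shows the loop just described: a model is fitted to patterns in past consumer behaviour, then scores a consumer it has never seen before.

```python
# Toy illustration of "learn from past data, predict future behaviour".
# All features and figures are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row is one past consumer:
# [visits last month, average basket value, days since last purchase]
past_behaviour = [
    [12, 45.0, 2],
    [1, 10.0, 60],
    [8, 30.0, 5],
    [0, 0.0, 120],
]
# The pattern the model learns: did each consumer buy again? (1 = yes)
bought_again = [1, 0, 1, 0]

model = LogisticRegression().fit(past_behaviour, bought_again)

# Score a consumer the model has never seen
new_consumer = [[6, 25.0, 10]]
print(model.predict_proba(new_consumer))  # [P(no repeat), P(repeat)]
```

Every one of those input rows is, of course, a consumer’s personal data – which is exactly where the trade-offs begin.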

But the data AI captures belongs to us, to consumers. The information parsed is ours – our personal choices, our preferences and our decisions. This is where the tensions and trade-offs emerge.


AI captures our data all the time. And it uses this information about us and our environment to create pleasing experiences – personalised or customised services, information or entertainment. Google’s Photos app, for example, allows Google to capture our memories, but in return offers to take the cognitive legwork out of related decision-making: how we manage, store or search for our photos and albums. We get a personalised service without incurring any mental or affective fatigue. But the research shows that data capture can also drive feelings of exploitation – a sense that we are monitored or controlled by systems we don’t understand, and that we have somehow lost ownership of our personal information. This is down to the intrusiveness of the ways AI can aggregate our data, combined with the lack of transparency and accountability that surrounds them.


So what can managers do to mitigate this effect?


  • Be aware. Strive for greater organisational sensitivity around privacy and the asymmetry of control over personal data. Responsible organisations would also do well to listen to consumers, at scale and with empathy, and to question their own firmly held beliefs.
  • Be transparent. Savvy organisations are already working to improve AI data-capture experiences, giving consumers the option to opt into specific data-collection processes and to ask for greater clarity on how their data are used. Organisations can limit consumer exploitation by playing an active role in educating customers about the costs and benefits of AI data-capture experiences.


“The potential of AI is undeniable. But so too are the dangers of oversimplification”

2. Classification

Anyone with a Netflix or Amazon account will routinely receive recommendations about which films to watch or products to buy. To produce these ultra-customised recommendations, AI uses individual and contextual data to classify individuals into specific consumer types. The danger here is that classification treads a very thin line between a customer feeling understood and feeling misunderstood. The mere perception of being classified can reduce the value of recommendations that are supposed to be highly personal. Classification can also lump users together in ways that are incorrect and/or discriminatory. When algorithms latch too firmly onto certain traits or features to classify consumers, the fallout can be catastrophic. Apple discovered this to its cost when skewed algorithms began offering women systematically worse Apple Card credit terms.
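
To make that thin line concrete, here is a hypothetical sketch (Python with scikit-learn; the genres, hours and segments are invented, and this is not how Netflix or Amazon actually work) of classification-style recommendation: viewers are clustered into “consumer types”, and a new viewer simply gets their assigned type’s dominant taste reflected back, whether or not it fits.

```python
# Toy sketch of classification-based recommendation. All data invented.
import numpy as np
from sklearn.cluster import KMeans

genres = ["drama", "documentary", "comedy"]

# Each row is one viewer: hours watched per genre last month
viewing_hours = np.array([
    [10.0, 1.0, 2.0],   # drama-heavy viewers...
    [9.0, 0.5, 1.0],
    [0.5, 8.0, 1.0],    # ...documentary-heavy viewers
    [1.0, 9.5, 0.5],
])

# Learn two "consumer types" from past behaviour
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(viewing_hours)

# A new viewer is assigned to the nearest type and receives that type's
# dominant genre back as a recommendation - right or wrong
new_viewer = np.array([[2.0, 7.0, 1.0]])
segment = kmeans.predict(new_viewer)[0]
top_genre = genres[int(np.argmax(kmeans.cluster_centers_[segment]))]
print(f"Recommending more {top_genre} titles")
```

Everything the viewer sees from then on is filtered through that segment label, which is how the “lumping together” described above can become self-reinforcing.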

So what can managers do?

  • Be rigorous. Organisations should not assume that their own algorithms and processes are bias-free and they should not wait to be told. No matter what’s going on at the policy or legislative level, organisations need to be proactive by collaborating with tech experts, computer scientists, sociologists and psychologists to audit algorithms and root biases out.
  • Be different. Organisations can reboot the classification experience by bursting the bubble and avoiding recommendations that are exclusively based on past choices, which might not reflect present or future predilections and may not provide consumers with optimal variety.
"Organisations should not assume that their own algorithms and processes are bias-free”

3. Delegation 

Apps such as Alexa, Siri and Google Assistant use AI to perform simple tasks that eat up time – booking a hair appointment, drafting an email or consulting a map. But delegating even routine tasks can come at a cost to users, and can feel threatening for several reasons. Firstly, individuals like to feel that a positive outcome, however mundane, is the result of their own action, skill, ability or creativity. Secondly, delegating a choice or decision can leave individuals feeling less satisfied with the outcome. And lastly, outsourcing a task can lead to an actual or perceived loss of control and mastery. Three unfortunate students holidaying in Australia made headlines when they drove their car into the Pacific Ocean while attempting to reach North Stradbroke Island. Photos of the fully submerged car were accompanied by interviews in which the students explained that their GPS had “told us we could drive down there.”

So what can managers do?

  • Be human. Certain abilities are seen as intrinsically human because they depend on making nuanced judgments in unstructured environments. Organisations are increasingly collaborating with museums, theatres and universities’ humanities departments to better understand how AI can preserve – rather than subvert – the human values of creativity, collaboration and community.
  • Be flexible. A classic marketing research finding revealed that consumers preferred using a pre-prepared cake mix when they were required to physically crack fresh eggs as part of the process. Why? Because human agency, however small, can reduce the threat of losing control and mastery, and make delegation a more positive experience. In the same vein, savvy firms are exploring how self-driving cars can be designed so that drivers do not feel they have lost all control over the driving experience.
"AI social interaction again treads a fine line between users feeling engaged or feeling unsettled or alienated”

4. Social interaction

The movie Her gave a fictionalised glimpse into the curious area of AI-human social interaction. Apps such as Siri and Alexa integrate certain anthropomorphic or humanised features that lend a social dimension to how we use them. And this social dynamic can enhance our feelings of engagement with the product or service and the organisation behind it. Or not. AI social interaction again treads a fine line between users feeling engaged or feeling unsettled or alienated.

Take this discombobulating exchange, reported by BusinessNewsDaily in 2020: 

Bot: “How would you describe the term ‘bot’ to your grandma?”

User: “My grandma is dead.”

Bot: “Alright! Thanks for your feedback.” (Thumbs-up emoji)

So what can managers do?

  • Be informed. To avoid bot “fails”, firms are increasingly informing themselves about the dynamics of alienation. Not only can they collect information directly from consumers who have experienced alienation, gaining valuable insights into how and why it occurs; they can also collaborate with experts such as psychologists, sociologists and gerontologists to learn more about its causes and consequences.
  • Be careful. Anthropomorphism is a double-edged sword. Many designers and marketing managers take it for granted that humanising AI fosters better relationships with consumers. But this is not necessarily the case. Human beings are characterised by a heterogeneity so nuanced and complex that the margin for error is immense. There is also massive scope to draw on and entrench harmful stereotypes – the use of passive or “subservient” female voices in many AI apps is an all-too-common default – and this should be on organisations’ radars. Progressive firms are increasingly investigating how to make AI gender-neutral and, in some cases, less rather than more humanlike.

AI-enabled products and services promise to make consumers happier, healthier and more efficient. They are often heralded as forces for good – tools to tackle not only everyday problems but even the biggest challenges facing humanity. And the potential of AI is undeniable. But so too are the dangers of oversimplification – of effacing the intersecting complexities of human psychology and sociology: gender, race, class, sexual orientation and more.

The challenge for managers and developers is to design and deploy AI critically and with care: to be aware, informed and careful, so that artificial intelligence is not impaired by our own biases and flaws.

Simona Botti is Professor of Marketing at London Business School. Her research focuses on consumer behaviour and decision making, with particular emphasis on the psychological processes underlying perceived personal control and how exercising control (freedom of choice, power, information) influences consumers’ satisfaction and wellbeing. 

Stefano Puntoni is the Sebastian S. Kresge Professor of Marketing at The Wharton School, at the University of Pennsylvania. He was formerly Professor of Marketing at the Rotterdam School of Management, Erasmus University, where he was based when the referenced paper was published.

Rebecca Walker Reczek is Professor of Marketing and Berry Chair of New Technologies in Marketing at Fisher College of Business at The Ohio State University.

Markus Giesler is Professor of Marketing at the Schulich School of Business at York University, Toronto, Canada.

