The Subtle Influence of LLMs 🤫

AI is now an integral part of our lives. How is it shaping our thoughts and decisions?

As large language models (LLMs) are being integrated into every facet of our lives and work, a critical question is emerging: whose facts and opinions are these models aligned to?

The companies developing LLMs, like OpenAI and Anthropic, are making largely opaque editorial decisions about what information they train their models on and how they align them. These choices have profound implications as LLMs increasingly write our emails, summarise our meetings, generate our presentations, and guide our decision making.

In many ways, the AI teams aligning today's LLMs are like the editors of modern-day encyclopedias, except their decisions shape not just what we perceive as facts and opinions, but how we think and work. As LLMs become ubiquitous in the enterprise, companies are subtly (and perhaps unwittingly) inheriting the worldview, philosophy, morals, and politics embedded in these models by their creators. An AI writing your strategy memo is imbuing it with 'opinions' of its own, derived from its training.

This raises important questions that enterprises need to grapple with. Do the cultural values and decision-making frameworks of the LLM align with your organisation? Can the AI be trusted to generate content that is on-brand and on-message? What biases and blind spots might it be propagating? Careful thought needs to be given to when it's appropriate to use LLMs in their default state versus investing in fine-tuning them to an enterprise's unique attributes and objectives.

At a societal level, we've seen the rise of crowd-sourced models like Wikipedia that provide greater transparency and public participation than traditional publisher-driven models. As AI luminary Yann LeCun and others have posited, we may need the LLM equivalent of Wikipedia: open-source models that are aligned in a more decentralised and inclusive manner.

What will this look like inside the enterprise? It will not be easy, as many companies are still struggling to manage their internal knowledge through wikis and intranets. How will they adapt to the additional requirement of curating their knowledge, culture, and values as data for AI alignment?

The companies at the vanguard of LLM development have a responsibility to engage a broad set of stakeholders on these issues and clearly communicate their impacts to customers. Users of LLMs, from businesses to individuals, have a right to understand what's inside the AI "black box" and how it is guiding their thoughts and activities. While the technology is still nascent, now is the time for companies to set governance frameworks around transparency and inclusion in AI alignment.

The decisions we make today will quite literally shape our future.




Euan Wielewski is an AI and machine learning leader with deep expertise in deploying AI solutions in enterprise environments. He holds a PhD from the University of Oxford and leads the Applied AI team at NatWest Group.