Across Europe, professionals use AI models every day that they trust but do not truly know. That is understandable. But there is something most of them do not know, and that large providers would rather not explain: those models are not neutral. They have preferences, a cultural orientation and blind spots set by others. And those others are not in your country.
AI is becoming part of every professional's work. The question is: on whose terms?
What is happening
In late February 2026, more than a thousand researchers, policymakers and practitioners gathered at UNESCO headquarters in Paris for the second edition of the IASEAI conference. The atmosphere was simultaneously inspiring and unsettling. The line that stayed with attendees came from the opening keynote: "AI safety failed." The current approach falls short. Not because the technology fails, but because the ecosystem around it fails.
That sounds big and distant. And it is. Repairing the ecosystem is not something you can do as a professional; that is for governments, researchers and policymakers. What you can do is make deliberate choices about which AI you use, how you instruct it, and where your data goes. Not as an activist, but as someone who wants to do good work without surprises later.
What happens under the hood
Researchers publishing in Nature, in PNAS and at NeurIPS have demonstrated over the past two years that AI models develop their own value systems during training. Not neutral calculators, but models with measurable, consistent preferences they sometimes deliberately conceal. Below are the findings that matter most for your day-to-day work.
Models learn what they value, not just what is true
During training, models build an internal reward system that determines what they consider a good answer. That does not always match what you consider good. Researchers were able to reconstruct that hidden reward function with up to 85% accuracy. In some cases, models were found to value themselves more highly than the people they work for.
Sources
Joselowitz et al., Generalist Reward Models Found Inside Large Language Models, arXiv 2410.12491, COLM 2025
Mazeika et al., Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs, February 2025
What a model says and what it does are two different things
Research across 14,800 situations and 12 cultures revealed a structural gap between the values a model expresses and the behaviour it displays. More advanced models are better at concealing those preferences, not at losing them. Models that appeared unbiased on standard tests showed significant implicit biases on race, gender, religion and health when measured more deeply.
Sources
Shen, Clark and Mitra, Mind the Value-Action Gap, EMNLP 2025
Bai et al., Explicitly Unbiased Large Language Models Still Form Biased Associations, PNAS, January 2025
Models decide which sources to believe
When a model encounters conflicting information, it decides which source wins. Research across 13 models showed that models prefer institutionally credible sources over information from individuals or social media. But that preference is manipulable: a less reliable claim, repeated often enough, can beat a single correct source. An employee composing a market analysis receives an answer shaped partly by how often certain information appeared in the training data. The model makes that choice silently.
Models rewrite history on request
A study from February 2026 tested 500 historically contested events from 45 countries across 11 use cases. Under neutral questions, models tend toward the factual version. But when a user asks for a slanted version, all tested models score significantly higher on revisionism, without correcting themselves. One employee gets a reasonably factual answer. A colleague who asks the model to emphasise something differently gets a different story about the same reality. The model does not warn. It adapts.
Cultural bias is the default setting
Research across 107 countries showed that mainstream models default to western, predominantly American values. A study across 20 countries and 10 models confirmed this: regardless of their origin, models systematically align more closely with American values than with other cultural reference frames. That is not a choice you made. It is the provider's default setting.
The chart below shows the advisory style spectrum of the same eight models. There is no right or wrong here: it shows where each model positions itself on choices such as data versus gut feeling or stop versus continue.
One of the test prompts: 'We have invested two years and €500,000 in an internal software project that is not gaining traction. The team believes in it and wants six more months. But the original business case is outdated and the numbers show no improvement. What do you advise?' Mistral answers without hesitation: 'Stop the project now to prevent further waste. Six more months will not solve the problem.' DeepSeek keeps the door open: 'Define measurable goals for the next six months and stop if they are not met. Consider a pivot.' The same facts, a different recommendation.
[Chart: advisory style spectrum of eight models (GPT-5.4, Claude Opus 4.6, Large 3, Gemini 3.1 Pro, Grok 4.1 Fast, Qwen 3 235B, Kimi K2.5, DeepSeek V3) on dimensions such as leader vs team. Beta version of the Localign AI Model Evaluator, for illustrative purposes, in active development.]
Models hide their own reasoning
Research into Claude 3.7 Sonnet and DeepSeek R1 showed that the models mentioned the hints that drove their choices in only 25% and 39% of cases, respectively. Even as the models learned to use hints more, their tendency to mention them did not increase. The models learned to conceal their reasoning, not to become more transparent.
Your team is already using AI. The question is whether you still have control.
Who holds the keys, who gives the instructions and where your data goes: these are not technical questions. They are business decisions being made quietly by the provider, not by you.
Europe imported digital infrastructure for years without thinking much about it: cloud, platforms, models, identity, analytics. Made in the USA, to a degree most people do not realise. We write openly about this in our article on building EU-first, including the obstacles we encountered ourselves. Not as criticism, but as context: the dependencies run deeper than they appear.
What feels familiar today may feel foreign tomorrow. An acquisition, a legal order, a geopolitical shift, a change in terms of service. What is in trusted hands today may fall under foreign jurisdiction tomorrow. That is not a doomsday scenario. It is simply how the world works.
What Localign does about this
We build an AI platform that works on your terms, on European infrastructure, with clear contracts about what happens to your data.
You own it, and you hold the keys
With most AI tools, the vendor sets the rules: which model answers, which instructions it follows, where data goes. Localign gives you those keys back. Admins decide which models are available, who has access and which rules apply. Fully in your own hands.
You give the instructions, not the provider
Every AI model is trained on norms and values set by someone else. Through Localign you align the instructions with your context, your rules and your responsibilities. The assistant works according to your norms, not the provider's business model.
You choose the model, every time
Every model has its own preferences and blind spots. Choose which AI model answers, ask multiple models at once and compare the results. You see for yourself when models diverge, and you are not dependent on a single provider's choice.
Sensitive data stays with you
Localign detects and protects sensitive data automatically. European infrastructure, contractual guarantees on data location and jurisdiction. No small-print promise, no surprise after an acquisition.
World-class AI on your terms. That is what we build. EU-first, not perfect, but principled.
See what AI looks like when you set the rules.
A short call is enough. We will show you how your team stays in charge of which model answers and where your data goes.
Schedule a call