When AI can’t tell who is allowed to speak, relevance replaces legitimacy

19 February 2026

AI is fluent, fast, and increasingly central to decision making, but it has one structural blind spot: 𝗶𝘁 𝗰𝗮𝗻𝗻𝗼𝘁 𝗿𝗲𝗰𝗼𝗴𝗻𝗶𝘀𝗲 𝗮𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆.

AI retrieves 𝘳𝘦𝘭𝘦𝘷𝘢𝘯𝘤𝘦, not 𝘭𝘦𝘨𝘪𝘵𝘪𝘮𝘢𝘤𝘺. Every text fragment is treated as if it has the same right to shape an answer, regardless of domain, mandate, or expertise. This isn’t a niche quirk; it’s a fundamental property of embeddings, search, and LLMs. They measure similarity and patterns, not who is allowed to define truth.

𝗙𝗶𝗻𝗮𝗻𝗰𝗲 𝗳𝗲𝗲𝗹𝘀 𝘁𝗵𝗶𝘀 𝗳𝗮𝗶𝗹𝘂𝗿𝗲 𝗺𝗼𝘀𝘁 𝘀𝗵𝗮𝗿𝗽𝗹𝘆.
In financial institutions, knowledge is structured by:
• domain boundaries
• seniority
• regulatory standing
• geography

Equity analysts speak for equities. Macro speaks for macro. Risk and compliance speak with overriding authority. These boundaries are how institutions manage risk.

𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗲𝗿𝗮𝘀𝗲 𝘁𝗵𝗲𝗺. Retrieval may surface a junior analyst contradicting a senior one, an equity analyst speculating on FX, or a local expert generalising globally. All three can be semantically similar yet institutionally illegitimate.

Authority inside an organisation can sit with:
• a publisher
• a department
• an author
• a content type

A central bank statement, a risk memo, and a speculative blog are not equivalent sources.

But to an embedding, they’re just vectors.
To full text search, they’re just strings.
To an LLM, they’re just patterns.

𝗧𝗵𝗲 𝗮𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗯𝗲𝗵𝗶𝗻𝗱 𝘁𝗵𝗲𝗺 𝗶𝘀 𝗶𝗻𝘃𝗶𝘀𝗶𝗯𝗹𝗲.
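One way to make that structure visible is to re-rank retrieved passages by blending semantic similarity with an explicit authority score read from metadata. The sketch below is illustrative only: the `AUTHORITY` weights, the `rerank` function, and the chunk schema are invented for this example, not any particular system's API.

```python
# Hypothetical sketch: re-rank retrieved chunks by combining semantic
# similarity with an explicit authority score taken from source metadata.

AUTHORITY = {                       # assumed institutional weights, 0..1
    "central_bank_statement": 1.0,
    "risk_memo": 0.8,
    "speculative_blog": 0.2,
}

def rerank(chunks, alpha=0.6):
    """Order chunks by a blend of similarity and source authority.

    chunks: list of dicts with 'text', 'similarity' (0..1), 'source_type'.
    alpha:  weight on semantic similarity; (1 - alpha) goes to authority.
    """
    def score(chunk):
        authority = AUTHORITY.get(chunk["source_type"], 0.0)
        return alpha * chunk["similarity"] + (1 - alpha) * authority
    return sorted(chunks, key=score, reverse=True)

chunks = [
    {"text": "Rates will stay high.",  "similarity": 0.91, "source_type": "speculative_blog"},
    {"text": "Policy rate unchanged.", "similarity": 0.85, "source_type": "central_bank_statement"},
]
ranked = rerank(chunks)
# The central bank statement now outranks the semantically "closer" blog post.
```

The point of the blend is that authority never disappears into the embedding: it stays an explicit, auditable term that governance can set and review.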

And 𝘄𝗵𝗲𝗻 𝗮𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆 𝗶𝘀𝗻’𝘁 𝗺𝗼𝗱𝗲𝗹𝗹𝗲𝗱, 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝗶𝗻𝘃𝗲𝗻𝘁𝘀 𝗶𝘁𝘀 𝗼𝘄𝗻, based on a confident tone, frequency in the corpus, or incidental correlations. It can suppress voices that matter and blend domains that must remain separate.

This isn’t a technical quirk.
𝗜𝘁’𝘀 𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺.

Any AI that synthesises information is also 𝗿𝗲𝗱𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗻𝗴 𝗮𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆.
If you don’t define who is allowed to shape an answer, 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝘄𝗶𝗹𝗹 𝗱𝗲𝗰𝗶𝗱𝗲 𝗳𝗼𝗿 𝘆𝗼𝘂.
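Defining who may shape an answer can be as simple as a hard gate: a mapping from each query domain to the desks that hold a mandate for it, applied before any retrieved text reaches the model. The mapping and field names below are invented for illustration.

```python
# Hypothetical sketch: drop retrieved chunks whose originating desk has no
# mandate for the query's domain. MANDATES is an assumed policy table.

MANDATES = {
    "equities": {"equity_research"},
    "fx":       {"macro_desk"},
    "risk":     {"risk", "compliance"},
}

def allowed(chunk, domain):
    """A chunk may shape the answer only if its desk holds the mandate."""
    return chunk["desk"] in MANDATES.get(domain, set())

def gate(chunks, domain):
    """Filter retrieved chunks down to institutionally legitimate sources."""
    return [c for c in chunks if allowed(c, domain)]

retrieved = [
    {"desk": "equity_research", "text": "EUR/USD looks oversold."},
    {"desk": "macro_desk",      "text": "Dollar strength should persist."},
]
legitimate = gate(retrieved, "fx")  # only the macro desk survives
```

Whether the policy is a hard filter like this or a softer weighting, the decision of who speaks is made by the institution, in code it can inspect, rather than implicitly by the model.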

Full article: Read