Knowledge Doesn't Live in Teams or Slack. So Why Is Your AI Reading Them?
4 May 2026
Most teams have a chat tool open all day. Microsoft Teams. Slack. Whatever your business runs on. Conversations happen in there. Decisions get debated. Someone asks “what’s our policy on X?” and three colleagues weigh in with different answers, and the conversation moves on, and a week later somebody else asks the same question.
This is normal. This is fine. Chat is for conversation.
The problem is we’re now letting AI read it like it’s truth.
The pattern, again
Microsoft Copilot can read your Teams. Glean indexes your Slack. ChatGPT Enterprise will happily ingest a chat export. The promise is the same: AI grounds its answers in “everything your company knows,” and that includes the chat threads.
So when an agent or assistant answers a question, it’s pulling from the messy, half-baked, conversational stream that wasn’t designed to be authoritative.
This is the inferred-truth pattern. We’ve written about it before. AI on top of chaos doesn’t produce truth. It produces confident-sounding nonsense from contradictory sources.
But chat tools are a special case worth talking about, because chat is fundamentally the wrong shape to be a source of truth.
Why chat is the wrong shape
Three reasons.
Chat is conversational, not canonical. People in a Teams thread debate, speculate, hedge, joke. None of it is meant to be the final answer. It’s how we get to an answer. Treating each message as fact gets the process backwards.
Chat is partial. A given thread captures three colleagues talking about something. Not the seven other people who would have weighed in. Not the eventual decision the manager made offline. Not the article that someone wrote later that supersedes the whole conversation. AI reading the thread gets a slice of the discussion, not the conclusion.
Chat is stale immediately. A policy update gets discussed on Tuesday. By Friday the actual policy article is updated. The Tuesday thread is now wrong, but it stays in the AI’s index forever, looking just as fresh as the article that replaced it.
What goes wrong in practice
Picture this. Someone asks the company AI assistant about leave policy. The AI reads:
- A Teams message from 2023 saying “I think we get 25 days but check with HR”
- An update from a manager last month saying “we just bumped to 28 days for this team”
- A formal policy article that says 25 days
- A casual comment in a project channel: “yeah HR told me 30 days for the trial”
The AI synthesises an answer. It might say 25, 28, or 30. Whichever it picks is wrong for someone.
Now multiply this across every policy, every process, every product detail your team has ever discussed. Across compliance training. Across customer-facing responses. Across regulated industries where being wrong has consequences.
This is what’s quietly happening inside a lot of organisations right now.
What chat actually is
Chat isn’t a knowledge base. It’s the place where knowledge gets debated, decided, and refined. Then somebody (a person, with judgement) writes the resolution into a canonical article. The article is the truth. The chat thread is the workings.
If your team is using chat as the only place an answer lives, that’s a red flag. It means a process never produced an article. It means knowledge fell through the cracks. The fix isn’t to point AI at the chat thread. The fix is to write the article.
What we don’t read
We don’t index Slack. We don’t index Teams. We don’t ingest chat exports. Not in the chatbot, not in the search index, not in the MCP server tools. We index the canonical content you’ve curated.
Chat stays in chat. Substrate stays in the substrate.
Your AI should read the substrate. Your humans use the chat to get there.
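To make “read the substrate, not the chat” concrete, here’s a minimal sketch of a canonical-only lookup. Everything in it is hypothetical, the Article shape, the in-memory list, the answer_source function; it isn’t our API, just an illustration that a retrieval layer can be built with no code path into Teams or Slack at all.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical stand-ins for the curated, canonical layer. A real system
# would sit on a proper search index, not an in-memory list.
@dataclass
class Article:
    title: str
    body: str
    status: str          # only "published" articles count as canonical
    last_reviewed: date

CANONICAL_ARTICLES = [
    Article("Annual leave policy",
            "Staff receive 25 days of annual leave per year.",
            "published", date(2026, 3, 1)),
]

def answer_source(query: str) -> Article | None:
    """Return the best canonical article for a query, or nothing.

    Deliberately narrow: it only looks at published articles. There is
    no code path into Teams or Slack, so a chat thread can never leak
    into the answer.
    """
    matches = [
        a for a in CANONICAL_ARTICLES
        if a.status == "published" and query.lower() in a.title.lower()
    ]
    # If several articles match, prefer the most recently reviewed one.
    return max(matches, key=lambda a: a.last_reviewed, default=None)

print(answer_source("leave policy"))
```

If nothing comes back, the honest behaviour is to say so and get a human to write the missing article, not to fall back to scraping a thread.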
What we’re building, in the opposite direction
We’re building toward a model where canonical answers get pushed into the surfaces your team already uses, not the other way around. Your team gets answers in the tool they’re already in. The AI is still reading the canonical content. Nothing about the audit trail changes. The chat thread doesn’t accidentally become a source of truth.
Substrate is where truth lives. Teams, Slack, your CRM, your help centre, your customer-facing widgets are all places where the substrate’s answers get delivered. Different layers, doing different jobs.
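As a rough sketch of that direction, again with hypothetical names rather than any real chat API: a bot that lives in the channel but only ever relays what the canonical layer returns, with a citation.

```python
# Hypothetical delivery-layer sketch: chat is where the answer lands,
# never where it comes from. None of these names are a real API.

CANONICAL = {
    # Stand-in for the curated layer; a real system would query an index.
    "leave policy": ("Annual leave policy",
                     "Staff receive 25 days of annual leave per year."),
}

def post_to_channel(channel: str, text: str) -> None:
    # Stand-in for whichever chat platform the team actually uses.
    print(f"[{channel}] {text}")

def handle_chat_question(channel: str, question: str) -> None:
    hit = next((v for k, v in CANONICAL.items() if k in question.lower()), None)
    if hit is None:
        # No canonical answer exists: say so and route it to a human who
        # can write the article, rather than guessing from the thread.
        post_to_channel(channel, "No canonical article covers this yet; "
                                 "flagging it for the knowledge owners.")
        return
    title, body = hit
    # Always cite the article, so the thread points back to the substrate.
    post_to_channel(channel, f'{body}\n\nSource: "{title}"')

handle_chat_question("#people-ops", "What's our leave policy these days?")
```

The thread ends with a pointer back to the article, so chat carries the answer without ever becoming its source.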
The honest admission
Sometimes the only place an answer lives is a Teams thread from six months ago. We get it. Real teams are messy. The answer to that isn’t to point AI at the chat and pretend the mess is a knowledge base. It’s to fish the answer out, write a real article, and then both your humans and your AI can use it.
Yes, that requires curation. We’ve made that argument before and we’ll keep making it. The companies that figure out knowledge management in the AI era won’t be the ones with the cleverest inference layered over indexed chat. They’ll be the ones with the cleanest substrate.
Where this is going
Microsoft, Glean, and the rest of the inferred-truth crowd will keep pitching the dream of “your AI just reads everything you have.” It’s appealing because it sounds like no work. It’s broken because chat isn’t supposed to be a source of truth.
We’re betting on something less exciting. A canonical layer. Curated by humans. Read by everything else. The substrate stays clean. The chat stays human. The AI gets something solid to call.
That’s the company we’re building.
The KnowledgeScout Team