
We Built KnowledgeScout for AI Agents, Not Just Humans

21 April 2026

Most knowledge bases assume the person using them is a person.

They’re designed around a browser window. A search bar. An article page. Someone sitting at a desk typing a question, reading the result, and closing the tab. That’s been the model for twenty years.

We think that model is about to change, and we built for it.

Your next KB user isn’t typing

AI agents are already using knowledge bases. They just aren’t doing it well.

Someone on your team pastes a few articles into ChatGPT. Someone else points Claude at a PDF. Your ops lead builds a workflow where an agent drafts responses from “company knowledge,” which is really a folder of files somebody copied across last Tuesday.

Every one of those workarounds is an agent using knowledge. None of them is an agent actually integrated with your knowledge base. The content is stale the minute it’s copied. There’s no audit trail. If the agent writes something new, nobody on the team sees it. And if someone updates the source article on Monday, the agent is still answering from last week’s version.

That’s the gap we wanted to close. Not by adding a chat bubble in the corner of the app. By making the knowledge base itself something an agent can connect to properly.

What an MCP server is, in plain English

MCP stands for Model Context Protocol. It’s an open standard Anthropic released that lets AI models connect to external tools and data sources in a consistent way. Think of it like USB for AI agents. Instead of every app inventing its own way for an AI to plug in, MCP gives them a common socket.
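For the curious, the common socket is JSON-RPC 2.0: every MCP tool call has the same envelope regardless of which app is on the other end. A minimal sketch of what one looks like on the wire (the tool name `search_articles` is illustrative, not KnowledgeScout’s actual tool name):

```python
import json

# An MCP tool call is a JSON-RPC 2.0 request with method "tools/call".
# The tool name and arguments below are made up for illustration.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_articles",
        "arguments": {"query": "bulk refund process"},
    },
}

print(json.dumps(tool_call, indent=2))
```

Because the envelope is the same everywhere, an agent that can call one MCP server can call any of them, which is the whole point of the standard.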

So when we say KnowledgeScout has an MCP server, we mean this: any AI agent that speaks MCP (Claude, agents built on Claude or OpenAI, custom agents your team writes) can connect directly to your knowledge base. The agent doesn’t need you to copy articles into a prompt. It can search, read, and, if you let it, propose new content through a proper authenticated connection to your tenant.

What ours actually does

Our MCP server exposes 18 tools across two modes: read-only and read-plus-write. There are two ways to connect. You can issue an API key and set the mode on the key itself. Or an agent can sign in via OAuth as a specific user, in which case the mode and the content it can see are inherited from that user’s permissions in the app, same as if a person had logged in.
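A hedged sketch of how those two connection styles could resolve to an effective mode and visibility. The names here (`ApiKey`, `OAuthUser`, `resolve_access`) are illustrative, not KnowledgeScout’s real API:

```python
from dataclasses import dataclass

@dataclass
class ApiKey:
    mode: str        # "read" or "read_write", set on the key itself

@dataclass
class OAuthUser:
    can_write: bool  # inherited from the user's in-app permissions
    teams: list      # the teams that scope what the agent can see

def resolve_access(credential):
    """Return (mode, visible_teams) for a connecting agent."""
    if isinstance(credential, ApiKey):
        return credential.mode, None               # scope lives on the key
    mode = "read_write" if credential.can_write else "read"
    return mode, credential.teams                  # same as a logged-in person

print(resolve_access(ApiKey(mode="read")))
print(resolve_access(OAuthUser(can_write=True, teams=["ops", "support"])))
```

The design point is that OAuth connections never get their own permission model; they borrow the user’s, so there is one place to reason about who sees what.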

In read mode, an agent can:

  • Search articles and documents across the whole KB, filtered to the signed-in user’s team memberships so it never surfaces content they shouldn’t see
  • Get a plain-English answer with citations when that’s what you actually need
  • Read any article or document, including specific pages of an uploaded PDF
  • See what’s overdue for review, flagged by readers, or voted unhelpful
  • List categories and tags, so the agent uses your taxonomy instead of inventing new labels
  • Flag an article if it spots an issue but isn’t confident enough to rewrite it
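The first bullet is the one worth pausing on. The team-membership filter means search is scoped before results exist, not redacted afterwards. A toy sketch of that behaviour, with made-up data shapes and field names:

```python
# Illustrative only: articles tagged with an owning team, and a search
# that never returns content outside the signed-in user's teams.
articles = [
    {"title": "Refund policy", "team": "support", "body": "how to refund"},
    {"title": "Payroll runbook", "team": "finance", "body": "refund payroll"},
]

def search(query, user_teams):
    """Return matching titles, restricted to the user's team memberships."""
    return [
        a["title"]
        for a in articles
        if a["team"] in user_teams and query in a["body"]
    ]

print(search("refund", user_teams={"support"}))  # finance article excluded
```

An agent signed in as a support user simply never sees the finance article, even though both match the query.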

In read-plus-write mode it can also:

  • Draft new articles
  • Draft articles in bulk if you’re migrating content in from somewhere else
  • Propose an update to an existing article with a reason and a source
  • Pull search analytics, chat analytics, and document hotspots, so it can see the actual gaps in your content rather than guessing what’s needed
  • Check the review history on an article before proposing another change

That last group matters more than it sounds. Most content gap tools tell a human what’s missing. With an MCP server you can point an agent at those same signals, tell it “find the top five things people search for that have no matching article, and draft a first pass for each,” and let it work.
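The review-history check in the last bullet is the kind of guardrail that keeps an eager agent polite. A sketch of what “check history, then propose with a reason and a source” could look like; `should_propose`, the cooldown, and all field names are assumptions for illustration:

```python
def should_propose(review_history, cooldown_days=14):
    """Skip articles a human reviewed very recently (illustrative rule)."""
    if not review_history:
        return True
    return review_history[-1]["days_ago"] >= cooldown_days

history = [{"action": "approved", "days_ago": 3}]
if should_propose(history):
    suggestion = {
        "article_id": "kb-142",
        "proposed_body": "Updated steps for the new billing flow.",
        "reason": "Steps reference the retired billing screen",
        "source": "chat analytics: 12 questions about billing this month",
    }
else:
    print("reviewed 3 days ago; holding off")
```

The reason and source travel with the proposal, which is what makes the human review step quick rather than a forensic exercise.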

Drafts, not direct writes

Here’s the bit we think matters most.

Everything an agent creates or updates lands as a draft in a human review queue. It doesn’t go live. Your team sees the suggestion, the reason the agent gave, and the source it cited, and decides whether to publish, edit, or discard.

An AI agent that can edit your knowledge base without oversight is a great way to end up with a knowledge base you can’t trust. An AI agent that can propose changes for a human to approve is genuinely useful. It does the boring work; humans do the judgment. That’s the trade we settled on.

The server also enforces a specific rule: an article can only have one pending suggestion at a time. If an agent proposes a change, no other suggestion can stack on top of it until the current one is reviewed. That stops the review queue from turning into chaos.
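The rule is simple enough to sketch in a few lines. This is illustrative in-memory state, not how our server stores it, but the behaviour is the same: a second proposal against the same article is rejected until the first is reviewed, and nothing an agent writes ever skips the draft state:

```python
pending = {}  # article_id -> the single pending suggestion

def propose(article_id, suggestion):
    """Queue a draft; reject if one is already pending for this article."""
    if article_id in pending:
        raise ValueError(f"{article_id} already has a pending suggestion")
    pending[article_id] = {**suggestion, "status": "draft"}  # never live

def review(article_id, approve):
    """A human decision frees the slot, whichever way it goes."""
    s = pending.pop(article_id)
    s["status"] = "published" if approve else "discarded"
    return s

propose("kb-142", {"body": "new steps", "reason": "stale UI references"})
try:
    propose("kb-142", {"body": "other edit", "reason": "typo"})
except ValueError as e:
    print(e)  # second suggestion blocked until the first is reviewed
```

Either outcome of the review, publish or discard, clears the slot, so the queue can never deadlock on a rejected draft.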

A concrete example

Picture this. Your KB has been running for six months. You open the search analytics and see that “how to process a bulk refund” has been searched 47 times this quarter. There’s no article by that name. People have been clicking around, failing to find it, then giving up and messaging a colleague.

Pre-MCP, your options are: write the article yourself, ask someone on the team to do it, or add it to a backlog you’ll get to next quarter.

With the MCP server, you can point an agent at the problem. It pulls the search analytics and sees the 47 searches. It searches the KB to confirm there isn’t a near-match article buried under a different title. It checks chat analytics to see what themes people are asking about. Then it drafts the article, submits it to the review queue, and moves on. You get a draft with a clear reason (“47 unmatched searches this quarter, no existing article”) and a source trail.
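The agent’s reasoning in that loop reduces to “rank unmatched searches, confirm there’s no near-match article, draft with a traceable reason.” A hedged sketch under stated assumptions; the data shapes, the word-overlap heuristic, and the threshold are all invented for illustration:

```python
search_counts = {"how to process a bulk refund": 47, "reset 2fa": 31}
article_titles = ["Reset your 2fa device"]

def has_near_match(query, titles):
    """Crude stand-in for the KB search step: two shared words = near match."""
    q = set(query.lower().split())
    return any(len(q & set(t.lower().split())) >= 2 for t in titles)

def draft_gaps(counts, titles, min_count=20):
    """Draft an article for each popular query with no matching content."""
    drafts = []
    for query, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        if n >= min_count and not has_near_match(query, titles):
            drafts.append({
                "title": query.capitalize(),
                "status": "draft",  # lands in the review queue, never live
                "reason": f"{n} unmatched searches this quarter, no existing article",
            })
    return drafts

print(draft_gaps(search_counts, article_titles))
```

Here the “reset 2fa” searches are skipped because an article already covers them, and the bulk-refund gap comes back as a draft with the reason string a reviewer will see.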

Your job is to read the draft, fix the bits that need fixing, and hit publish. A thirty-minute job turns into five minutes.

The honest admission

We’re early here. The MCP server has been live for a couple of months and the 18 tools cover what we think matters most, but there are gaps. No file uploads through the server; documents still have to be added through the app. And agents can’t delete anything, which we did on purpose, but some customers will want that eventually.

We also can’t promise your agent will use the tools well. The tools only work as well as the agent driving them. A badly prompted agent will still pull the wrong article, propose a vague update, or cite a source it never actually looked at. That’s an agent problem, not a KB problem, but it’ll still feel like both if it happens to you.

Why this matters now

Most of the big KMS platforms will end up with an MCP server. It’s an open standard, it’s growing fast, and the pressure from customers wanting agent access is only going one way. The question is whether a vendor built for it or bolted it on later.

We built for it. The whole content model, the review queue, the audit trail, the taxonomy structure, all of it assumes humans and agents will both be using the KB. That shapes decisions like “drafts, not direct writes,” which are much easier to bake in from the start than retrofit after the fact.

Your team doesn’t have to use any of this. Plenty of KnowledgeScout customers will never turn the MCP server on, and the product works fine for them. But if you’re already thinking about how agents fit into your workflow, or you’ve already got someone pasting KB content into Claude every week, we built the thing that closes the loop.

Ready to try it

KnowledgeScout is live in Australia and the US. You can start a free trial and have the MCP server running on your tenant inside a day. UK teams: we’re still on the waitlist model for you. Register your interest and we’ll be in touch when we’re ready to onboard.

The KnowledgeScout Team