Agentic knowledge base
Agents do the work.
Humans decide the truth.
Search loops, draft proposals, flag triage — agents do the heavy lifting. What's true and what gets published is still your call.
Two ways a knowledge base can fail.
The old way: knowledge bases go stale. Articles age, no one updates them, search returns answers that were true two years ago. Trust collapses, people stop checking, the KB becomes a graveyard.
The new way: agentic KBs that publish whatever the model thinks is right. Hallucinations get baked in. Confident-sounding nonsense replaces last quarter's actual policy. You stop knowing what your own KB says, because an autonomous agent rewrote it last week and nobody reviewed.
There's a third option. Agents that do the work, but never get to decide what's true.
What "agentic" means here.
KnowledgeScout is agentic in two specific ways:
- Agentic search — an opt-in toggle on the AI Chatbot inside the app. When enabled, weak query results trigger a rewrite-and-retry loop (handling synonyms, alternate phrasing, related concepts). The user sees each rewrite as it happens. It's a loop, not a single shot — much stronger results, in exchange for more AI credits per question. Available on Business and Enterprise. (A code sketch of the loop follows this list.)
- Agentic write — external agents call in through the Model Context Protocol (MCP). They can propose new articles, suggest edits, flag stale content. Every write lands as a draft for human review. Available on Business and Enterprise.
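To make the search loop concrete, here's a minimal TypeScript sketch. KnowledgeScout's internals aren't published; the names (agenticSearch, searchKb, rewriteQuery) and the thresholds are illustrative assumptions, not the actual implementation:

```ts
// Hypothetical rewrite-and-retry loop. Names and thresholds are
// illustrative assumptions, not KnowledgeScout's actual internals.
type Hit = { articleId: string; score: number };

const RELEVANCE_FLOOR = 0.75; // below this, a result set counts as "weak"
const MAX_REWRITES = 3;       // loop budget; each pass costs AI credits

async function agenticSearch(
  query: string,
  searchKb: (q: string) => Promise<Hit[]>,      // one search pass
  rewriteQuery: (q: string) => Promise<string>, // LLM rewrite: synonyms, rephrasing, related concepts
  onRewrite: (q: string) => void,               // surfaces each rewrite to the user as it happens
): Promise<Hit[]> {
  let q = query;
  for (let attempt = 0; attempt <= MAX_REWRITES; attempt++) {
    const hits = await searchKb(q);
    if (hits.length > 0 && hits[0].score >= RELEVANCE_FLOOR) {
      return hits;               // strong result: stop looping
    }
    if (attempt === MAX_REWRITES) break;
    q = await rewriteQuery(q);   // try alternate phrasing
    onRewrite(q);                // the user sees the rewrite
  }
  return [];                     // give up cleanly; analytics log the gap
}
```

That last line is the hook for the flywheel below: an empty result isn't a dead end, it's a logged gap.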
What KnowledgeScout is not: an autonomous publisher. No agent ever publishes content directly. No agent ever edits a live article without a human approving the change. The agents do the proposing. Humans do the deciding.
The flywheel: questions surface gaps, agents fill them, search gets better.
A KB that maintains itself isn't science fiction — it's a closed loop with a human as the keystone.
1. Search reveals a gap
A user asks a question. Search loops, retries, and either returns a weak answer or none. Analytics log the gap.
2. An MCP agent picks it up
An internal agent reads the analytics, looks for a knowledge gap, and drafts a candidate article addressing it.
3. A human reviews the draft
The reviewer sees what the agent wrote, why, and the source it cited. Approve, edit, or reject. This step is the keystone.
4. Approved content publishes
The article goes live, attributed to the agent that drafted it and the human who approved it. Audit trail is automatic.
5. Search gets better
The next user asking that question finds an answer instead of a gap. The flywheel turns again.
Step 3 is what makes this trustworthy. Without human review the flywheel still spins, but it spins on whatever the model thought was true today. With it, the KB only ever absorbs content a human has signed off on.
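As a sketch, the review gate reads as a tiny state machine. The types here (Draft, Review) are hypothetical, not KnowledgeScout's actual data model; what matters is that the only transition to "published" takes a human reviewer's decision as input:

```ts
// Illustrative draft lifecycle. The only path to "published" runs
// through applyReview, which requires a human reviewer's decision.
type DraftStatus = "proposed" | "published" | "rejected";

interface Draft {
  id: string;
  body: string;
  proposedBy: { kind: "agent"; agentId: string };
  approvedBy?: string; // human reviewer; set only on publish
  status: DraftStatus;
}

interface Review {
  reviewerId: string;             // always a human account
  decision: "approve" | "reject";
  editedBody?: string;            // the reviewer may edit before approving
}

function applyReview(draft: Draft, review: Review): Draft {
  if (review.decision === "reject") {
    return { ...draft, status: "rejected" };
  }
  // Publication happens here and only here. Attribution keeps both the
  // drafting agent and the approving human, which feeds the audit trail.
  return {
    ...draft,
    body: review.editedBody ?? draft.body,
    approvedBy: review.reviewerId,
    status: "published",
  };
}
```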
Two surfaces, one principle
Built into the app for your team. Open via MCP for the agents you connect.
In-app
Agentic search
An opt-in toggle on the AI Chatbot inside the app. The loop rewrites weak queries and retries — the user sees each rewrite. Best for fuzzy questions where the asker doesn't know the right keyword. Costs more AI credits per question; turn it on when the search-quality lift is worth the spend. Available on Business and Enterprise.
External, via MCP
Agentic write
External MCP-compatible agents — Claude, Copilot, OpenAI agents, Foundry IQ, your own — can propose drafts, suggest edits, flag stale content. Every write lands in the human review queue. Available on Business and Enterprise.
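For a feel of the wire-level shape, here's a rough sketch of a draft-proposal tool built with the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The tool name propose_article and the saveDraft helper are hypothetical, not KnowledgeScout's documented surface; only the SDK calls are real:

```ts
// Hypothetical MCP write tool. "propose_article" and saveDraft() are
// illustrative; note there is deliberately no "publish" tool to expose.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stub: a real deployment would persist this into the human review queue.
async function saveDraft(input: { title: string; body: string; source: string }): Promise<string> {
  console.log("queued for human review:", input.title);
  return `draft_${Date.now()}`;
}

const server = new McpServer({ name: "knowledgescout-write", version: "0.1.0" });

server.tool(
  "propose_article",
  {
    title: z.string(),
    body: z.string(),
    source: z.string().describe("citation for the claim being documented"),
  },
  async ({ title, body, source }) => {
    const draftId = await saveDraft({ title, body, source }); // lands as a draft, never live
    return {
      content: [{ type: "text", text: `Draft ${draftId} queued for human review.` }],
    };
  },
);

await server.connect(new StdioServerTransport());
```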
Why human review isn't a speed bump
Trust matters for any team that depends on its KB being correct. For regulated industries — financial services, healthcare, legal, compliance — it's the difference between an AI feature you can ship and one your compliance team blocks.
Every agent action is logged
Searches, retrievals, drafts proposed, articles flagged. Full attribution. The audit trail captures human and agent contributions in the same format, exportable on demand.
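As an illustration, a unified entry might look like the sketch below. The field names are assumptions; the point is that human and agent contributions serialize to the same shape:

```ts
// Hypothetical audit entry: one format for humans and agents alike.
type Actor =
  | { kind: "human"; userId: string }
  | { kind: "agent"; agentId: string; connection: "oauth" | "api_key" };

interface AuditEntry {
  at: string;      // ISO-8601 timestamp
  actor: Actor;    // full attribution
  action: "search" | "retrieve" | "propose_draft" | "flag_stale"
        | "approve" | "edit" | "reject";
  target?: string; // article or draft id, when applicable
}

// An agent's proposal and the human approval export identically:
const trail: AuditEntry[] = [
  { at: "2025-03-15T09:12:00Z",
    actor: { kind: "agent", agentId: "gap-filler", connection: "api_key" },
    action: "propose_draft", target: "draft_491" },
  { at: "2025-03-15T10:03:00Z",
    actor: { kind: "human", userId: "maya" },
    action: "approve", target: "draft_491" },
];
```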
No silent publishes
Drafts always land in a queue. The reviewer sees the agent's draft, the source it cited, and the diff against any existing article. Approve, edit, or reject. Nothing reaches live content without that step.
Version history is first-class
Every article has a full version timeline. Who wrote each line, when, and whether the source was a human author or an agent draft. A regulator can ask "what did your AI write on March 15?" and you have a clean, exportable answer.
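A hypothetical sketch of answering that question against such a timeline (the Version type is assumed, not the real schema):

```ts
// Hypothetical version record and the regulator's query.
interface Version {
  articleId: string;
  at: string;                    // ISO-8601 timestamp
  authorKind: "human" | "agent"; // who produced this revision
  authorId: string;
  diff: string;                  // line-level changes for this revision
}

// "What did your AI write on March 15?"
function agentWritesOn(versions: Version[], day: string): Version[] {
  return versions.filter(
    (v) => v.authorKind === "agent" && v.at.startsWith(day),
  );
}

// agentWritesOn(timeline, "2025-03-15") -> the clean, exportable answer
```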
Permissions are explicit, not implicit
An agent connecting via OAuth inherits the role of the user who authorised it. An agent connecting via API key carries the scope you set when you minted the key. There's no path where an agent gets write access by accident.
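Sketched as code, with assumed scope and role names (the actual model may differ in detail), the resolution rule is short:

```ts
// Hypothetical permission resolution: write access is always explicit.
type Scope = "read" | "write_drafts";
type Role = "Viewer" | "Editor" | "Admin";

type AgentCredential =
  | { kind: "oauth"; authorizedBy: { userId: string; role: Role } } // inherits the authorising user's role
  | { kind: "api_key"; scopes: Scope[] };                           // scopes fixed when the key is minted

function canProposeDrafts(cred: AgentCredential): boolean {
  switch (cred.kind) {
    case "oauth":
      // Exactly the authorising user's role; nothing more.
      return cred.authorizedBy.role === "Editor" || cred.authorizedBy.role === "Admin";
    case "api_key":
      // Write access only if the key was explicitly minted with the scope.
      return cred.scopes.includes("write_drafts");
  }
}
```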
Common questions
Why don't you let agents publish directly?
Because somebody has to be accountable. If an agent published a wrong refund policy and a customer made a decision based on it, "the AI did it" isn't an answer that holds up — to your customer, to your auditor, or to the regulator. We've built KnowledgeScout so that the human who approves a draft is the human who's accountable for it. That's the point of the review queue. It's also why teams in regulated industries can use it without their compliance team blocking the rollout.
What does "agentic search" actually mean here?
It's an opt-in toggle on the AI Chatbot inside the app, available on Business and Enterprise. When enabled, weak query results trigger a rewrite-and-retry loop — handling synonyms, alternate phrasing, related concepts. Each rewritten search is surfaced in the chat as it happens. The trade-off: stronger search results in exchange for more AI credits per question. Off by default — admins switch it on when the lift is worth the spend.
What does "agentic write" mean here?
External AI agents — Claude, Copilot, OpenAI agents, Foundry IQ, your own — can connect via MCP (Model Context Protocol) and propose new articles, suggest edits, or flag stale content. Every write lands as a draft for human review. Available on Business and Enterprise.
Do I need to bring my own AI agents to use this?
No. Agentic search is a workspace toggle on the AI Chatbot — your admin turns it on for your team and they use it directly (Business and Enterprise). Agentic write does need an external MCP-compatible agent (your team's, your vendor's, or one you build), but that half is optional: it's for teams that want to wire external agents into the same KB their humans use.
How is this different from any AI chatbot on a knowledge base?
Most KB chatbots run a single search and answer from whatever they got. The agentic search loop iterates: weak results trigger query rewrites until the answer is found or the loop gives up cleanly. And the system extends beyond chat — agents can also write back via MCP (always to drafts, never to live content). One source of truth feeding both ways.
Can I turn the agentic features off?
Yes — and they default off. Agentic search is its own admin-controlled toggle, separate from the AI Chatbot itself, so you can have AI Chat on without the agentic loop. The MCP write surface is permission-scoped — agents only get write access if you explicitly mint a key with that scope, or if they connect via OAuth as a user with the Editor role. There's no path where agentic features turn themselves on.
Which plans include the agentic capabilities?
Both agentic search and agentic write are on Business and Enterprise. Agentic search is an admin-controlled toggle — off by default — that uses additional AI credits per question in exchange for stronger search results. Startup includes the AI Chatbot itself but without the agentic loop. Bringing your own AI keys (so your AI provider charges you directly, not us) is optional on Business and included on Enterprise.
An agentic KB you can actually trust.
Agentic search and agentic write on Business and Enterprise. Both opt-in. Human review on everything.