Analytics & insights

Analytics that write your roadmap.

Search analytics, chat analytics, and a configurable conversation log. Plus an agentic write loop: let your team's AI agents read the gaps and draft articles to fill them.

Most KB analytics are vanity charts.

Article view counts. Total searches per day. Active users. Useful for a board slide, useless for actually fixing your knowledge base. The signal that matters is buried: which questions don't have answers, which articles people stop trusting, which documents your AI keeps citing because there isn't a proper article yet.

A knowledge base that's already working doesn't need more dashboards. One that has gaps needs analytics that point at them.

Your analytics should be your content roadmap, not just a report.

Three layers of insight

Two for humans to read, one for AI agents to act on.

Layer 1

Search analytics

What people are searching, what's returning nothing, which articles are most and least viewed, and which uploaded documents are getting hit (a signal they should become proper articles).

Layer 2

Chat analytics

What the chatbot's being asked, whether it's finding answers, which articles it cites most, and who relies on it most. Plus a configurable conversation log: never store, or store for up to seven years.

Layer 3

Agentic write loop

Both analytics surfaces are MCP-readable. Your team's AI agents can read the gaps and draft articles to fill them, with every draft landing in the human review queue.

Search analytics, in detail

What's findable, what's missing, what's worth turning into a proper article.

Most-viewed articles

The content carrying your knowledge base. Worth keeping fresh and well-structured because everyone is hitting it.

Least-viewed articles

Content nobody reads: stale, redundant, or buried by taxonomy. A prompt to consolidate, archive, or rewrite.

Most-searched terms

The questions actually being asked, in the words people actually use. The first source for your content roadmap and for SEO targets on the public help centre.

Failed searches

Queries that returned nothing useful. Each failed search is a content gap with the question already written for you.

Article votes (helpful / not helpful)

Direct reader feedback on whether an article landed. Rewrite the ones that consistently get a thumbs-down.

Documents appearing in search

When uploaded documents (PDFs, slides) are showing up in search hits, that's a signal those topics deserve to be proper, structured articles instead of attached files.

Chat analytics, in detail

If the AI chatbot is on, here's what it tells you about the gaps and how it's being used.

Most-asked queries

The plain-English questions hitting the chatbot. Often phrased differently from how articles are titled, which is itself a useful signal.

Answer quality

Whether the chatbot found a confident answer, fell back to weak retrieval, or gave up. Each "no answer" is a content gap or a search-hygiene problem.

Most-used articles by chat

Which articles the chatbot keeps citing. These are your load-bearing pieces of content. Keep them current, accurate, and well-written.

Most usage by user

Who's relying on the chatbot most. Could be a heavy user, a struggling new starter, or a team that needs better written content for their function.

Most-cited documents

When the chatbot keeps citing the same uploaded document, that's a signal the document deserves a proper article, structured for search and trust.

Configurable conversation log

Optional full transcripts. Don't store at all, or store for up to seven years for compliance and deeper insight. Configurable per tenant, separately for internal and customer-facing chat.

Conversation log retention, your call

Don't store it. Or store it for seven years. Or somewhere in between.

Conversation log storage is a per-tenant setting, configurable separately for internal team chat and customer-facing chat. Two ends of the spectrum:

  • Don't store. Aggregate metadata only (query counts, answer-quality signals). No conversation content retained. The right setting if your privacy posture or jurisdiction requires it.
  • Store for up to seven years. Full transcripts auditable per user with timestamps and cited sources. Useful for regulated industries that need to show a regulator exactly what was asked, what the AI surfaced, and on what content basis.

Most teams sit in the middle: store for 30-90 days for operational use, then expire. Whatever fits.
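
To make the setting concrete, here's a hedged sketch of applying a per-tenant retention policy through a REST-style admin API. The endpoint, payload fields, and token are hypothetical, invented for this example rather than taken from KnowledgeScout's documentation; the point is the shape: one policy per chat surface, per tenant.

```python
import requests

# Hypothetical admin endpoint and payload, for illustration only; this is
# not KnowledgeScout's documented API. It shows the shape of a per-tenant
# setting with separate policies for internal and customer-facing chat.
TENANT = "acme"
payload = {
    "internal_chat": {"store_transcripts": True, "retention_days": 90},
    "customer_chat": {"store_transcripts": False},  # aggregate metadata only
}

resp = requests.put(
    f"https://api.example.com/v1/tenants/{TENANT}/conversation-log",  # hypothetical URL
    json=payload,
    headers={"Authorization": "Bearer <admin-token>"},
    timeout=10,
)
resp.raise_for_status()
```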

The agentic write loop: analytics, machine-readable.

Both search analytics and chat analytics are exposed through the MCP server alongside the rest of the knowledge base. That means your team can build an AI agent that runs the loop:

  1. Read the analytics. Spot failed searches. Find weak chat answers. Identify documents that keep getting cited.
  2. Decide which gap to fill next. Generate a draft article using the AI Writer or your own provider, grounded in the existing knowledge base.
  3. Submit the draft via MCP. It lands in the human review queue.
  4. An editor reviews, edits if needed, and publishes. The next analytics cycle reflects whether the new article actually reduced failed searches.

KnowledgeScout doesn't run this loop in-app. We give you the building blocks: machine-readable analytics, MCP write access, and a draft-to-review system. Your team builds the agent that fits your governance model. More on AI agent integration.
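
For illustration, here's a minimal sketch of such an agent in Python, using the official MCP Python SDK. The server launch command and the tool names ("analytics_failed_searches", "draft_create") are assumptions invented for the example, not KnowledgeScout's documented identifiers; your MCP server's tool list is the source of truth.

```python
"""Minimal agent sketch: read one analytics gap via MCP, submit one draft.

Uses the official MCP Python SDK. The server launch command and the tool
names ("analytics_failed_searches", "draft_create") are assumptions made
for this example; list your server's tools to find the real names.
"""
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="knowledgescout-mcp")  # hypothetical command


def draft_with_your_llm(question: str) -> str:
    # Stub: call the AI Writer or your own model/provider here.
    return f"# {question}\n\n(Draft generated for human review.)"


async def run_once() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Read the gaps: failed searches from the last 30 days.
            result = await session.call_tool(
                "analytics_failed_searches",   # hypothetical tool name
                arguments={"days": 30},
            )
            gaps = json.loads(result.content[0].text)  # assumes JSON text output

            # 2. Pick the most frequent unanswered query.
            top_gap = max(gaps, key=lambda g: g["count"])

            # 3. Draft against the gap and submit it; the draft lands in the
            #    human review queue rather than publishing directly.
            await session.call_tool(
                "draft_create",                # hypothetical tool name
                arguments={
                    "title": top_gap["query"],
                    "body": draft_with_your_llm(top_gap["query"]),
                },
            )


asyncio.run(run_once())
```

The drafting step is stubbed; that's where the AI Writer or your own provider plugs in.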

The point: drafts always go to a human. The loop ends with approval, not silent publishing.

Who this is for

Content and L&D teams

Stop guessing what to write next. Failed searches and chat gaps tell you exactly which articles are missing, with the question already phrased.

Regulated industries

Configurable conversation log retention up to seven years, full audit trail, version history on cited articles. Show a regulator what an AI surfaced and on what content basis.

Teams building AI write-back agents

If you want an agent that watches the gaps and proposes drafts, the MCP-readable analytics plus the draft-to-review system are exactly the building blocks. Bring your own agent, your own model.

Operations leaders

When agents give wrong answers, you need to know whether the article was wrong, was stale, or was never read. Search analytics, version history, and read acknowledgements tell you which.

Why KnowledgeScout's analytics

1.

Actionable signal, not vanity charts

Failed searches, weak chat answers, most-cited documents — every metric points at a specific decision. Rewrite, consolidate, archive, or convert a document into an article.

2.

Conversation log is your call

Don't store, store for a few weeks, store for seven years. Configurable per tenant and per chat surface (internal versus customer-facing). Match your privacy posture, not ours.

3.

Machine-readable through MCP

Both analytics surfaces are exposed via the MCP server. Your team builds the agent that reads the gaps and proposes drafts. We don't run the loop in-app — we give you the building blocks to run it your way.

4.

Drafts always go to human review

Even when an AI agent generates an article from analytics insights, the draft lands in the human review queue. The loop ends with approval. Nothing publishes silently.

Common questions

Are conversation logs stored by default?

Yes: 30 days by default, configurable per tenant. You can opt out entirely (no conversation content stored), or extend retention up to seven years for compliance. Internal team chat and customer-facing chat are configured separately, so you can keep one and drop the other if you want.

Who can see analytics?

Editors and Admins by default. Editors need it because analytics are how they decide which content to write next, refresh, or retire. Readers don't see analytics. Permissions are configurable per workspace if you want to tighten or loosen the default.

Can I disable analytics?

Search analytics and chat analytics are part of how the platform reports content health, so they're on by default. Conversation log storage is independently configurable, and full conversation transcripts can be set to never store. Reach out if your compliance setup needs more granular controls.

Can my AI agents read the analytics via MCP?

Yes. Search analytics and chat analytics are exposed through the MCP server alongside the rest of the knowledge base. Customer teams build their own agents that read failed searches and weak chat answers, identify content gaps, and propose drafts. Drafts always land in the human review queue, so the loop ends with a human approval step, not silent publishing.

What if my regulator asks for AI conversation history?

If you've opted into conversation log storage, the full transcript is auditable per user, per session, with timestamps and cited sources. Combine that with article version history, draft attribution, and read acknowledgement records, and you can show a regulator exactly what an AI surfaced and on what content basis. If you've opted out of storage, only aggregate metadata is available, which may not meet some regulators' requirements.

Are analytics available on every plan?

Search and chat analytics are on every plan, including Startup. Configurable conversation log retention is also on every plan. Reading analytics through MCP for agentic content loops is on Business and Enterprise, since MCP integration itself is a Business+ feature.

Stop guessing what to write next.

Search analytics, chat analytics, configurable conversation log, and machine-readable everything for the agents you build on top.