Many teams try to train a chatbot for a private application by crawling a few URLs and hoping the model will infer the rest. That usually breaks down for data-heavy products, dashboards, and logged-in experiences. This tutorial shows a better workflow: use Chat.co MCP to inspect the product logic, export the real source of truth, and train the bot start to finish in about 30 minutes.
Real Workflow, Condensed for Docs
This guide is based on a real build completed with ChatGPT 5.4 using Chat.co MCP. The exact assistant can vary, but the workflow stays the same: inspect the app, export structured data, train the bot in layers, and validate with real business questions.
1. Why Private Apps Need a Different Workflow
Public marketing sites are often crawl-friendly. Private production apps are usually not. In many modern products, the visible UI is only a shell around live data, calculated summaries, and business rules that exist in code.
Crawl-only approach
- Indexes labels, navigation, and empty state shells
- Misses private or runtime-only data
- Understands screens, but not calculation logic
- Breaks on domain-specific questions about totals and scenarios
MCP + structured export
- Starts from the actual data model and transformation logic
- Captures the meaning behind the dashboard numbers
- Lets an AI assistant configure Chat.co directly through tools
- Produces answers that match how operators already discuss the product
If your users ask questions like "what changed versus recommended," "which category drives the biggest shift," or "how much was moved to federal funding," a simple crawl is rarely enough.
2. The 30-Minute MCP Plan
The speed comes from using MCP as the working bridge between your AI assistant and Chat.co. Instead of switching between a code editor, exports folder, and dashboard tabs, you can move through the setup as one guided session.
1. Inspect the codebase and identify how the product calculates key numbers.
2. Package the structured live data and add short narrative guides.
3. Upload documents, replace text training, seed Q&A, and tune the bot.
4. Test against stakeholder questions and fix the layer that failed.
Before you start: set up MCP first. If you still need the local connection and approval flow, read MCP Agent Setup and API vs MCP.
3. Step 1: Inspect the Product Logic
Start by using your AI coding assistant to inspect the application codebase. The goal is not to train the chatbot on code directly. The goal is to understand how the product thinks before you package any training content.
Questions to answer first
- Where does the dashboard get its numbers?
- Which hooks, services, or queries compute the KPI cards?
- Are totals raw sums, filtered sums, or adjusted values?
- Which exclusions, fallback rules, or scenario switches affect the results?
- Which identifiers should be treated as first-class lookup keys?
This step prevents a common failure mode: the bot retrieves the right record but answers the wrong business question because it never learned how the dashboard defines the numbers.
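To make the failure mode concrete, here is a minimal sketch of why "the total" is ambiguous. The field names and values are illustrative, not taken from any real app: the same line items produce different numbers depending on whether the dashboard sums everything raw or honors scenario and exclusion rules.

```python
# Hypothetical line items; field names and values are illustrative only.
LINE_ITEMS = [
    {"code": "ABC-123", "amount": 1200.0, "scenario": "recommended", "excluded": False},
    {"code": "ABC-124", "amount": 800.0,  "scenario": "recommended", "excluded": True},
    {"code": "ABC-125", "amount": 500.0,  "scenario": "petitioned",  "excluded": False},
]

def raw_total(items):
    """Sum every amount, ignoring scenario and exclusion flags."""
    return sum(i["amount"] for i in items)

def dashboard_total(items, scenario="recommended"):
    """Sum the way a KPI card might: one scenario, exclusions honored."""
    return sum(
        i["amount"]
        for i in items
        if i["scenario"] == scenario and not i["excluded"]
    )

print(raw_total(LINE_ITEMS))        # 2500.0
print(dashboard_total(LINE_ITEMS))  # 1200.0
```

A bot trained only on the raw records would confidently report 2500.0 when the dashboard shows 1200.0. Step 1 exists to surface exactly this kind of gap.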
4. Step 2: Export Structured Live Data
Once you understand the product logic, choose the right source material. For a private SPA, internal dashboard, or data-heavy workflow app, the best primary source is often a structured export rather than HTML.
Good export package
- entities.json
- fiscal-years.json
- funding-sources.json
- line-items.json
- line-item-values.json
- event-history.json
- rate-parameters.json
- budget-summaries.json
- overview.md
- decisionmaker-guide.md
The structured files give the bot factual coverage. The two short narrative files supply context: what the dataset contains and how to interpret the main dashboard visuals.
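The packaging step can be scripted. This is a minimal sketch, assuming your product code can already hand you JSON-serializable datasets; the file stems and the sample record shapes are placeholders for whatever your real data model produces.

```python
import json
from pathlib import Path

def write_export_package(out_dir, datasets, guides):
    """Write each dataset to its own JSON file and each guide to markdown.

    `datasets` maps file stems (e.g. "entities") to JSON-serializable data;
    `guides` maps stems (e.g. "overview") to markdown text.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for stem, data in datasets.items():
        (out / f"{stem}.json").write_text(json.dumps(data, indent=2))
    for stem, text in guides.items():
        (out / f"{stem}.md").write_text(text)
    return sorted(p.name for p in out.iterdir())

files = write_export_package(
    "export-package",
    datasets={"entities": [{"id": "E-1", "name": "Sample Entity"}]},
    guides={"overview": "# Overview\nWhat the dataset contains.\n"},
)
print(files)  # ['entities.json', 'overview.md']
```

Keeping the export in one script makes it repeatable: when the live data changes, you regenerate the whole package rather than hand-editing individual files.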
Decision rule
If your product is public and mostly static, crawl first. If it is authenticated, client-rendered, or driven by live runtime data, export structured data first and use crawl only as a supporting source.
5. Step 3: Train in Layers
The strongest setup is layered. Do not stop after document upload. Use MCP to push the bot through three distinct training layers.
Layer 1: Structured documents
Upload the source files so the bot can answer lookups about entities, codes, fiscal year values, funding assignments, and line-item detail.
Layer 2: Text training
Add short text definitions that explain business meaning: what counts as "recommended" versus "petitioned," what "moved to federal" means, and which totals belong to which scenario.
Layer 3: Exact Q&A for real prompts
Seed the bot with the exact high-value questions stakeholders ask in meetings. This is often the biggest jump in answer quality.
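A sketch of what a curated Q&A seed might look like before bulk import. Chat.co's actual import format is not documented here, so the field names below are assumptions; the point is that each record pairs a real stakeholder question with a vetted answer.

```python
import json

# Hypothetical Q&A seed records; the real bulk-import schema may differ,
# so treat "question"/"answer" as placeholder field names.
QA_SEED = [
    {
        "question": "How much are they asking for and what is being recommended?",
        "answer": "The petitioned total is the amount requested; the recommended "
                  "total is the adjusted figure shown on the dashboard.",
    },
    {
        "question": "What changed versus the baseline scenario?",
        "answer": "Compare the recommended scenario against baseline line items, "
                  "and label the fiscal year and scenario in the answer.",
    },
]

def to_jsonl(records):
    """Serialize seed records as JSONL, one question-answer pair per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(QA_SEED)
print(len(jsonl.splitlines()))  # 2
```

Curating these by hand is the slow part; the payoff is that the bot answers the meeting-room phrasing verbatim instead of approximating it.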
MCP workflow shape
1. Create or select the chatbot
2. Set the text prompt
3. Upload structured documents
4. Replace text training
5. Bulk import curated Q&A
6. Update appearance and starter questions
7. Validate with real user prompts
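The seven steps can be sketched as one scripted session. The tool names below are hypothetical stand-ins for whatever Chat.co's MCP server actually exposes; a real session would route these calls through an MCP client rather than a local dictionary, but the shape of the loop is the same.

```python
# Simulated session: tool names are assumptions, not Chat.co's real tool list.
def run_session(tools, bot_name):
    log = []

    def call(tool, **args):
        log.append(tool)
        return tools[tool](**args)

    bot = call("create_chatbot", name=bot_name)
    call("set_prompt", bot_id=bot["id"], prompt="Answer with business framing.")
    call("upload_documents", bot_id=bot["id"], files=["entities.json"])
    call("replace_text_training", bot_id=bot["id"], text="Key definitions...")
    call("bulk_import_qa", bot_id=bot["id"], pairs=[])
    call("update_appearance", bot_id=bot["id"], title=bot_name)
    call("validate", bot_id=bot["id"], prompts=["What changed versus baseline?"])
    return log

# Stand-in handlers so the sketch runs without a live MCP connection.
FAKE_TOOLS = {
    "create_chatbot": lambda name: {"id": "bot-1", "name": name},
    "set_prompt": lambda **kw: True,
    "upload_documents": lambda **kw: True,
    "replace_text_training": lambda **kw: True,
    "bulk_import_qa": lambda **kw: True,
    "update_appearance": lambda **kw: True,
    "validate": lambda **kw: True,
}

print(run_session(FAKE_TOOLS, "Budget Bot"))
```

Even as a simulation, scripting the order matters: replacing text training before bulk-importing Q&A avoids the seeds being wiped by a later full replace.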
The value of MCP here is speed and continuity. Your assistant can inspect the product, prepare the training assets, and call Chat.co tools without forcing you into a manual dashboard-only workflow.
6. Step 4: Add Prompt Rules and UX
A strong prompt is part of the training system. Without it, the bot can retrieve the right row and still answer the wrong question.
Prompt rules that usually matter
- Default to the product's business framing, not generic summarization
- Separate broad totals from line-item lookup answers
- Label fiscal year, scenario, and comparison basis clearly
- Do not mix recommended, petitioned, baseline, or current values
- Treat codes, line numbers, and entity names as search keys
- Refuse to invent missing numbers
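Here is one way those rules might read as an actual system prompt. The wording is illustrative, not a Chat.co template; adapt the domain terms to your product.

```
You are the assistant for this product's budget dashboard.
- Answer in the product's business framing; do not give generic summaries.
- Keep broad scenario totals separate from line-item lookup answers.
- Always label the fiscal year, scenario, and comparison basis.
- Never mix recommended, petitioned, baseline, or current values in one figure.
- Treat codes, line numbers, and entity names as exact search keys.
- If a number is not in the training data, say so; never estimate or invent it.
```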
Do not ignore presentation
Once the content is reliable, align the bot with the product experience users already trust:
- Title and welcome message
- Starter questions
- Brand colors and launcher behavior
- Citation display expectations
Rule of thumb: keep the visual language consistent with the product, but write the welcome text for the specific job this bot is supposed to do.
7. Step 5: Test with Stakeholder Questions
Developer tests are too narrow. Real users ask broader, fuzzier questions that require interpretation and correct framing.
Developer-style checks
- What is code ABC-123?
- Which funding source maps to line item X?
- Show the value for a known entity and year.
Stakeholder-style checks
- How much are they asking for and what is being recommended?
- What changed versus the baseline scenario?
- Which category contributes most to the reduction?
- What is the customer impact if this shift stays in place?
When a response fails, trace it back to the right layer:
- The bot had the data but not the interpretation
- The bot had the interpretation but not the exact phrasing
- The question was ambiguous and needed a stronger prompt rule
That feedback loop is what turns a technically trained bot into a useful product.
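The feedback loop is easier to run if failures are logged against a layer instead of discussed ad hoc. A minimal sketch, with layer names matching the three training layers plus prompt rules; the questions are the illustrative examples from above.

```python
# Tally failed answers per training layer so fixes target the right asset.
LAYERS = ("documents", "text-training", "qa-seeds", "prompt-rules")

def triage(failures):
    """Count failures per layer; `failures` is a list of (question, layer) pairs."""
    counts = {layer: 0 for layer in LAYERS}
    for _question, layer in failures:
        if layer not in counts:
            raise ValueError(f"unknown layer: {layer}")
        counts[layer] += 1
    return counts

report = triage([
    ("What changed versus baseline?", "text-training"),
    ("Which category drives the reduction?", "qa-seeds"),
    ("How much moved to federal?", "text-training"),
])
print(report)
# {'documents': 0, 'text-training': 2, 'qa-seeds': 1, 'prompt-rules': 0}
```

A report like this makes the next action obvious: two text-training gaps means writing definitions, not re-uploading documents.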
8. Keep Sensitive Data Out of the Workflow
This workflow is especially useful for private apps, which means security discipline is part of the tutorial.
- Never commit API keys or local key files to the repo
- Rotate temporary credentials after setup or demos
- Redact account IDs, internal URLs, and private labels in public write-ups
- Keep training exports separate from the application repo when they contain live data
- Use placeholder screenshots and sample payloads unless you have explicit approval
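Redaction for write-ups can also be scripted rather than done by eye. A minimal sketch: the patterns below are illustrative guesses at common key, account-ID, and internal-URL shapes, so extend them to match your real formats before trusting the output.

```python
import re

# Illustrative patterns only -- adjust to your real identifier and key formats.
PATTERNS = [
    (re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"), "[REDACTED_KEY]"),
    (re.compile(r"\bacct-\d{4,}\b"), "[REDACTED_ACCOUNT]"),
    (re.compile(r"https?://internal\.\S+"), "[REDACTED_URL]"),
]

def redact(text):
    """Replace anything matching a sensitive pattern with a placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Key sk-abc12345678 for acct-99871 at https://internal.example/app"
print(redact(sample))
# Key [REDACTED_KEY] for [REDACTED_ACCOUNT] at [REDACTED_URL]
```

Running every draft through a pass like this is cheap insurance; a manual review can then focus on domain-specific labels a regex cannot catch.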
Final takeaway
The strongest Chat.co bots for private products are not built by training on the website alone. They are built by finding the real source of truth, teaching the bot how the product defines its numbers, and validating against the questions decision-makers actually ask. MCP compresses that work into one practical build session.
If you want to connect an AI client first, start with MCP Agent Setup. If you want the broader positioning first, read API vs MCP.
