
Documentation Index

Fetch the complete documentation index at: https://docs.manticscore.com/llms.txt

Use this file to discover all available pages before exploring further.

The Chat Agent is a conversational AI with full awareness of your product context. It knows your research, your build plans, and your connected integrations. In a single conversation you can ask it to look up existing research, run new market analysis, expand a build tree node, kick off a Forge coding run, create a Linear issue, or post to Slack — all through natural language. Each action it takes is reflected back to you in the stream as it happens.

What the agent can do

The agent has access to a core set of ManticScore tools, plus any integration you've connected via Composio.

ManticScore tools:
Tool                    What it does
query_research          Look up and summarize your existing research
run_research            Start a new market research job
expand_build_node       Expand a collapsed node in your build graph
assess_node_risk        Get a risk and effort assessment for a node
deep_dive_feature       Run feature deep research on specific features
start_implementation    Kick off a Forge coding run
Composio tools (discovered at runtime):
The agent can discover and use any tool from your connected integrations: GitHub, Linear, Jira, Slack, Notion, Gmail, and 1,000+ more. It searches for the right tool, retrieves its schema, and calls it without you needing to configure anything beyond the initial connection.

Sending a message

Chat uses streaming NDJSON (newline-delimited JSON): your client POSTs the conversation history and receives a stream of events back.
messages (array, required)
    Conversation history. Each message: {"role": "user" | "assistant", "content": "string"}. Content limit: 50,000 characters per message.

idea (string, required)
    Your product idea. This gives the agent context throughout the conversation. Up to 50,000 characters.

project_id (string)
    UUID of your project. When provided, the agent has access to all research and build graphs in that project.

session_id (string)
    Resume a previous conversation. The agent uses this to maintain memory across messages. Omit to start a new session.

has_research (boolean, default: false)
    Set to true if you have completed research the agent should be aware of.

has_build_tree (boolean, default: false)
    Set to true if you have a build graph the agent should be aware of.

research_context (string)
    Serialized research context to inject directly into the agent's context window. Up to 100,000 characters.
curl --request POST \
  --url https://api.manticscore.com/chat \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      {"role": "user", "content": "What are the biggest white spaces in my market research, and which one should I build first?"}
    ],
    "idea": "A mobile app that helps freelancers track time and automatically generate invoices",
    "project_id": "proj_uuid_here",
    "session_id": null,
    "has_research": true,
    "has_build_tree": true
  }'
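The same request body can be assembled programmatically. The Python sketch below is illustrative: the helper name build_chat_payload and its validation are conveniences of this example, not part of the API, but the field names and the 50,000-character limit come from the parameter list above.

```python
# Illustrative helper (not part of the API): builds the JSON body for
# POST /chat and enforces the documented 50,000-character content limit.
MAX_CONTENT_CHARS = 50_000

def build_chat_payload(messages, idea, project_id=None, session_id=None,
                       has_research=False, has_build_tree=False,
                       research_context=None):
    for m in messages:
        if len(m["content"]) > MAX_CONTENT_CHARS:
            raise ValueError("message content exceeds 50,000 characters")
    if len(idea) > MAX_CONTENT_CHARS:
        raise ValueError("idea exceeds 50,000 characters")
    payload = {
        "messages": messages,
        "idea": idea,
        "has_research": has_research,
        "has_build_tree": has_build_tree,
    }
    # Optional fields are omitted rather than sent as null.
    if project_id:
        payload["project_id"] = project_id
    if session_id:
        payload["session_id"] = session_id
    if research_context:
        payload["research_context"] = research_context
    return payload
```

Serialize the returned dict as JSON and POST it with the Authorization and Content-Type headers shown in the curl example.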

Stream events

The response is a stream of NDJSON events. Parse each line as a JSON object and handle events by the event field.
First event on every connection. Save the session_id to resume this conversation later.
{"v": 1, "event": "stream_start", "data": {"request_id": "req_abc", "session_id": "sess_xyz"}}
The agent is starting a new reasoning turn. stop_reason tells you whether the turn ended by calling a tool or by producing a response.
{"v": 1, "event": "agent_turn", "data": {"turn": 1, "stop_reason": "tool_use"}}
The agent is calling a tool. tool_name identifies which ManticScore or Composio tool is being invoked.
{"v": 1, "event": "tool_call", "data": {"tool_name": "query_research", "args": {"turn": 1}}}
Live status updates from within a running tool. When the agent calls run_research or start_implementation, pipeline stage events are forwarded here.
{
  "v": 1,
  "event": "tool_progress",
  "data": {
    "turn": 1,
    "tool": "run_research",
    "status": "in_progress",
    "pipeline_event": "search stage completed"
  }
}
A chunk of the agent’s text response. Append each text value to build the full message.
{"v": 1, "event": "message_delta", "data": {"text": "The largest white space in your market is "}}
Emitted at the end of the response. Contains a summary of the conversation, the tools used, and the session_id for resuming.
{
  "v": 1,
  "event": "conversation_summary",
  "data": {
    "summary": "Analyzed white spaces in freelance invoicing market. Recommended focusing on AI-assisted late payment handling.",
    "tools_used": ["query_research"],
    "session_id": "sess_xyz"
  }
}
Final event. The stream is closed.
{"v": 1, "event": "done", "data": {}}

Managing sessions

The agent maintains memory within a session and can reference earlier messages. Each session is tied to a session_id returned in stream_start.

List sessions

curl --request GET \
  --url 'https://api.manticscore.com/chat/sessions?limit=20' \
  --header 'Authorization: Bearer <token>'

Get a session with full event history

curl --request GET \
  --url https://api.manticscore.com/chat/sessions/sess_xyz \
  --header 'Authorization: Bearer <token>'

Delete a session

curl --request DELETE \
  --url https://api.manticscore.com/chat/sessions/sess_xyz \
  --header 'Authorization: Bearer <token>'
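If you are wrapping these endpoints in code, a small helper that builds the request method and URL keeps them in one place. This is an illustrative sketch, not an official client; the paths mirror the curl examples above, and the returned (method, url) pairs are meant for whatever HTTP client you use.

```python
BASE_URL = "https://api.manticscore.com"

# Illustrative request builders for the session endpoints.
def list_sessions_request(limit=20):
    return ("GET", f"{BASE_URL}/chat/sessions?limit={limit}")

def get_session_request(session_id):
    # Returns the session with its full event history.
    return ("GET", f"{BASE_URL}/chat/sessions/{session_id}")

def delete_session_request(session_id):
    return ("DELETE", f"{BASE_URL}/chat/sessions/{session_id}")
```

Each request still needs the Authorization: Bearer header shown in the curl examples.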

Cross-session memory

The agent builds memory across sessions within a project. You can inspect or clear this memory.

Fetch memories for a project:
curl --request GET \
  --url 'https://api.manticscore.com/chat/memory?project_id=proj_uuid_here' \
  --header 'Authorization: Bearer <token>'
Clear memories:
curl --request DELETE \
  --url 'https://api.manticscore.com/chat/memory?project_id=proj_uuid_here' \
  --header 'Authorization: Bearer <token>'
Omit project_id to clear all memories across all projects.
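The project_id-is-optional behavior is easy to get wrong when building the URL in code. A small illustrative helper (not part of any official client) makes the two cases explicit:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.manticscore.com"

def clear_memory_url(project_id=None):
    """Build the DELETE /chat/memory URL. Per the docs, omitting
    project_id clears memories across all projects."""
    if project_id is None:
        return f"{BASE_URL}/chat/memory"
    return f"{BASE_URL}/chat/memory?{urlencode({'project_id': project_id})}"
```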

Limits and credits

Limit                   Value
Rate limit              20 requests / minute
Credit cost             1 credit per conversation
Max message length      50,000 characters
Max research context    100,000 characters
Pass session_id on follow-up messages within the same conversation. The agent uses session context to avoid repeating tool calls and to give more relevant answers based on what it already retrieved.
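In practice that means a follow-up request echoes back the session_id from stream_start along with the accumulated history. The helper below is an illustrative sketch of that pattern; the function name and argument list are inventions of this example.

```python
# Illustrative sketch: resume a conversation by reusing the session_id
# from the first stream's stream_start event.
def build_followup(history, assistant_text, next_user_message, idea, session_id):
    # Append the assistant's completed reply and the new user turn
    # to the prior history.
    messages = history + [
        {"role": "assistant", "content": assistant_text},
        {"role": "user", "content": next_user_message},
    ]
    return {
        "messages": messages,
        "idea": idea,
        "session_id": session_id,  # lets the agent reuse earlier tool results
    }
```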