The Chat API gives you a streaming conversational agent that can take action on your behalf — running market research, expanding build nodes, kicking off Forge coding runs, and calling any connected integration like Linear, Slack, or GitHub. Every message you send streams back NDJSON events as the agent thinks, calls tools, and composes its reply. Sessions are persisted automatically so you can resume a conversation at any time.

Documentation Index
Fetch the complete documentation index at: https://docs.manticscore.com/llms.txt
Use this file to discover all available pages before exploring further.
Start a chat
POST /chat is the primary endpoint. It accepts your message history and streams back a sequence of NDJSON events until the agent finishes its reply.
This endpoint uses Stream auth. Your session token’s TTL is extended on connect so long-running agentic turns don’t expire mid-stream.
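As a minimal sketch, a request body might be built like this. The field names (`messages`, `project_id`, `session_id`) are assumptions inferred from the parameter descriptions that follow, not a confirmed schema:

```python
import json

# Illustrative POST /chat request body. Field names here are assumptions
# based on the parameter descriptions below -- verify against the schema.
payload = {
    "messages": [
        {"role": "user", "content": "What should we build first?"},
    ],
    "project_id": None,  # null: agent falls back to the most recent project
    "session_id": None,  # null: the server mints a new session
}

body = json.dumps(payload)
```

Send `body` as the request payload and read the response line by line as it streams.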
The request accepts the following fields:

- Full conversation history. Each message has a role (user or assistant) and a content string (max 50,000 characters per message).
- The product idea the conversation is focused on. Max 50,000 characters. The agent uses this as persistent context for all tool calls.
- UUID of an existing project to scope tool calls (research, build graphs) to. When set, the server prepends the project’s one-line digest to the agent’s system prompt so the agent knows what’s been happening in that project. Pass null to work without a project — the agent falls back to the user’s most recently active project (within the last 24 hours) when one is needed.
- ID of an existing session to continue. Pass null to start a new session — the server mints one and sends it back in the stream_start event.
- Tell the agent whether completed research exists for this project. Helps it decide whether to call run_research or query_research first.
- Tell the agent whether a build graph exists. Helps it choose between expand_build_node and other tools.
- Pre-loaded research text to inject into the agent’s context window. Max 100,000 characters. Useful when you want to avoid a query_research round-trip.

Stream events
The response is Content-Type: application/x-ndjson. Each line is a complete JSON object: {"v": 1, "event": "<type>", "data": {...}}.
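A minimal sketch of decoding one NDJSON line, using only the envelope shape described above:

```python
import json

def parse_event(line: str) -> tuple[str, dict]:
    """Decode one NDJSON line into (event_type, data), checking the envelope."""
    obj = json.loads(line)
    if obj.get("v") != 1:
        raise ValueError(f"unsupported envelope version: {obj.get('v')}")
    return obj["event"], obj.get("data", {})

# Example line, shaped like the message_delta event from the table below.
event, data = parse_event('{"v": 1, "event": "message_delta", "data": {"text": "Hi"}}')
```

In a real client you would call `parse_event` on every non-empty line of the streamed response body.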
| Event | Data | Description |
|---|---|---|
| stream_start | {"request_id": "string", "session_id": "string"} | First event. Save session_id to resume this conversation later. |
| agent_turn | {"turn": int, "stop_reason": "tool_use\|end_turn"} | The agent is starting a new turn. tool_use means it’s calling a tool next. |
| thinking_delta | {"turn": int, "text": "string"} | Optional extended reasoning text. Append to a hidden reasoning buffer. |
| tool_call | {"tool_call_id": "string", "tool_name": "string", "args": {"turn": int}} | The agent invoked a tool. Show an activity indicator with the tool name. |
| tool_progress | {"turn": int, "tool": "string", "status": "in_progress\|completed\|error", "duration_ms?": int, "error?": "string", "job_id?": "string"} | Live status update for the running tool. job_id is set when a pipeline was started. |
| message_delta | {"text": "string"} | Append this text to the current message bubble. |
| card.signin_required | {"toolkit": "string", "...": "..."} | The agent attempted an action that needs a Composio sign-in. Render the inline card so the user can authorize without leaving the chat. |
| card.oauth_required | {"toolkit": "string", "auth_url": "string", "...": "..."} | The user must complete an OAuth flow before the agent can continue. The card carries an auth_url to open in a browser. |
| card.agent_spawned | {"agent_id": "string", "goal": "string", "schedule_cron": "string", "next_run_at": "string", "actions": [{"label": "string", "tool": "string", "args": {}}]} | Emitted after spawn_agent succeeds. Render the agent card with the supplied quick actions (e.g. Pause, Run now) wired back to update_agent. |
| conversation_summary | {"summary": "string", "tools_used": ["string"], "turns": int, "session_id": "string"} | Emitted at the end of every response. Use to update the session card. |
| error | {"message": "string"} | An unrecoverable error occurred. Close the stream and surface the message. |
| done | {} | Stream is finished. |
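Putting the table together, a client loop might fold the stream into a reply. This is a sketch over already-parsed event objects; a real client would decode each NDJSON line first and would also update UI state for tool_call, tool_progress, thinking_delta, and card.* events:

```python
def handle_stream(events):
    """Fold a sequence of parsed envelope objects into (session_id, reply_text)."""
    session_id = None
    text_parts = []
    for obj in events:
        event, data = obj["event"], obj.get("data", {})
        if event == "stream_start":
            session_id = data["session_id"]  # save to resume this session later
        elif event == "message_delta":
            text_parts.append(data["text"])  # append to the current message bubble
        elif event == "error":
            raise RuntimeError(data["message"])  # unrecoverable: surface and stop
        elif event == "done":
            break
        # tool_call / tool_progress / thinking_delta / card.* -> UI updates
    return session_id, "".join(text_parts)

# Synthetic events shaped like the table above, for illustration only.
events = [
    {"v": 1, "event": "stream_start", "data": {"request_id": "r1", "session_id": "s1"}},
    {"v": 1, "event": "message_delta", "data": {"text": "Hello"}},
    {"v": 1, "event": "message_delta", "data": {"text": " world"}},
    {"v": 1, "event": "done", "data": {}},
]
sid, text = handle_stream(events)
```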
card.* events are persisted alongside other session events, so they replay correctly when you re-open a session via GET /chat/sessions/{id}.

Built-in agent tools
The agent has access to the following tools on every request. Composio tools (GitHub, Linear, Slack, Notion, etc.) are discovered at runtime based on the user’s connected accounts — the agent can invoke any of 1000+ tools without a hardcoded list.

| Tool | What it does |
|---|---|
| query_research | Searches existing completed research for the current project. |
| run_research | Starts a new market research pipeline and streams its progress. |
| expand_build_node | Expands a collapsed build graph node into child tasks. |
| assess_node_risk | Runs an LLM risk and effort assessment on a build node. |
| deep_dive_feature | Runs a single-feature deep research job. |
| deep_research_features | Runs deep research across multiple features simultaneously. |
| start_implementation | Kicks off a Forge agentic coding run to produce a GitHub PR. |
| spawn_agent | Deploys a long-running autonomous agent that runs the user’s goal on a schedule (cron) or in response to events (on:research_completed, on:signal_detected, on:forge_pr_created, on:build_graph_completed, on:brief_ready). The agent re-reads its goal each run and discovers Composio tools at runtime. Emits a card.agent_spawned event the iOS client renders inline. If a required toolkit isn’t connected yet, emits a card.oauth_required instead. |
| update_agent | Edits an existing spawned agent — change goal, schedule, toolkits, or pause/resume by toggling enabled. Accepts either a structured patch object or a plain-language natural_language instruction the server re-prompts to derive the diff. |
| query_knowledge | Looks up everything the system knows about a canonical entity. Pass canonical_kind (company, feature, or blueprint) and canonical_id to receive the entity record plus every artifact (research, brief, signal, etc.) that references it. Use this when the user asks for cross-research context like “what do you know about Stripe”. |
When run_research, expand_build_node, or start_implementation run, the server bridges their pipeline stage events as tool_progress events — you see the full real-time progress inline in the chat stream.

run_research from chat is dispatched through the same background workflow runner as POST /research, so chat-initiated research uses identical pipeline behavior and capacity to direct API calls. The agent waits up to 180 seconds for the pipeline to emit done. If the pipeline hasn’t completed in that window, the tool returns {"status": "timeout", ...} to the model — the underlying research job continues running in the background and can be polled or streamed via the standard research endpoints.

deep_research_features is fire-and-forget: the agent dispatches the job through the same background workflow runner as POST /feature-research and returns immediately with {"job_id": "...", "status": "queued"}. Subscribe to GET /feature-research/{job_id}/events to stream progress, or poll GET /feature-research/{job_id}/status.

Sessions
List sessions
Maximum sessions to return. Range: 1–100.
Pagination offset.
200 response
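Paging through all sessions with the limit/offset parameters above can be sketched like this. The `fetch_page` callable is a stand-in for a real GET request to the list endpoint; nothing here is a confirmed client API:

```python
def iter_all(fetch_page, limit=100):
    """Yield every item from a limit/offset-paginated endpoint.

    fetch_page(limit, offset) stands in for a GET request and must
    return the list of items on that page.
    """
    offset = 0
    while True:
        page = fetch_page(limit, offset)
        yield from page
        if len(page) < limit:  # short page means no more results
            return
        offset += limit

# Exercise the helper against an in-memory stand-in for the API.
sessions = [{"id": i} for i in range(7)]
fake_fetch = lambda limit, offset: sessions[offset:offset + limit]
all_ids = [s["id"] for s in iter_all(fake_fetch, limit=3)]
```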
Get a session
Returns the session object plus its full persisted event log, which you can use to replay or display conversation history.

200 response
Delete a session
Permanently deletes the session and its event log. Returns 204 No Content on success.
Memory
The agent stores durable memories across sessions — things like preferred naming conventions, team context, and past decisions. Memories are scoped to a project when project_id is provided.
List memories
Scope memories to a specific project UUID. Omit to list all memories.
Maximum memories to return. Range: 1–100.
200 response
Clear memories
Removes all memories for the given scope. Omit project_id to clear all memories across every project.
200 response