Documentation Index
Fetch the complete documentation index at: https://docs.manticscore.com/llms.txt
Use this file to discover all available pages before exploring further.

Feature deep research runs a four-stage pipeline — scope, gather, analyze, synthesize — that examines how competitors have implemented specific features. For each feature you pass in, the pipeline identifies which competitors have built it, scrapes their implementations, analyzes patterns and edge cases, and produces a blueprint with a recommended build order and a risk matrix. You can start a job manually or let it auto-chain from a completed mode=feature market research run.
Start feature deep research
Rate limit: 5 requests per minute.

Request body:
- The features to analyze. Each object must have an `id`, a `name`, and a `source`.
- The product idea that provides context for the analysis. Maximum 5,000 characters.
- UUID of the project to associate this job with. Pass `null` for unattached jobs.
- UUID of the market research run that produced the features. Providing this gives the pipeline additional context.
202 response. Stream live events from GET /feature-research/{job_id}/events, or poll status from GET /feature-research/{job_id}/status.
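A minimal sketch of starting a job with the standard library. The endpoint path POST /feature-research comes from this page; the base URL, auth header, and the body field names (`features`, `idea`, `project_id`, `market_research_run_id`) are illustrative guesses at the documented parameters, not confirmed names.

```python
import json
import urllib.request

BASE_URL = "https://api.manticscore.com"  # assumed base URL


def build_start_payload(features, idea, project_id=None, market_research_run_id=None):
    """Build the request body for POST /feature-research.

    Field names here are illustrative guesses at the documented
    parameters, not confirmed wire names.
    """
    return {
        "features": features,          # each object needs an id, a name, and a source
        "idea": idea,                  # product idea, max 5,000 characters
        "project_id": project_id,      # null for unattached jobs
        "market_research_run_id": market_research_run_id,  # optional extra context
    }


def start_feature_research(api_key, payload):
    """POST the payload; the service replies 202 once the job is queued."""
    req = urllib.request.Request(
        f"{BASE_URL}/feature-research",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Keeping payload construction separate from the HTTP call makes the shape easy to inspect and unit-test before sending anything over the wire.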
Get job metadata
Returns lightweight metadata about a feature research job. Safe to poll while the job is running.

200 response
- One of `queued`, `running`, `completed`, `failed`.
- Current stage in the pipeline: `queued`, `scope`, `gather`, `analyze`, `synthesize`, `completed`.
- The features the pipeline is processing, with their resolved IDs and names.
- Present only on `failed` status. A human-readable description of what went wrong.

Get job status (lightweight)
Returns only the status fields, without the feature list. Suitable for frequent polling.

200 response

Use `last_event_seq` as the cursor when reconnecting to the events stream.
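The lightweight endpoint lends itself to a simple polling loop. This sketch takes any callable that returns the parsed status body, so it can be tested without a network; the `status` and `last_event_seq` field names follow the response description above, and the default interval is an arbitrary choice.

```python
import time

TERMINAL_STATES = {"completed", "failed"}


def poll_until_done(fetch_status, interval=5.0, max_polls=720):
    """Poll the lightweight status endpoint until the job finishes.

    fetch_status is any callable returning the parsed JSON of
    GET /feature-research/{job_id}/status. Returns the final status body,
    whose last_event_seq can be used as the events-stream cursor.
    """
    for _ in range(max_polls):
        body = fetch_status()
        if body["status"] in TERMINAL_STATES:
            return body
        time.sleep(interval)
    raise TimeoutError("job did not finish within the polling budget")
```

In production you would back off the interval rather than poll at a fixed rate, but a fixed interval keeps the sketch readable.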
Get full results
Returns the complete analysis output. Only available once the job has status: completed.
200 response
- The competitor set the pipeline identified for each feature during the scope stage.
- Per-feature analysis. Each entry contains implementations found in the wild, common design patterns, technical approaches, edge cases discovered, and the open-source landscape.
- Web sources the pipeline consulted during the gather stage.

409 if the job has not yet completed. Poll GET /feature-research/{job_id}/status first and only call this endpoint when status is completed.
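A defensive fetch can treat the 409 as "not ready yet" instead of an error. The exact path of the full-results endpoint is not shown on this page, so the caller supplies the URL; the injectable `opener` (defaulting to `urllib.request.urlopen`) exists only to make the sketch testable.

```python
import json
import urllib.error
import urllib.request


def fetch_results(results_url, api_key, opener=urllib.request.urlopen):
    """Fetch the full analysis output, treating HTTP 409 as "not ready".

    results_url is the job's full-results endpoint (exact path not shown
    here, so the caller supplies it). Returns the parsed body, or None if
    the job has not completed yet — in that case, poll the status endpoint
    and retry once status is completed.
    """
    req = urllib.request.Request(
        results_url,
        headers={"Authorization": f"Bearer {api_key}"},  # assumed auth scheme
    )
    try:
        with opener(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        if err.code == 409:
            return None  # job still running; try again later
        raise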
Stream feature research events
Subscribe to live progress for a feature research job. The stream uses cursor-based replay — reconnect at any sequence number and receive all missed events before switching to live delivery.Resume from this event sequence number. Set to
0 to start from the beginning.{"v": 1, "event": "<type>", "data": {...}}. The event types are:
| Event | Description |
|---|---|
stage | A pipeline stage (scope, gather, analyze, synthesize) started or completed. |
progress | Free-text progress message within a stage. |
feature_analyzed | A single feature analysis completed. Data contains the feature ID and analysis output. |
result | All features analyzed. Contains the full results payload. |
error | A fatal error occurred. Contains message, code, and retryable. |
done | Stream is closed. Always the last event. |
Auto-chaining behavior
Feature deep research can be triggered automatically in two ways: From market research: CallPOST /research with "mode": "feature". When the market research pipeline completes, it automatically starts a feature deep research job on the top 5 features it identified. You don’t need to call POST /feature-research manually.
From build graphs: Call POST /build-graphs without a feature_research_id. The server automatically detects the latest completed feature research job for the project and injects its data into the LLM prompt.
When a feature deep research job completes, the platform fires a push notification with
type: "feature_research_complete". If you’re building a mobile client, listen for this notification to know when results are ready.