Three API calls to try it: observe text, review what landed, fetch insights.
Send text and get back typed signals with confidence scores and evidence. Then pull a report of what landed against the schema, and aggregated insights across everything observed.
```shell
curl -X POST https://sourced-workspace--sourced-api-fastapi-app.modal.run/v1/observe \
  -H "Content-Type: application/json" \
  -H "X-Sourced-App: <your-app-id>" \
  -H "Authorization: Bearer <your-app-secret>" \
  -d '{
    "subject_id": "demo-user",
    "text": "I care about building things that help people learn.",
    "source_id": "turn_1"
  }'
```

```json
{
  "traces": [
    {
      "target": "tom.person.values.helping",
      "signal": "cares_about",
      "confidence": 0.87,
      "evidence": "I care about building things that help people learn.",
      "method_id": "hybrid"
    }
  ],
  "surprises": [],
  "skipped": 0,
  "mode": "facilitation"
}
```

Each trace has a `target` (the schema path that matched), a `signal` (a predicate like `cares_about` or `stuck_on`), a confidence score, and the exact evidence that triggered it.
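Since every trace carries a confidence score, a common first step is thresholding. A minimal sketch in Python (the response dict mirrors the example above; the 0.8 cutoff is an arbitrary illustrative choice, not an API default):

```python
# Example observe response, mirroring the shape shown above.
response = {
    "traces": [
        {
            "target": "tom.person.values.helping",
            "signal": "cares_about",
            "confidence": 0.87,
            "evidence": "I care about building things that help people learn.",
            "method_id": "hybrid",
        }
    ],
    "surprises": [],
    "skipped": 0,
    "mode": "facilitation",
}

def high_confidence(traces, threshold=0.8):
    """Keep only traces whose confidence clears the threshold."""
    return [t for t in traces if t["confidence"] >= threshold]

kept = high_confidence(response["traces"])
print([(t["target"], t["signal"]) for t in kept])
```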
```shell
curl "https://sourced-workspace--sourced-api-fastapi-app.modal.run/v1/stacks/<your-stack-id>/report/demo-user" \
  -H "X-Sourced-App: <your-app-id>" \
  -H "Authorization: Bearer <your-app-secret>"
```

```json
{
  "status": "ok",
  "subject_id": "demo-user",
  "matches": [
    {
      "target": "tom.person.values.helping",
      "signal": "cares_about",
      "trace_count": 2,
      "top_confidence": 0.87
    }
  ],
  "gaps": [
    {
      "target": "tom.person.direction.practices",
      "description": "Active commitments and daily practices"
    }
  ],
  "surprises": [
    {
      "target": "tom.person.energy.flow",
      "signal": "alive_in",
      "confidence": 0.72,
      "evidence": "Time disappears when I'm making things."
    }
  ],
  "posture_distribution": {
    "receiving": 2,
    "moving": 0,
    "participating": 1
  },
  "target_coverage": 0.33,
  "total_traces": 3
}
```

Matches are schema targets with traces. Gaps are targets the person hasn't addressed yet. Surprises are traces that landed outside your schema — things you didn't ask for but the person expressed. The `posture_distribution` shows the balance of receiving, moving, and participating signals.
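One way to consume the report programmatically is to condense it along the same lines the fields are described. A sketch (field names come from the response shape above; the summary format is our own):

```python
# Example report payload, mirroring the response shown above.
report = {
    "status": "ok",
    "subject_id": "demo-user",
    "matches": [{"target": "tom.person.values.helping", "signal": "cares_about",
                 "trace_count": 2, "top_confidence": 0.87}],
    "gaps": [{"target": "tom.person.direction.practices",
              "description": "Active commitments and daily practices"}],
    "surprises": [{"target": "tom.person.energy.flow", "signal": "alive_in",
                   "confidence": 0.72,
                   "evidence": "Time disappears when I'm making things."}],
    "posture_distribution": {"receiving": 2, "moving": 0, "participating": 1},
    "target_coverage": 0.33,
    "total_traces": 3,
}

def summarize(report):
    """Condense a report into one entry per category."""
    return {
        "matched": [m["target"] for m in report["matches"]],
        "missing": [g["target"] for g in report["gaps"]],
        "surprising": [s["target"] for s in report["surprises"]],
        "coverage": f"{report['target_coverage']:.0%}",
    }

print(summarize(report))
```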
```shell
curl "https://sourced-workspace--sourced-api-fastapi-app.modal.run/v1/stacks/<your-stack-id>/insights?subject_ids=demo-user" \
  -H "X-Sourced-App: <your-app-id>" \
  -H "Authorization: Bearer <your-app-secret>"
```

```json
{
  "spans": [...],
  "traces": [
    {
      "target": "tom.person.values.helping",
      "signal": "cares_about",
      "confidence": 0.87,
      "evidence": "I care about building things that help people learn.",
      "status": "pending"
    }
  ],
  "threads": [...],
  "connections": []
}
```

Observe matched your text against the stack's schema and extracted typed signals — each trace has a target, a verb-style signal, confidence, and evidence. Report shows every dimension scored — what landed (matches), what's missing (gaps), and what surprised you (traces outside your schema). Insights aggregates all traces into spans, threads, and connections.
A complete script that observes text and fetches insights. Copy it, fill in your credentials, and run.
```python
#!/usr/bin/env python3
"""Sourced quickstart — observe, report, fetch insights."""
import requests

BASE = "https://sourced-workspace--sourced-api-fastapi-app.modal.run"
APP_ID = "<your-app-id>"
APP_SECRET = "<your-app-secret>"
STACK_ID = "<your-stack-id>"

headers = {
    "X-Sourced-App": APP_ID,
    "Authorization": f"Bearer {APP_SECRET}",
    "Content-Type": "application/json",
}

# 1. Observe some text
resp = requests.post(f"{BASE}/v1/observe", headers=headers, json={
    "subject_id": "demo-user",
    "text": "I feel energized when I build products that help people.",
    "source_id": "turn_1",
})
print("Observe:", resp.status_code, resp.json())

# 2. Report — dimensions scored against the author's frame
resp = requests.get(
    f"{BASE}/v1/stacks/{STACK_ID}/report/demo-user",
    headers=headers,
)
result = resp.json()
print(f"Matches: {len(result['matches'])}, Gaps: {len(result['gaps'])}, Surprises: {len(result['surprises'])}")
print(f"Coverage: {result['target_coverage']:.0%}")

# 3. Fetch insights (spans, threads, connections)
resp = requests.get(
    f"{BASE}/v1/stacks/{STACK_ID}/insights",
    headers=headers,
    params={"subject_ids": "demo-user"},
)
print("Insights:", resp.status_code, resp.json())
```
Every endpoint is documented with request/response schemas and a live “Try it out” button. No setup required — test directly from your browser.
Interactive explorer — try every endpoint with live requests and see response schemas.
Clean reference documentation — all endpoints, models, and parameters in one scrollable page.
| Operation | Endpoint | What it does |
|---|---|---|
| Observe | POST /v1/observe | Extract signals from text — returns traces with target, signal, confidence, evidence |
| Stack Observe | POST /v1/stacks/:id/observe | Observe within a stack context — uses the stack's schema and targets |
| Report | GET /v1/stacks/:id/report/:subject | Matches, gaps, surprises, posture distribution, coverage |
| Profile | GET /v1/stacks/:id/profile/:subject | Full person model — traces, patterns, coverage across all sessions |
| Session Summary | GET /v1/stacks/:id/session-summary/:subject | Per-session breakdown — what landed in this conversation |
| Chat | POST /v1/chat/:stack/:subject | SSE streaming conversation — sends turns, streams facilitated responses |
| Synthesize | POST /v1/stacks/:id/synthesize/:subject | Generate narrative — portrait, patterns, or summary from traces |
| Match | GET /v1/stacks/:id/matches | Cross-subject connections — shared dreams, complements, tensions |
| Insights | GET /v1/stacks/:id/insights | Aggregated view — spans, threads, connections across subjects |
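The table above can double as a tiny client-side registry. A sketch (methods and paths are copied from the table; the registry and URL builder are our convenience, not part of the API):

```python
# Operation -> (HTTP method, path template), transcribed from the table above.
ENDPOINTS = {
    "observe":         ("POST", "/v1/observe"),
    "stack_observe":   ("POST", "/v1/stacks/{stack}/observe"),
    "report":          ("GET",  "/v1/stacks/{stack}/report/{subject}"),
    "profile":         ("GET",  "/v1/stacks/{stack}/profile/{subject}"),
    "session_summary": ("GET",  "/v1/stacks/{stack}/session-summary/{subject}"),
    "chat":            ("POST", "/v1/chat/{stack}/{subject}"),
    "synthesize":      ("POST", "/v1/stacks/{stack}/synthesize/{subject}"),
    "match":           ("GET",  "/v1/stacks/{stack}/matches"),
    "insights":        ("GET",  "/v1/stacks/{stack}/insights"),
}

BASE = "https://sourced-workspace--sourced-api-fastapi-app.modal.run"

def url_for(base, op, **parts):
    """Build (method, full URL) for a named operation."""
    method, template = ENDPOINTS[op]
    return method, base + template.format(**parts)

method, url = url_for(BASE, "report", stack="my-stack", subject="demo-user")
print(method, url)
```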
Sign in to create API credentials and start integrating.
Replaces grant forms with conversation. Looks for the problem, the impact, and your specific role — listens for evidence of accountability.
Replaces the 121-question VIA Character Strengths survey with 7 conversations. Surfaces all 24 strengths — from Creativity to Spirituality — through stories instead of scales.
Replaces intake forms with guided dialogue. Looks for grounding, vision, obstacles, and commitment — asks where you are and what you're ready to change.
A playful philosophical demo. Looks for virtues, their excess, and their deficiency — find the mean between too much and too little.
Span → Trace → Thread. Data stays stable; behavior is configured in YAML.
Schema, Attunement, and Abilities define ontology, retrieval, and outcomes — without hardcoding app semantics.
When confirmed signals meet an ability's requirements, Sourced resolves parameters and hands them to your app. You define what happens next.
Targets with type: "compose" trigger synthesis instead of extraction. When enough evidence accumulates, the Fractal Weave engine composes an artifact — a portrait, match summary, or cohort theme — from the participant's own words. The same atomic operation powers all three scopes: self (N=1), dyad (N=2), and cohort (N=All).
Every trace gets a signal classifying what type of human signal it carries. The taxonomy uses verb predicates — database queries read like sentences.
Rules and intent from worldview config, loaded before conversation starts and applied as prompt policy plus evidence strictness.
The system's evolving belief about the participant; multiple signals per turn (MICE).
How honest the system is about its own uncertainty.
| | Confident | Uncertain |
|---|---|---|
| Held | Reflect | Wait |
| Seeking | Offer | Be honest |
| Constraint | Unblock | Be honest |
Tension is relational (two held signals colliding), not a row.
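The matrix above is small enough to encode directly. A sketch (row and cell labels are transcribed from the table; representing it as a dict lookup, and the 0.75 cutoff, are our illustrative choices):

```python
# (signal type, certainty) -> response posture, from the matrix above.
POLICY = {
    ("held",       "confident"): "reflect",
    ("held",       "uncertain"): "wait",
    ("seeking",    "confident"): "offer",
    ("seeking",    "uncertain"): "be honest",
    ("constraint", "confident"): "unblock",
    ("constraint", "uncertain"): "be honest",
}

def respond(state, confidence, threshold=0.75):
    """Pick a response posture; the threshold is an illustrative cutoff."""
    certainty = "confident" if confidence >= threshold else "uncertain"
    return POLICY[(state, certainty)]

print(respond("held", 0.9))     # a confident held signal gets reflected
print(respond("seeking", 0.4))  # low confidence: admit uncertainty
```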
MICE, not MECE: Human experience overlaps. One sentence triggers multiple dimensions — reaching_for + stuck_on + torn_between simultaneously. The taxonomy is Mutually Inclusive. MECE discipline applies to the processing pipeline (Constitution → Interpret → Update → Gate → Speak), not the perception layer.
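Concretely, mutually inclusive means one utterance can yield several overlapping traces. A sketch (the three signals are the ones named above; the utterance, targets, and scores are illustrative, not real API output):

```python
utterance = ("I want to ship this, but I'm blocked on auth "
             "and unsure it's the right product.")

# One sentence, three overlapping signals — MICE, not MECE.
traces = [
    {"signal": "reaching_for", "target": "tom.person.direction.goals",   "confidence": 0.8},
    {"signal": "stuck_on",     "target": "tom.person.obstacles.current", "confidence": 0.7},
    {"signal": "torn_between", "target": "tom.person.tensions.product",  "confidence": 0.6},
]

signals = {t["signal"] for t in traces}
print(f"{len(traces)} traces from one utterance: {sorted(signals)}")
```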
Observation is controlled by two runtime knobs: retrieval (`match_mode`) and interpretation (`method_mode`).
- Keyword/semantic retrieval plus an LLM interpretation pass. Richest candidates.
- Always run retrieval; invoke the LLM only when confidence clears the gate.
- No runtime LLM call; trace candidates come directly from matched evidence.
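The gated mode can be sketched as a two-stage pipeline: cheap retrieval always runs, and the LLM pass fires only when retrieval confidence clears a gate. Everything below is illustrative — the stub functions and the 0.6 gate are our assumptions, not the engine's actual internals:

```python
def retrieve(text):
    """Stub for keyword/semantic retrieval; returns (candidates, confidence)."""
    candidates = [{"target": "tom.person.values.helping", "evidence": text}]
    confidence = 0.7 if "help" in text else 0.3
    return candidates, confidence

def llm_interpret(candidates):
    """Stub for the LLM interpretation pass over retrieved candidates."""
    return [dict(c, signal="cares_about") for c in candidates]

def observe_gated(text, gate=0.6):
    """Always retrieve; only invoke the LLM when confidence clears the gate."""
    candidates, confidence = retrieve(text)
    if confidence >= gate:
        return llm_interpret(candidates)  # rich interpretation
    return candidates                     # cheap path: raw evidence only

print(observe_gated("I love helping people learn"))
print(observe_gated("The weather is fine"))
```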
A candidate that the user confirms is just as reliable as a rich AI-synthesized candidate. The user's confirmation is what makes claims true.
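Traces in the insights payload carry a `status` field (`"pending"`). One way an app might apply this confirmation principle is to flip that status when the user confirms; the helper below is our sketch, not a documented endpoint:

```python
def confirm_trace(trace):
    """Mark a pending trace as user-confirmed (returns a new dict)."""
    return {**trace, "status": "confirmed"}

trace = {
    "target": "tom.person.values.helping",
    "signal": "cares_about",
    "confidence": 0.87,
    "status": "pending",
}

confirmed = confirm_trace(trace)
print(confirmed["status"])
```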
- Chat (`/chat`): participant-facing guided response.
- Validate and inspect issues for source collisions.
- Use the `/insights` endpoint for a cross-subject view.
- Start from `tom_base` and add specialized targets as needed.