For builders

Extract typed signals from text.

Three API calls to try it. Observe text, review what landed, fetch insights.


Quickstart

Send text, get back typed signals with confidence scores and evidence. Then get a report of what landed against the schema.

1. Observe text

curl -X POST https://sourced-workspace--sourced-api-fastapi-app.modal.run/v1/observe \
  -H "Content-Type: application/json" \
  -H "X-Sourced-App: <your-app-id>" \
  -H "Authorization: Bearer <your-app-secret>" \
  -d '{
    "subject_id": "demo-user",
    "text": "I care about building things that help people learn.",
    "source_id": "turn_1"
  }'

What you get back

{
  "traces": [
    {
      "target": "tom.person.values.helping",
      "signal": "cares_about",
      "confidence": 0.87,
      "evidence": "I care about building things that help people learn.",
      "method_id": "hybrid"
    }
  ],
  "surprises": [],
  "skipped": 0,
  "mode": "facilitation"
}

Each trace has a target (the schema path that matched), a signal (a predicate like cares_about or stuck_on), a confidence score, and the exact evidence that triggered it.
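Downstream, traces are plain JSON and easy to post-process. A minimal sketch, assuming only the trace fields shown above (the 0.8 cutoff and the second trace are arbitrary illustrations, not real model output):

```python
# Keep only high-confidence traces from an observe response.
# Trace fields (target, signal, confidence, evidence, method_id) are the
# ones shown above; the 0.8 threshold is an arbitrary illustration.
def high_confidence(traces, threshold=0.8):
    return [t for t in traces if t["confidence"] >= threshold]

response = {
    "traces": [
        {"target": "tom.person.values.helping", "signal": "cares_about",
         "confidence": 0.87,
         "evidence": "I care about building things that help people learn.",
         "method_id": "hybrid"},
        {"target": "tom.person.energy.flow", "signal": "alive_in",
         "confidence": 0.41, "evidence": "...", "method_id": "keyword"},
    ]
}

strong = high_confidence(response["traces"])
print([t["target"] for t in strong])
```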

2. Report — dimensions scored

curl "https://sourced-workspace--sourced-api-fastapi-app.modal.run/v1/stacks/<your-stack-id>/report/demo-user" \
  -H "X-Sourced-App: <your-app-id>" \
  -H "Authorization: Bearer <your-app-secret>"

What you get back

{
  "status": "ok",
  "subject_id": "demo-user",
  "matches": [
    {
      "target": "tom.person.values.helping",
      "signal": "cares_about",
      "trace_count": 2,
      "top_confidence": 0.87
    }
  ],
  "gaps": [
    {
      "target": "tom.person.direction.practices",
      "description": "Active commitments and daily practices"
    }
  ],
  "surprises": [
    {
      "target": "tom.person.energy.flow",
      "signal": "alive_in",
      "confidence": 0.72,
      "evidence": "Time disappears when I'm making things."
    }
  ],
  "posture_distribution": {
    "receiving": 2,
    "moving": 0,
    "participating": 1
  },
  "target_coverage": 0.33,
  "total_traces": 3
}

Matches are schema targets with traces. Gaps are targets the person hasn't addressed yet. Surprises are traces that landed outside your schema — things you didn't ask for but the person expressed. The posture_distribution shows the balance of receiving, moving, and participating signals.
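Client-side, the report collapses into a one-screen summary. A sketch using only the fields from the example response above:

```python
# Summarize a /v1/stacks/:id/report/:subject response.
# Field names (matches, gaps, surprises, target_coverage) are taken
# from the example response above.
def summarize_report(report):
    lines = [f"coverage: {report['target_coverage']:.0%}"]
    for m in report["matches"]:
        lines.append(f"match: {m['target']} ({m['signal']}, x{m['trace_count']})")
    for g in report["gaps"]:
        lines.append(f"gap: {g['target']}")
    for s in report["surprises"]:
        lines.append(f"surprise: {s['target']} ({s['signal']})")
    return "\n".join(lines)

report = {
    "target_coverage": 0.33,
    "matches": [{"target": "tom.person.values.helping", "signal": "cares_about",
                 "trace_count": 2, "top_confidence": 0.87}],
    "gaps": [{"target": "tom.person.direction.practices",
              "description": "Active commitments and daily practices"}],
    "surprises": [{"target": "tom.person.energy.flow", "signal": "alive_in",
                   "confidence": 0.72,
                   "evidence": "Time disappears when I'm making things."}],
}
print(summarize_report(report))
```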

3. Fetch insights

curl "https://sourced-workspace--sourced-api-fastapi-app.modal.run/v1/stacks/<your-stack-id>/insights?subject_ids=demo-user" \
  -H "X-Sourced-App: <your-app-id>" \
  -H "Authorization: Bearer <your-app-secret>"

Insights response

{
  "spans": [...],
  "traces": [
    {
      "target": "tom.person.values.helping",
      "signal": "cares_about",
      "confidence": 0.87,
      "evidence": "I care about building things that help people learn.",
      "status": "pending"
    }
  ],
  "threads": [...],
  "connections": []
}

What just happened

Observe matched your text against the stack's schema and extracted typed signals — each trace has a target, a verb-style signal, confidence, and evidence. Report shows every dimension scored — what landed (matches), what's missing (gaps), and what surprised you (traces outside your schema). Insights aggregates all traces into spans, threads, and connections.

Full Python example

A complete script that observes text and fetches insights. Copy it, fill in your credentials, and run.

sourced_quickstart.py

#!/usr/bin/env python3
"""Sourced quickstart — observe, report, fetch insights."""
import requests

BASE = "https://sourced-workspace--sourced-api-fastapi-app.modal.run"
APP_ID = "<your-app-id>"
APP_SECRET = "<your-app-secret>"
STACK_ID = "<your-stack-id>"

headers = {
    "X-Sourced-App": APP_ID,
    "Authorization": f"Bearer {APP_SECRET}",
    "Content-Type": "application/json",
}

# 1. Observe some text
resp = requests.post(f"{BASE}/v1/observe", headers=headers, json={
    "subject_id": "demo-user",
    "text": "I feel energized when I build products that help people.",
    "source_id": "turn_1",
})
print("Observe:", resp.status_code, resp.json())

# 2. Report — dimensions scored against the author's frame
resp = requests.get(
    f"{BASE}/v1/stacks/{STACK_ID}/report/demo-user",
    headers=headers,
)
result = resp.json()
print(f"Matches: {len(result['matches'])}, Gaps: {len(result['gaps'])}, Surprises: {len(result['surprises'])}")
print(f"Coverage: {result['target_coverage']:.0%}")

# 3. Fetch insights (spans, threads, connections)
resp = requests.get(
    f"{BASE}/v1/stacks/{STACK_ID}/insights",
    headers=headers,
    params={"subject_ids": "demo-user"},
)
print("Insights:", resp.status_code, resp.json())

Try it live

Interactive API Documentation

Every endpoint is documented with request/response schemas and a live “Try it out” button. No setup required — test directly from your browser.

Key Endpoints

  • Observe (POST /v1/observe): extract signals from text; returns traces with target, signal, confidence, and evidence.
  • Stack Observe (POST /v1/stacks/:id/observe): observe within a stack context, using the stack's schema and targets.
  • Report (GET /v1/stacks/:id/report/:subject): matches, gaps, surprises, posture distribution, coverage.
  • Profile (GET /v1/stacks/:id/profile/:subject): full person model; traces, patterns, and coverage across all sessions.
  • Session Summary (GET /v1/stacks/:id/session-summary/:subject): per-session breakdown of what landed in this conversation.
  • Chat (POST /v1/chat/:stack/:subject): SSE streaming conversation; sends turns, streams facilitated responses.
  • Synthesize (POST /v1/stacks/:id/synthesize/:subject): generates a narrative (portrait, patterns, or summary) from traces.
  • Match (GET /v1/stacks/:id/matches): cross-subject connections; shared dreams, complements, tensions.
  • Insights (GET /v1/stacks/:id/insights): aggregated view of spans, threads, and connections across subjects.

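Of these, only Chat streams. A sketch of consuming it as server-sent events: the requests usage mirrors the quickstart script, while the {"text": ...} turn body and the "data: " event framing are assumptions to verify against the interactive docs.

```python
import requests

def parse_sse_data(line):
    """Return the payload of an SSE 'data: ' line, or None for any other line."""
    prefix = "data: "
    return line[len(prefix):] if line.startswith(prefix) else None

def stream_chat(base, stack_id, subject_id, headers, text):
    """Yield streamed chunks from POST /v1/chat/:stack/:subject.

    The {"text": ...} body shape is an assumption, not shown in the docs above.
    """
    url = f"{base}/v1/chat/{stack_id}/{subject_id}"
    with requests.post(url, headers=headers, json={"text": text},
                       stream=True) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            payload = parse_sse_data(raw or "")
            if payload is not None:
                yield payload
```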
When you're ready to build

Create API Credentials

Sign in to create API credentials and start integrating.

Reference
Architecture

Stacks define what your AI listens for

Grant Application

Replaces grant forms with conversation. Looks for the problem, the impact, and your specific role — listens for evidence of accountability.

Values in Action

Replaces the 121-question VIA Character Strengths survey with 7 conversations. Surfaces all 24 strengths — from Creativity to Spirituality — through stories instead of scales.

Coaching Onboarding

Replaces intake forms with guided dialogue. Looks for grounding, vision, obstacles, and commitment — asks where you are and what you're ready to change.

Aristotle's Golden Mean Demo

A playful philosophical demo. Looks for virtues, their excess, and their deficiency — find the mean between too much and too little.

Developer model

Keep objects simple, keep behavior declarative

Core primitives

Span → Trace → Thread. Data stays stable; behavior is configured in YAML.

Declarative YAML

Schema, Attunement, and Abilities define ontology, retrieval, and outcomes — without hardcoding app semantics.

Capability-driven outcomes

When confirmed signals meet an ability's requirements, Sourced resolves parameters and hands them to your app. You define what happens next.

Compose Targets

Targets with type: "compose" trigger synthesis instead of extraction. When enough evidence accumulates, the Fractal Weave engine composes an artifact — a portrait, match summary, or cohort theme — from the participant's own words. The same atomic operation powers all three scopes: self (N=1), dyad (N=2), and cohort (N=All).

Signal taxonomy

Four questions + one meta-signal. MICE, not MECE.

Every trace gets a signal classifying what type of human signal it carries. The taxonomy uses verb predicates — database queries read like sentences.

Each dimension pairs a held signal (settled) with a seeking signal (in motion):

  • Values (axiology)
    Held · cares_about: Clear values, deep commitments. “I care deeply about honesty”
    Seeking · torn_between: Two goods colliding. “I want freedom but need stability”
  • Knowledge (epistemology)
    Held · knows: Stable skills, firm beliefs. “I’m good at systems thinking”
    Seeking · wondering: Testing ideas, uncertain. “Maybe I’d thrive in a smaller team”
  • Direction (teleology)
    Held · working_on: Active practices, doing. “I started writing every morning”
    Seeking · reaching_for: Aspirations, not yet started. “I want to build something meaningful”
  • Energy (phenomenology)
    Held · alive_in: Flow, vitality, joy. “Time disappears when I’m making things”
    Seeking · stuck_on: Blocked, depleted, stuck. “I can’t take that risk right now”
  • Sensemaking (hermeneutics, META)
    Held · means: Settled stories, made peace. “That failure taught me resilience”
    Seeking · remaking: Story is shifting. “I’m starting to see it differently”
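Because signals are verb predicates, grouping traces by dimension is a plain lookup. A sketch, with the signal-to-dimension map transcribed from the taxonomy above and illustrative trace data:

```python
# Map each verb-predicate signal to its taxonomy dimension (transcribed
# from the taxonomy above).
DIMENSION = {
    "cares_about": "values", "torn_between": "values",
    "knows": "knowledge", "wondering": "knowledge",
    "working_on": "direction", "reaching_for": "direction",
    "alive_in": "energy", "stuck_on": "energy",
    "means": "sensemaking", "remaking": "sensemaking",
}

def by_dimension(traces):
    groups = {}
    for t in traces:
        groups.setdefault(DIMENSION[t["signal"]], []).append(t)
    return groups

traces = [
    {"signal": "cares_about", "evidence": "I care deeply about honesty"},
    {"signal": "alive_in", "evidence": "Time disappears when I'm making things"},
    {"signal": "stuck_on", "evidence": "I can't take that risk right now"},
]
groups = by_dimension(traces)
print({dim: len(ts) for dim, ts in groups.items()})
```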

Three ToM Layers

tom.author
The Facilitator’s Philosophy

Rules and intent from worldview config, loaded before conversation starts and applied as prompt policy plus evidence strictness.

tom.person
The State Vector

The system’s evolving belief about the participant, with multiple signals per turn (MICE).

tom.system
System Confidence

How honest the system is about its own uncertainty.

Platform Gate

Rows are the person's state; columns are system confidence:

  • Held: Confident → Reflect; Uncertain → Wait
  • Seeking: Confident → Offer; Uncertain → Be honest
  • Constraint: Confident → Unblock; Uncertain → Be honest

Tension is relational (two held signals colliding), not a row.

MICE, not MECE: Human experience overlaps. One sentence triggers multiple dimensions — reaching_for + stuck_on + torn_between simultaneously. The taxonomy is Mutually Inclusive. MECE discipline applies to the processing pipeline (Constitution → Interpret → Update → Gate → Speak), not the perception layer.
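Mutual inclusivity shows up directly in the data: one source span can yield several traces. A sketch in data form (the sentence and confidence values are illustrative, not real model output):

```python
# One sentence, three traces -- the MICE property in data form.
# Confidences here are illustrative, not real model output.
sentence = ("I want to build something meaningful, "
            "but I can't take that risk right now.")
traces = [
    {"signal": "reaching_for", "confidence": 0.8, "evidence": sentence},
    {"signal": "stuck_on", "confidence": 0.7, "evidence": sentence},
    {"signal": "torn_between", "confidence": 0.6, "evidence": sentence},
]

# All three predicates fire on the same evidence span:
signals = {t["signal"] for t in traces if t["evidence"] == sentence}
print(sorted(signals))
```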

How it observes

Observation is a dial, not a switch

Observation is controlled by two runtime knobs: retrieval (match_mode) and interpretation (method_mode).

  • Both: keyword/semantic retrieval plus an LLM interpretation pass. Richest candidates; standard cost.
  • Smart: always run retrieval; invoke the LLM only when confidence clears the gate. Best balance.
  • Keyword: no runtime LLM call; produce trace candidates directly from matched evidence. Lowest cost.
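The Smart mode's gate can be modeled in a few lines. A sketch only: the lowercase mode strings and the 0.5 gate value are illustrative assumptions, and the real gate lives server-side.

```python
def should_invoke_llm(method_mode, retrieval_confidence, gate=0.5):
    """Decide whether the LLM interpretation pass runs for a candidate.

    Mirrors the three modes above; mode strings and the 0.5 gate
    are illustrative assumptions.
    """
    if method_mode == "both":      # always interpret
        return True
    if method_mode == "keyword":   # never call the LLM at runtime
        return False
    # "smart": only when retrieval confidence clears the gate
    return retrieval_confidence >= gate

print(should_invoke_llm("smart", 0.7))  # clears the gate
print(should_invoke_llm("smart", 0.3))  # stays keyword-only
```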

Consent loop

Reliability lives in the loop

A cheap keyword-matched candidate the user confirms is just as reliable as a rich AI-synthesized one. The user's confirmation is what makes a claim true.

  1. Observe: the system notices a signal.
  2. Confirm: the user says "Save".
  3. Grow: the map updates.
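The loop can be modeled as a status transition on a trace. A sketch: the insights response above shows status "pending", while the "confirmed" value is an assumption.

```python
def apply_consent(trace, user_confirmed):
    """Observe -> Confirm -> Grow: only confirmation promotes a trace.

    "pending" appears in the insights response above; the "confirmed"
    status value is an assumption.
    """
    if trace["status"] == "pending" and user_confirmed:
        return {**trace, "status": "confirmed"}
    return trace

trace = {"target": "tom.person.values.helping", "signal": "cares_about",
         "status": "pending"}
saved = apply_consent(trace, user_confirmed=True)
print(saved["status"])
```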

Standard Pages

  • /studio: design programs, send invites, review signals.
  • Conversational Sessions (/chat): participant-facing guided response.

Common Friction

  • Returns 200 but no traces: run validate and inspect issues for source collisions.
  • Need stack-level rollups: use the /insights endpoint for a cross-subject view.
  • Preset is too heavy: start with tom_base and add specialized targets as needed.