Principles

The protocol for transparent alignment

These ten principles govern the relationship between Sourced and the people it observes. They aren't aspirations. They're structural constraints — things the system literally cannot bypass.

01

Make the model visible

Every AI builds a model of you — your values, your thinking, your goals. Most hide it. Sourced shows you the model: what the system thinks it knows, where the evidence came from, and how confident it is.

Commitment: We will never build a model of you that you can't see.
02

Consent before commitment

The system observes, but it never decides what's true about you. No claim is saved without your acknowledgment. The system proposes; you commit.

Commitment: Nothing is locked in without your say-so.
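The propose-then-commit flow can be sketched in a few lines. This is an illustrative model, not Sourced's actual API; the names `ConsentLedger` and `Claim` are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str                  # what the system believes about you
    status: str = "proposed"   # stays "proposed" until the person commits

class ConsentLedger:
    """Claims enter as proposals; only the person can commit them."""

    def __init__(self):
        self.claims = []

    def propose(self, text):
        claim = Claim(text)
        self.claims.append(claim)
        return claim

    def commit(self, claim):
        claim.status = "committed"

    def reject(self, claim):
        self.claims.remove(claim)

    def committed(self):
        return [c.text for c in self.claims if c.status == "committed"]
```

The key design property: there is no code path that writes a committed claim without a person's action.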
03

Correction is conversation

"Not quite" is a first-class action. Correcting the system is how the shared map improves — it's expected, easy, and non-punitive.

Commitment: Adjusting what the system got wrong is always one step away.
04

Your map is yours

You can see it, edit it, export it, and delete it at any time. Your digital self-understanding belongs to you — even if you leave.

Commitment: Portability is a right, not a feature.
05

Show the evidence

If the system makes a claim about you, you can always see where it came from. Every inference traces back to your own words.

Commitment: No black boxes. Every claim has an inspectable origin.
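"Every claim has an inspectable origin" can be made structural rather than procedural: a claim object that simply cannot be constructed without evidence. A minimal sketch, with illustrative names:

```python
class TracedClaim:
    """A claim that cannot exist without pointers to its evidence."""

    def __init__(self, text, evidence):
        if not evidence:
            raise ValueError("a claim with no inspectable origin is not allowed")
        self.text = text
        self.evidence = list(evidence)  # verbatim quotes the claim rests on

claim = TracedClaim(
    "prefers working alone",
    evidence=["I do my best thinking in solitude"],
)
```

The constraint lives in the constructor: an evidence-free claim is not a policy violation, it is a type error.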
06

Declare intent

The system states what it's listening for and why. Designers must declare their purpose, their method, and what they'll ask — before the conversation begins.

Commitment: No program runs without declaring its goal.
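The declaration gate can be expressed as a precondition: a program that omits its goal, method, or questions is refused before it starts. A hedged sketch; the field names are assumptions, not Sourced's schema:

```python
def run_program(program):
    """Refuse to start any program whose intent is not fully declared."""
    required = ("goal", "method", "questions")
    missing = [k for k in required if not program.get(k)]
    if missing:
        raise PermissionError("undeclared intent: " + ", ".join(missing))
    return "running: " + program["goal"]
```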
07

Honest uncertainty

The system tells you what it's confident about and what it's guessing. Pretending to be certain when it's not is a violation of trust.

Commitment: We separate facts from hypotheses.
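Separating facts from hypotheses amounts to never presenting a low-confidence inference as settled. One minimal way to sketch it, assuming per-claim confidence scores (the 0.9 cutoff is illustrative, not a Sourced constant):

```python
def partition(claims, threshold=0.9):
    """Split claims into stated facts and labeled hypotheses.

    `claims` maps claim text to the system's confidence in [0, 1].
    """
    facts = {c: p for c, p in claims.items() if p >= threshold}
    hypotheses = {c: p for c, p in claims.items() if p < threshold}
    return facts, hypotheses
```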
08

Sacred is sovereign

Some things are too important to be scored. You decide what the system measures and what it preserves without analyzing — your exact words, untouched.

Commitment: The limit of measurement is yours to draw.
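A sacred flag is easy to state as a structural rule: anything marked sacred is preserved verbatim and excluded from every analysis path. An illustrative sketch, with assumed names:

```python
class Record:
    """Stores entries as (text, sacred) pairs."""

    def __init__(self):
        self.entries = []

    def add(self, text, sacred=False):
        self.entries.append((text, sacred))

    def analyzable(self):
        """Only non-sacred entries are ever scored or classified."""
        return [t for t, sacred in self.entries if not sacred]

    def preserved(self):
        """Sacred entries are kept exactly as spoken, untouched."""
        return [t for t, sacred in self.entries if sacred]
```

Because scoring code only ever sees `analyzable()`, the boundary is enforced by structure rather than by policy.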
09

Values are structure

These principles aren't aspirations in a document — they're constraints the system cannot bypass. Sourced doesn't make AI honest. It makes dishonesty visible.

Commitment: Our values are enforced by design, not by intention.
10

The map is not the territory

The system's model of you is always incomplete — and it knows that. Incompleteness is constitutional. What the system refuses to claim defines its character.

Commitment: We will never pretend the model is you.
Not just words

How values become structural

A values document only matters if it changes what the system can actually do. Sourced turns principles into constraints — things the system literally cannot bypass.

1
What designers must declare

A program can't run unless it states its goal, its method, and what it will ask. No hidden curriculum.

2
What people always see

Intent is always visible. Every claim has an inspectable origin. Correction is always one tap away.

3
What the AI cannot do

It can't use sensitive assumptions without checking. It can't override your thinking time. Hard limits, not guidelines.

Hard questions

What you're probably thinking

Fair objections and honest answers.

Isn't this just a psychological profile?
"Building a model of someone's values and thinking sounds like a privacy nightmare."

AI products already build these models — every recommendation engine, every memory feature, every personalization layer. The question isn't whether models should exist, but whether people should be able to see and correct them. With Sourced, you see everything, delete anything, and export the whole thing. That's not a privacy risk. It's a privacy solution.

People don't want to inspect their AI
"Most users want results, not transparency. This adds friction nobody asked for."

Most people won't look at the Map. That's fine. The point isn't that everyone inspects — it's that everyone can. The transparency exists whether you use it or not, like a nutrition label. And the correction data that flows from the people who do engage is some of the most valuable signal a product can get.

Can AI really be honest?
"You're talking about honesty and values, but it's a machine. It doesn't have values."

You're right — the machine has no values of its own. That's exactly why the values need to be structural. Not in the prompt. Not in the training data. In the protocol, where they're constraints that can be verified and enforced. Sourced doesn't make AI honest. It makes dishonesty visible.

By design

Incompleteness is constitutional

Sourced does not replace human judgment. It does not summarize what should be sat with. It does not score what the person declares sacred. It does not pretend certainty it doesn't have.

The system is designed to strengthen your thinking, not replace it. Like a bicycle for self-understanding: it multiplies your effort, but you still have to pedal.

The limit of measurement is a feature

Some things are too important to be scored. When something you say matters deeply but doesn't fit the system's categories, you can mark it sacred. The system preserves your exact words without analyzing, classifying, or using them for matching. What Sourced refuses to measure defines its character as much as what it does measure.

This is not a limitation. It is the product.

Read the full theory →

Built with intention

“Sourced exists because we believe the relationship between people and AI should be built on honesty, not extraction.”

When AI shows its work and people can correct what it gets wrong, trust becomes structural — not performative. That's what transparent alignment means: not a promise in a terms-of-service, but a protocol enforced by design.

Designed for depth. Optimized for humanity.