Akashik Protocol
Concepts

Attunement

How agents receive relevant context from the Field - without querying.

Search is reactive. You know what you want, you ask for it, you get it.

Attunement is different. An agent declares its identity and current focus — its role, active task, and context budget. The Field evaluates all MemoryUnits against that declaration and returns the most relevant context, ranked by relevance score, within the budget.

The agent does not specify what it wants. The Field determines what it needs.

How it works

const context = await strategist.attune({
  scope: {
    role: 'strategist',
    max_units: 10
  },
  context_hint: 'About to draft competitive positioning section'
})

// context.record[0].relevance_score → 0.85
// context.record[0].relevance_reason → 'Recent finding from researcher. Market sizing relevant to strategy.'
// context.record[0].memory_unit.intent.purpose → 'Validate market size for go-to-market strategy'

Every returned unit includes a relevance_score and a human-readable relevance_reason: not just what was surfaced, but why.
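Scores and reasons can drive how the agent assembles its own prompt. A minimal sketch, assuming the response shape shown above; buildPrompt is a hypothetical helper, not part of the protocol:

```javascript
// Hypothetical helper: keep only records above a score threshold and
// render score + reason lines for the agent's prompt. Assumes the
// response shape shown above (record[], relevance_score, relevance_reason).
function buildPrompt(context, minScore = 0.5) {
  return context.record
    .filter((r) => r.relevance_score >= minScore)
    .map((r) => `[${r.relevance_score.toFixed(2)}] ${r.relevance_reason}`)
    .join('\n')
}

// Mock response for illustration
const context = {
  record: [
    { relevance_score: 0.85, relevance_reason: 'Recent finding from researcher.' },
    { relevance_score: 0.3, relevance_reason: 'Stale observation, weak role match.' },
  ],
}

buildPrompt(context)
// → '[0.85] Recent finding from researcher.'
```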

What the Field considers

The protocol does not prescribe a specific scoring algorithm, but defines the required inputs:

Factor                  Levels  Description
Role alignment          0+      Does this unit's content match the agent's declared role?
Type importance         0+      Decisions and contradictions rank above observations.
Recency                 0+      More recent units score higher.
Source diversity        0+      Results should span multiple agents where possible.
since_epoch filtering   1+      Only return units committed after a given epoch.
Semantic similarity     2+      Vector embeddings measure content relevance.
Task relevance          2+      Units serving the same task rank higher.
Relation proximity      2+      Units connected via relations to the agent's context rank higher.
Confidence weighting    2+      Higher-confidence units score higher.
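Since the protocol leaves the algorithm to implementations, a minimal Level 0 scorer over the baseline factors might look like the sketch below. The weights, the TYPE_RANK table, and the audience_roles field on a unit are illustrative assumptions, not part of the protocol:

```javascript
// A possible Level 0 scorer over the baseline factors above.
// Weights, TYPE_RANK, and the audience_roles field are illustrative
// assumptions -- the protocol does not prescribe any of them.
const TYPE_RANK = { decision: 1.0, contradiction: 0.9, observation: 0.5 }

function scoreUnit(unit, scope, currentEpoch) {
  // Role alignment: does the unit target the agent's declared role?
  const roleMatch = unit.audience_roles.includes(scope.role) ? 1 : 0
  // Type importance: decisions and contradictions outrank observations
  const typeWeight = TYPE_RANK[unit.type] ?? 0.5
  // Recency: linear decay over the last 100 epochs
  const recency = Math.max(0, 1 - (currentEpoch - unit.epoch) / 100)
  return 0.4 * roleMatch + 0.3 * typeWeight + 0.3 * recency
}

const unit = { type: 'decision', audience_roles: ['strategist'], epoch: 90 }
const score = scoreUnit(unit, { role: 'strategist' }, 100)
// 0.4*1 + 0.3*1.0 + 0.3*0.9 = 0.97
```

Source diversity is a property of the result set rather than of any single unit, so it would apply as a post-ranking re-ordering step rather than inside a per-unit score.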

Scope

The scope object is the agent's declaration of relevance context. At minimum, role and max_units are required.

// Level 0 - minimal scope
await agent.attune({
  scope: { role: 'analyst', max_units: 10 }
})

// Level 2+ - extended scope with richer signals
await agent.attune({
  scope: {
    role: 'analyst',
    max_units: 20,
    max_tokens: 8000,
    interests: ['competitive landscape', 'market sizing'],
    active_task_id: 'task-segment-b-analysis',
    relevance_threshold: 0.6,
    recency_weight: 0.7
  },
  context_hint: 'Preparing competitive analysis for Segment B'
})
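Because only role and max_units are guaranteed at every level, a client can check a scope before sending it. A sketch; validateScope is a hypothetical client-side helper:

```javascript
// Hypothetical client-side check: role and max_units are the only
// required scope fields; everything else is optional and level-dependent.
function validateScope(scope) {
  if (typeof scope.role !== 'string' || scope.role.length === 0) {
    throw new Error('scope.role is required')
  }
  if (!Number.isInteger(scope.max_units) || scope.max_units < 1) {
    throw new Error('scope.max_units must be a positive integer')
  }
  return scope
}

validateScope({ role: 'analyst', max_units: 10 }) // passes
```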

Context budgets

LLMs have finite context windows. Attunement respects this.

max_units is a hard cap on the number of units returned. max_tokens (Level 2+) is a token budget: the Field estimates token counts and includes either full content or summaries to stay within the limit.

await agent.attune({
  scope: {
    role: 'strategist',
    max_units: 50,
    max_tokens: 12000  // Field will truncate/summarise to fit
  }
})
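One way a Field implementation might honour max_tokens, using the rough four-characters-per-token heuristic. Both the heuristic and the summary fallback are implementation assumptions; the protocol only requires that the budget is respected:

```javascript
// Sketch of fitting ranked units into a token budget. The ~4 chars/token
// estimate and the summary fallback are implementation assumptions.
const estimateTokens = (text) => Math.ceil(text.length / 4)

function fitToBudget(rankedUnits, maxTokens) {
  const out = []
  let used = 0
  for (const u of rankedUnits) {
    const full = estimateTokens(u.content)
    if (used + full <= maxTokens) {
      out.push({ ...u, truncated: false })
      used += full
    } else {
      const brief = estimateTokens(u.summary)
      if (used + brief <= maxTokens) {
        // Fall back to the unit's summary to stay inside the budget
        out.push({ ...u, content: u.summary, truncated: true })
        used += brief
      }
      // Units that do not fit even as summaries are dropped
    }
  }
  return out
}

const ranked = [
  { content: 'x'.repeat(40), summary: 'xxxx' },      // ~10 tokens full
  { content: 'y'.repeat(100), summary: 'yyyyyyyy' }, // ~25 tokens full, ~2 as summary
]
const fitted = fitToBudget(ranked, 12)
// first unit included in full; second falls back to its summary
```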

Polling for updates

At Level 1+, pass since_epoch to receive only units committed since your last ATTUNE. This is the Level 1 subscription model:

let lastEpoch = 0

// On each loop
const context = await agent.attune({
  scope: { role: 'strategist', max_units: 10 },
  since_epoch: lastEpoch
})

lastEpoch = context.epoch
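Running the same pattern against a stand-in Field makes the epoch bookkeeping concrete. mockAttune here is a mock, not the real agent.attune():

```javascript
// Mock Field: three committed units, filtered by since_epoch the way
// a Level 1 Field would filter them. Stands in for agent.attune().
const committed = [
  { id: 'mu-1', epoch: 1 },
  { id: 'mu-2', epoch: 2 },
  { id: 'mu-3', epoch: 3 },
]

function mockAttune({ since_epoch }) {
  return {
    record: committed.filter((u) => u.epoch > since_epoch),
    epoch: 3, // the Field's current epoch
  }
}

let lastEpoch = 0
const first = mockAttune({ since_epoch: lastEpoch })  // all three units
lastEpoch = first.epoch
const second = mockAttune({ since_epoch: lastEpoch }) // nothing new
```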

Conflicts in attunement

Every ATTUNE response includes unresolved conflicts relevant to the requesting agent in a conflicts array. An agent never receives context silently contaminated by a contradiction — it knows when its context contains a disputed claim.

const context = await strategist.attune({ scope: { role: 'strategist', max_units: 10 } })

console.log(context.conflicts)
// → [{ id: 'conflict-001', type: 'factual', status: 'detected', ... }]
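An agent can cross-reference the conflicts array against its records before acting on them. A sketch; the memory_unit_ids field on a conflict is an assumed shape, not taken from the protocol:

```javascript
// Flag records whose memory unit is party to an unresolved conflict.
// memory_unit_ids on a conflict is an assumed field, not normative.
function flagDisputed(context) {
  const disputed = new Set(context.conflicts.flatMap((c) => c.memory_unit_ids))
  return context.record.map((r) => ({
    ...r,
    disputed: disputed.has(r.memory_unit.id),
  }))
}

const context = {
  record: [{ memory_unit: { id: 'mu-7' } }, { memory_unit: { id: 'mu-9' } }],
  conflicts: [
    { id: 'conflict-001', type: 'factual', status: 'detected', memory_unit_ids: ['mu-9'] },
  ],
}
const flagged = flagDisputed(context)
// mu-9 is marked disputed; mu-7 is clean
```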

Push subscriptions (Level 2+)

At Level 2+, agents can subscribe for push notifications instead of polling:

await agent.subscribe({
  events: ['memory.recorded', 'conflict.detected'],
  min_relevance: 0.7
})
// Field delivers notifications via WebSocket or SSE
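On the receiving side, the agent needs to route pushed events to handlers. A transport-agnostic sketch; the { event, payload } message shape is an assumption, since the protocol text above does not fix a wire format:

```javascript
// Transport-agnostic dispatcher for pushed Field events. Each raw message
// (one WebSocket frame or SSE data line) is assumed to be JSON of the
// shape { event, payload } -- the wire format is not fixed here.
function makeDispatcher(handlers) {
  return (raw) => {
    const evt = JSON.parse(raw)
    const handler = handlers[evt.event]
    if (handler) handler(evt.payload) // unknown events are ignored
  }
}

const seen = []
const dispatch = makeDispatcher({
  'memory.recorded': (p) => seen.push(p.id),
  'conflict.detected': (p) => seen.push(`conflict:${p.id}`),
})

dispatch('{"event":"memory.recorded","payload":{"id":"mu-42"}}')
dispatch('{"event":"conflict.detected","payload":{"id":"c-1"}}')
// seen → ['mu-42', 'conflict:c-1']
```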

Attunement vs vector search

Vector search retrieves semantically similar content. It has no concept of intent, temporal position, agent role, or context budgets.

Attunement retrieves contextually appropriate content. It knows who is asking, why they might need it, and how much context they can absorb. The result is not just retrieval — it is structured context delivery.