Akashik Protocol

Quickstart

Get two agents sharing memory in under five minutes.

Install

npm install @akashikprotocol/core
@akashikprotocol/core is currently in active development. The API below reflects the spec and will be available in the 0.1.0 release. Track progress on GitHub.

Level 0 - Two agents, shared memory

import { Field } from '@akashikprotocol/core'

const field = new Field()

// Register agents with roles
const researcher = await field.register({
  id: 'researcher-01',
  role: 'researcher',
  interests: ['market size', 'growth trends']
})

const strategist = await field.register({
  id: 'strategist-01',
  role: 'strategist'
})

// Researcher records a finding — intent is required
await researcher.record({
  type: 'finding',
  content: 'European SaaS market for SMB HR tools is growing at 23% CAGR, expected to reach $4.2B by 2027.',
  intent: {
    purpose: 'Validate market size assumption for go-to-market strategy',
    question: 'Is the European HR SaaS market large enough to justify a dedicated go-to-market?'
  }
})

// Strategist attunes — receives relevant context automatically
// No query. The Field figures out what the strategist needs.
const context = await strategist.attune({ max_units: 10 })

console.log(context.record[0].memory_unit.content)
// → 'European SaaS market for SMB HR tools...'

console.log(context.record[0].relevance_score)
// → 0.85

console.log(context.record[0].relevance_reason)
// → 'Recent finding from researcher. Market sizing is relevant to strategy.'

That's it for Level 0. Two registered agents, one RECORD, one ATTUNE. The Field handles scoring and delivery.
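To make "handles scoring" concrete, here is a toy sketch of relevance scoring that overlaps an agent's declared interests with a memory unit's content. This is illustrative only — `scoreRelevance` is not a protocol API, and a real Field would likely weigh embeddings, recency, and role affinity rather than keyword overlap.

```javascript
// Illustrative only — NOT part of the protocol or the package API.
// Scores a memory unit against an agent's declared interests by checking
// whether any word of each interest appears in the unit's content.
function scoreRelevance(interests, content) {
  const text = content.toLowerCase()
  const hits = interests.filter((interest) =>
    interest.toLowerCase().split(' ').some((word) => text.includes(word))
  )
  // Fraction of interests touched by this unit
  return interests.length ? hits.length / interests.length : 0
}

const score = scoreRelevance(
  ['market size', 'growth trends'],
  'European SaaS market for SMB HR tools is growing at 23% CAGR.'
)
// → 0.5 ('market size' matches on 'market'; 'growth trends' does not)
```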

Adding confidence (Level 1)

At Level 1, committed records require a confidence score and reasoning:

await researcher.record({
  mode: 'committed',
  type: 'finding',
  content: 'European SaaS market for SMB HR tools is growing at 23% CAGR.',
  intent: {
    purpose: 'Validate market size for go-to-market strategy'
  },
  confidence: {
    score: 0.82,
    reasoning: 'Based on three independent analyst reports with consistent estimates.',
    evidence: ['https://example.com/report-a', 'https://example.com/report-b'],
    assumptions: ['EU AI Act enforcement begins Q3 2026']
  }
})

This creates a transparent, auditable chain: any agent can see not just what was found, but how certain the finding is and why.
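As an illustration of what that audit might rest on, here is a sketch of the shape check a Field implementation could run before accepting a committed record. The function name and error messages are assumptions, not part of the spec; only the field names (`score`, `reasoning`) come from the example above.

```javascript
// Illustrative only — a hypothetical validation pass, not a package API.
// Committed records at Level 1 require a confidence score and reasoning.
function validateConfidence(confidence) {
  const errors = []
  if (typeof confidence?.score !== 'number' ||
      confidence.score < 0 || confidence.score > 1) {
    errors.push('score must be a number in [0, 1]')
  }
  if (!confidence?.reasoning) {
    errors.push('reasoning is required for committed records')
  }
  return errors
}

validateConfidence({ score: 0.82, reasoning: 'Three independent reports.' })
// → []

validateConfidence({ score: 1.4 })
// → ['score must be a number in [0, 1]', 'reasoning is required for committed records']
```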

Recording a contradiction

When a second researcher finds conflicting data, they declare it:

await researcher2.record({
  mode: 'committed',
  type: 'finding',
  content: 'Market growth decelerating to 14% CAGR based on Q4 2026 earnings reports.',
  intent: {
    purpose: 'Update market growth projection with latest data'
  },
  confidence: {
    score: 0.75,
    reasoning: 'Based on a single Q4 earnings report. May not represent the full market.'
  },
  relations: [{
    type: 'contradicts',
    target_id: 'mem-001',
    description: 'Original estimate was 23% CAGR; new data suggests 14%'
  }]
})
// → The Field automatically creates a Conflict object

The strategist's next ATTUNE will include the unresolved conflict. Nothing gets silently overwritten.
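A sketch of how a Field might materialize that Conflict object from a `contradicts` relation. The object shape and field names here are assumptions for illustration, not spec:

```javascript
// Illustrative only — how a Field could derive Conflict objects from a
// record's declared relations. Shapes are assumptions, not the spec.
function conflictsFrom(record) {
  return (record.relations ?? [])
    .filter((r) => r.type === 'contradicts')
    .map((r) => ({
      status: 'unresolved',
      memory_ids: [record.id, r.target_id],
      description: r.description
    }))
}

const conflicts = conflictsFrom({
  id: 'mem-002',
  relations: [{
    type: 'contradicts',
    target_id: 'mem-001',
    description: 'Original estimate was 23% CAGR; new data suggests 14%'
  }]
})
// → one unresolved conflict linking mem-002 and mem-001
```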

Resolving the conflict

Any agent can resolve a declared conflict with MERGE, naming a strategy and recording the rationale:

await strategist.merge({
  conflict_id: 'conflict-001',
  strategy: 'confidence_weighted',
  resolution: {
    winner_id: 'mem-001',
    rationale: 'mem-001 confidence 0.82 from 3 sources vs 0.75 from single source.'
  }
})
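The `confidence_weighted` strategy above could reduce to a comparison like the following sketch. The function, the recency tie-break, and the record shapes are assumptions for illustration, not the package's implementation:

```javascript
// Illustrative only — a hypothetical confidence_weighted resolution:
// prefer the record with the higher confidence score, breaking ties
// toward the more recently created record.
function confidenceWeighted(a, b) {
  if (a.confidence.score === b.confidence.score) {
    return a.created_at > b.created_at ? a : b
  }
  return a.confidence.score > b.confidence.score ? a : b
}

const winner = confidenceWeighted(
  { id: 'mem-001', confidence: { score: 0.82 }, created_at: 1 },
  { id: 'mem-002', confidence: { score: 0.75 }, created_at: 2 }
)
// → winner.id === 'mem-001'
```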

Subscribing to updates (Level 1+)

Poll for new context since your last ATTUNE using since_epoch:

let lastEpoch = 0

// Poll every few seconds; only units recorded after lastEpoch come back
setInterval(async () => {
  const context = await strategist.attune({
    max_units: 10,
    since_epoch: lastEpoch
  })

  lastEpoch = context.epoch // store for next poll
  // ...handle context.record
}, 5000)

At Level 2+, use push subscriptions via WebSocket or SSE instead.
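A sketch of what consuming those push events could look like. The subscribe endpoint and the event payload shape (`epoch`, `memory_units`) are assumptions, not spec; the handler itself is a plain function so the same logic works for WebSocket or SSE transports.

```javascript
// Illustrative only — applying a pushed Field event to local state.
// Payload shape ({ epoch, memory_units }) is an assumption, not spec.
function handleFieldEvent(data, state) {
  const event = JSON.parse(data)
  // Ignore stale or replayed events from before our last known epoch
  if (event.epoch > state.lastEpoch) {
    state.lastEpoch = event.epoch
    state.units.push(...event.memory_units)
  }
  return state
}

// Wiring it to SSE in a browser (endpoint URL is hypothetical):
// const source = new EventSource('https://field.example.com/subscribe')
// source.onmessage = (e) => handleFieldEvent(e.data, state)
```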

Next

The Field

The shared memory space all agents operate within.

Memory Units

The atomic unit of knowledge in the protocol.

Attunement

How agents receive relevant context without querying.

Conformance Levels

Adopt as little or as much as you need.