context-engine-ai

Give your AI agent a memory of what just happened.

$ npm install context-engine-ai
npm 0.2.3 · CI passing · MIT license · TypeScript 5.7 · Node ≥ 18
Ingest some events, then run a query.
Three methods. That's the whole API.
```typescript
import { ContextEngine } from 'context-engine-ai'

const ctx = new ContextEngine() // SQLite + local TF-IDF, zero config

await ctx.ingest({ type: 'app_switch', data: { app: 'VS Code', file: 'auth.ts' } })
await ctx.ingest({ type: 'message', data: { from: 'Alice', text: 'auth bug is back' } })
await ctx.ingest({ type: 'test', data: { result: '3 failed, 12 passed' } })

const result = await ctx.query('what needs attention?')
console.log(result.summary)
// "[message] from: Alice, text: auth bug is back | [test] result: 3 failed ..."

await ctx.close()
```
```typescript
import Anthropic from '@anthropic-ai/sdk'
import { ContextEngine } from 'context-engine-ai'

const ctx = new ContextEngine({ dbPath: './agent.db' })
const claude = new Anthropic()

// Events stream in from your app, sensors, webhooks...
await ctx.ingest({ type: 'error', data: { service: 'auth', error: 'TokenExpiredError', count: 47 } })
await ctx.ingest({ type: 'slack', data: { from: 'Sarah', text: 'staging is throwing 401s' } })
await ctx.ingest({ type: 'calendar', data: { event: 'Sprint review', in: '25min' } })

// Get relevant context for the agent
const context = await ctx.query('what needs attention?', 5)

const response = await claude.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  system: `You are a developer assistant. Current context:\n${context.summary}`,
  messages: [{ role: 'user', content: 'What should I focus on?' }],
})
// Claude sees the auth errors, Slack message, and upcoming sprint review
```
```typescript
import { ContextEngine } from 'context-engine-ai'

const ctx = new ContextEngine({ dbPath: './context.db' })
ctx.serve(3334) // REST API: POST /ingest, GET /context?q=..., GET /recent

// Or via the CLI, no code needed:
// npx context-engine-ai serve --port 3334
// npx context-engine-ai demo

// Ingest from any source:
// curl -X POST http://localhost:3334/ingest \
//   -H 'Content-Type: application/json' \
//   -d '{"type": "deploy", "data": {"service": "api", "env": "prod"}}'

// Query with natural language:
// curl "http://localhost:3334/context?q=recent+deployments"
```
```typescript
import { ContextEngine } from 'context-engine-ai'

// Scale to production: PostgreSQL + pgvector + OpenAI embeddings
const ctx = new ContextEngine({
  storage: 'postgres',
  pgConnectionString: process.env.DATABASE_URL,
  embeddingProvider: 'openai',
  openaiApiKey: process.env.OPENAI_API_KEY,
  maxEvents: 50000,
  decayHours: 72,
})

// Same API, just a config change.
await ctx.ingest({ type: 'event', data: { key: 'value' } })
const result = await ctx.query('query in natural language')
```
Why context-engine-ai
Zero config: SQLite + local TF-IDF out of the box. No API keys, no cloud, no database setup. Works in three lines of code.
⏱️ Temporal decay: Recent events rank higher. A 5-minute-old event scores about 8× higher than an identical one from 3 days ago.
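The exact ranking math isn't spelled out here, but an exponential half-life decay with a 24-hour half-life reproduces the 8× figure. A minimal sketch, where `decayWeight` and the half-life value are assumptions for illustration, not the library's internals:

```typescript
// Hypothetical decay weight: halves every `halfLifeHours`.
function decayWeight(ageHours: number, halfLifeHours = 24): number {
  return Math.pow(0.5, ageHours / halfLifeHours)
}

const fresh = decayWeight(5 / 60)  // ~5 minutes old → weight ≈ 1.0
const stale = decayWeight(3 * 24)  // 3 days old → weight = 0.125
console.log((fresh / stale).toFixed(1)) // ≈ 8
```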
🔁 Auto-deduplication: Switching between two apps 50 times creates 2 events, not 50, via a configurable cosine-similarity threshold.
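The dedup check is described only as a cosine threshold; the sketch below shows what such a check looks like on term-frequency vectors. The vectorization, helper names, and 0.9 threshold are assumptions, not the package's actual internals:

```typescript
// Bag-of-words term frequencies for an event's text.
function toVector(text: string): Map<string, number> {
  const vec = new Map<string, number>()
  for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec.set(term, (vec.get(term) ?? 0) + 1)
  }
  return vec
}

// Cosine similarity between two term-frequency vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0
  for (const [term, v] of a) {
    dot += v * (b.get(term) ?? 0)
    na += v * v
  }
  for (const v of b.values()) nb += v * v
  return na && nb ? dot / Math.sqrt(na * nb) : 0
}

// Two identical app-switch events score 1.0 → above threshold → merged.
const a = toVector('app_switch VS Code auth.ts')
const b = toVector('app_switch VS Code auth.ts')
const isDuplicate = cosine(a, b) >= 0.9 // true
```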
🔍 Semantic querying: Ask "any errors?" in plain English, with no SQL and no type filtering. Matching uses local TF-IDF or OpenAI embeddings.
📡 MCP server: Runs as a Model Context Protocol tool server, so you can plug it into Claude Desktop, Cursor, or Windsurf.
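Wiring an MCP server into Claude Desktop typically means adding an entry to `claude_desktop_config.json`. The subcommand name isn't documented in this page, so `mcp` below is a hypothetical placeholder; check the package's docs for the real invocation:

```json
{
  "mcpServers": {
    "context-engine": {
      "command": "npx",
      "args": ["context-engine-ai", "mcp"]
    }
  }
}
```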
🚀 ~0.1 ms latency: Ingest and query each run in roughly 0.1 ms locally on a ~20 MB heap. The engine is not the bottleneck in your agent loop.