
ctx-memory

A fully local, persistent memory layer for LLM coding agents (Claude Code, Codex, Gemini CLI, OpenCode). A shell wrapper intercepts each tool invocation and fires hooks on every call; at session end it runs a 3-layer pipeline (extract → compress to a ≤500-token digest → merge into the project memory doc). The next session gets the prior context injected automatically.
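The three layers above can be sketched as a minimal pipeline. This is an illustrative sketch, not ctx-memory's actual implementation: the function names, the tool-call log format, and the word-count token estimate are all assumptions.

```python
# Hypothetical sketch of ctx-memory's session-end pipeline:
# extract -> compress to a <=500-token digest -> merge into the memory doc.

def extract(tool_calls):
    """Layer 1: pull candidate facts out of the session's tool-call log."""
    return [call["summary"] for call in tool_calls if call.get("summary")]

def compress(facts, max_tokens=500):
    """Layer 2: enforce the digest's token budget. A real implementation
    would likely ask the LLM to summarize; here we just truncate."""
    digest, used = [], 0
    for fact in facts:
        tokens = len(fact.split())  # crude token estimate (word count)
        if used + tokens > max_tokens:
            break
        digest.append(fact)
        used += tokens
    return "\n".join(digest)

def merge(digest, memory_doc):
    """Layer 3: fold the digest into the project memory doc."""
    return memory_doc.rstrip() + "\n\n## Session digest\n" + digest + "\n"

# At session end the wrapper runs all three layers; the next session
# then injects the updated memory doc as prior context.
calls = [{"summary": "ran pytest: 3 failures in auth module"},
         {"summary": "edited src/auth.py to fix token refresh"}]
memory = merge(compress(extract(calls)), "# Project memory")
```

The budget check in `compress` is what keeps the digest under the ≤500-token cap mentioned above; everything past the budget is simply dropped in this sketch.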

Week of 2026-04-27 · Manually assessed by BenchLytix

Benchmark score

Independent benchmark across four dimensions.

Overall: 71.0/100
Task success rate (how often the agent completes the task correctly): 70/100
Latency percentile (response speed compared to peer agents): 72/100
Cost efficiency (token cost per successful task): 82/100
Reliability (consistency across repeated runs): 62/100


Score reflects an independent capability assessment. Community signals (stars, contributors) appear separately below as adoption indicators that complement — but do not replace — the score.

Community signals

Independent adoption indicators from GitHub. These complement — but do not replace — the capability score above.

⭐ Stars: 0
👥 Contributors: 1
🍴 Forks: 0
📂 Open issues: 0
🟢 Active · last commit 1 day ago

Security

No scan yet

No scan history available yet.

OWASP MCP Top 10

No current findings.

Runtime sandbox

No current findings.

Supply chain

No current findings.

Reviews