Visual browser for AI agent skills and memories with a built-in local AI model. Additive analysis builds on existing knowledge instead of starting from scratch. Fully air-gapped. No API keys. No cost.
Real skills and memories from public repos. Internal repos analyzed by the embedded AI model. All generated locally at $0 cost.
Browsing repos from engram-hq, sreniatnoc, ARTIFACTIQ, and famous open-source projects
Select a skill from the tree to view its content, fetched live from GitHub.
Hover over nodes to see metrics.
Click a session to view full details.
Engram indexes skills and memories from your GitHub repos and gives you a visual dashboard to browse, search, and analyze them.
Navigate your 3-tier skill hierarchy (User, Org, Repo) with tree view and list view. See frontmatter metadata, rendered markdown, and word counts.
Interactive 3D force-directed graph with org cloud bubbles. Wireframe spheres group each org's nodes visually — hover a cloud for aggregate metrics (skills, sessions, word counts), click to zoom in. Nodes sized by cost, linked chronologically with flowing particles.
Track agent activity and costs. Session counts, model usage, and org distribution from real data. Per-session cost tracking via SDK or memory frontmatter. Try the Analytics tab above.
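To make the frontmatter route concrete, here is a minimal Python sketch of aggregating per-session cost from memory files. The field names (`model`, `cost_usd`) and the flat `key: value` frontmatter are illustrative assumptions, not Engram's actual schema.

```python
# Hypothetical sketch: sum per-session cost recorded in memory frontmatter.
# Field names ("model", "cost_usd") are assumptions, not Engram's schema.
from collections import defaultdict

def parse_frontmatter(text: str) -> dict:
    """Parse simple `key: value` frontmatter between --- fences."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return meta

def cost_by_model(memories: list[str]) -> dict[str, float]:
    """Group total cost by model across a list of memory documents."""
    totals: dict[str, float] = defaultdict(float)
    for text in memories:
        meta = parse_frontmatter(text)
        if "cost_usd" in meta:
            totals[meta.get("model", "unknown")] += float(meta["cost_usd"])
    return dict(totals)

memories = [
    "---\nmodel: claude-opus-4\ncost_usd: 0.42\n---\nFixed the flaky test.",
    "---\nmodel: claude-opus-4\ncost_usd: 0.18\n---\nRefactored sync loop.",
    "---\nmodel: qwen2.5-coder:7b\ncost_usd: 0\n---\nLocal analysis run.",
]
print(cost_by_model(memories))
```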
Search across all skills and memories with full-text matching. Faceted results by type, org, and tier with highlighted snippets. Try the Search tab above.
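As a rough illustration of faceted matching with highlighted snippets, here is a self-contained Python sketch. The document shape (`type`, `org`, `tier`, `body`) mirrors the facets named above but is an assumption about Engram's internals, not its real index format.

```python
# Illustrative sketch: substring full-text match + facet filters + <mark>
# highlighting. Document fields are assumptions, not Engram's index schema.
def search(docs: list[dict], query: str, **facets) -> list[dict]:
    results = []
    q = query.lower()
    for doc in docs:
        if q not in doc["body"].lower():
            continue  # full-text miss
        if any(doc.get(k) != v for k, v in facets.items()):
            continue  # facet filter miss
        # Highlight the first match to build a result snippet.
        i = doc["body"].lower().index(q)
        hit = doc["body"][i:i + len(query)]
        snippet = (doc["body"][:i] + f"<mark>{hit}</mark>"
                   + doc["body"][i + len(query):])
        results.append({**doc, "snippet": snippet})
    return results

docs = [
    {"type": "skill", "org": "engram-hq", "tier": "repo",
     "body": "Deployment checklist"},
    {"type": "memory", "org": "engram-hq", "tier": "org",
     "body": "Deployment failed once"},
]
print(search(docs, "deploy", type="skill"))
```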
Auto-discover orgs, scan repos for .skills/ and .memory/ directories. Incremental sync with content hashing. See the Sync tab above for a live view.
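The content-hashing idea behind incremental sync can be sketched in a few lines: hash each file, compare against the last-seen digest, and re-index only what changed. The single-dict index format here is an assumption for illustration.

```python
# Sketch of incremental sync via content hashing: only files whose SHA-256
# digest changed since the last run are re-indexed. Index format is assumed.
import hashlib

def plan_sync(files: dict[str, str], index: dict[str, str]) -> list[str]:
    """Return paths needing re-indexing; update `index` in place."""
    dirty = []
    for path, content in files.items():
        digest = hashlib.sha256(content.encode()).hexdigest()
        if index.get(path) != digest:
            dirty.append(path)
            index[path] = digest
    return dirty

index: dict[str, str] = {}
files = {".skills/deploy.md": "v1", ".memory/2024-01-01.md": "note"}
print(plan_sync(files, index))   # first run: everything is new
files[".skills/deploy.md"] = "v2"
print(plan_sync(files, index))   # second run: only the changed file
```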
Discovers existing skills and memories in your repo before generating anything new. Feeds them as context to the model, which merges rather than replaces: preserving accurate content, updating stale info, and adding net-new insights. Use --fresh for from-scratch mode.
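One plausible shape for this additive step, sketched in Python: discovered skills are prepended to the prompt so the model merges instead of regenerating. The prompt wording and directory layout here are assumptions, not Engram's actual implementation.

```python
# Hedged sketch of additive mode: existing .skills/*.md files become context
# for the local model. Prompt text and merge instructions are invented.
import tempfile
from pathlib import Path

def build_prompt(repo: Path, analysis: str, fresh: bool = False) -> str:
    existing = []
    if not fresh:
        for f in sorted((repo / ".skills").glob("*.md")):
            existing.append(f"## Existing skill: {f.name}\n{f.read_text()}")
    if existing:
        return ("Merge the analysis below into these existing skills. "
                "Preserve accurate content, update stale info, add net-new "
                "insights.\n\n" + "\n\n".join(existing)
                + "\n\n# New analysis\n" + analysis)
    return "Generate skills from scratch for this analysis:\n" + analysis

# Demo against a throwaway repo containing one existing skill.
with tempfile.TemporaryDirectory() as tmp:
    repo = Path(tmp)
    (repo / ".skills").mkdir()
    (repo / ".skills" / "testing.md").write_text("Run pytest with -x.")
    prompt = build_prompt(repo, "Repo uses FastAPI.")
print(prompt.splitlines()[0])
```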
Single curl command installs pipx, engram-cli, and Ollama. Built-in engram upgrade command checks PyPI and upgrades in place. No manual version management.
Native SwiftUI app with Swift Charts for analytics. 59 tests passing, App Store metadata prepared. Pending Apple Developer enrollment and backend deployment.
Engram understands the hierarchical structure of AI agent knowledge: from personal skills to org-wide patterns to repo-specific techniques.
Your personal routing rules, preferences, and cross-org knowledge. Stored in orgs/skills/ at the top level.
Shared conventions, coding standards, and operational knowledge for an entire GitHub org. Lives in <org>/.skills/.
Task-specific techniques like ML training configs, deployment scripts, and validation flows. Found in <repo>/.skills/.
Analyze any codebase with a local AI model. Builds on existing knowledge. No API keys. No cloud. One command.
# One-line install (pipx + engram + ollama)
curl -fsSL https://raw.githubusercontent.com/engram-hq/engram-cli/main/install.sh | bash

# Or manually
brew install ollama pipx && pipx install engram-cli

# Analyze any repo (discovers existing skills automatically)
engram analyze .

# Analyze a GitHub repo
engram analyze pallets/flask --org pallets

# Browse results in a local visual dashboard
engram browse

# Self-upgrade to latest version
engram upgrade
First run downloads Qwen2.5-Coder 7B (~4.5GB, one-time). Runs on 8GB RAM.
Upgrade models anytime: engram analyze . --model qwen2.5-coder:14b
engram upgrade checks PyPI and upgrades in place
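For intuition, a minimal Python sketch of what a PyPI version check can look like: query PyPI's public JSON API for the latest release and compare. The endpoint is PyPI's real `/pypi/<project>/json` API; the comparison below handles only plain X.Y.Z versions and is not Engram's actual upgrade logic.

```python
# Sketch of a self-upgrade check: PyPI JSON API lookup + version compare.
# Only handles simple X.Y.Z versions; real tools use richer version parsing.
import json
from urllib.request import urlopen

def latest_on_pypi(package: str) -> str:
    """Fetch the latest released version string from PyPI (network call)."""
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)["info"]["version"]

def needs_upgrade(current: str, latest: str) -> bool:
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return as_tuple(latest) > as_tuple(current)

print(needs_upgrade("3.1.0", "3.2.0"))  # True
```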
Your code never leaves your machine. No API calls, no telemetry, no cloud dependency.
No API tokens burned. Analyze unlimited repos. Perfect for researchers and students.
Discovers existing skills and memories, feeds them to the model. Builds on what you have instead of regenerating from scratch.
engram upgrade checks PyPI and upgrades in place. One-liner install handles everything from zero.
$ engram analyze .
╭────────────────────────────────────────╮
│ Engram v3.1.0 - Local AI Code Analyzer │
╰────────────────────────────────────────╯

Phase 1: Heuristic Analysis
  Languages: Python (58%), Go (8%), TypeScript (6%)
  Frameworks: FastAPI, Pydantic, SQLAlchemy, pytest
  Testing: detected (120 test files)
  Patterns: REST API, Service layer, Documentation site

Discovery: Scanning for existing knowledge...
  Found 11 existing skills and 34 existing memories
  Will use additive mode

Phase 2: Local Model Inference (Additive mode)
  [1/5] Generating architecture skill...
  [2/5] Generating patterns skill...
  [3/5] Generating testing skill...
  [4/5] Generating project overview...
  [5/5] Generating activity analysis...

╭──────────────────── Results ───────────────────╮
│ Generated 3 skills + 2 memories (ADDITIVE)     │
│ Model: qwen2.5-coder:7b | Time: 47s | Cost: $0 │
╰────────────────────────────────────────────────╯
Lightweight TypeScript SDK to track what your agents do. Automatic batching, retry, and cost calculation.
import { Engram } from '@engram-hq/sdk'

const engram = new Engram({ apiKey: 'eng_...' })

// Track agent operations (batched automatically)
engram.track({
  operation: 'create',
  targetType: 'skill',
  agentType: 'claude-code',
  modelId: 'claude-opus-4-6',
  inputTokens: 45230,
  outputTokens: 12847,
  cacheReadTokens: 128000,
  durationMs: 85430,
})

// Flush on exit
await engram.shutdown()
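For a feel of the cost arithmetic such an SDK could perform, here is a rough Python sketch that turns token counts into dollars. The per-million-token rates are placeholders for illustration, not real model pricing.

```python
# Illustrative token-to-cost arithmetic. Rates below are placeholders,
# not actual pricing for any model.
RATES = {  # USD per million tokens: (input, output, cache read)
    "claude-opus-4-6": (15.0, 75.0, 1.5),
}

def event_cost(model_id: str, input_tokens: int, output_tokens: int,
               cache_read_tokens: int = 0) -> float:
    """Compute the USD cost of one tracked event from its token counts."""
    rin, rout, rcache = RATES[model_id]
    return (input_tokens * rin + output_tokens * rout
            + cache_read_tokens * rcache) / 1_000_000

# Same token counts as the track() example above.
print(round(event_cost("claude-opus-4-6", 45_230, 12_847, 128_000), 4))
```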
Two ways to run Engram: pick what fits your workflow.
One command installs everything. Analyze any repo from your terminal.
# Install everything in one shot
curl -fsSL https://raw.githubusercontent.com/engram-hq/engram-cli/main/install.sh | bash

# Analyze a repo
engram analyze .

# Upgrade anytime
engram upgrade
Full web UI with search, timeline, analytics. Docker one-liner.
git clone https://github.com/engram-hq/engram-web.git
docker compose -f docker-compose.local.yml up --build
Engram is open source. The CLI runs entirely on your machine with a built-in AI model and additive analysis. Your knowledge compounds over time. No API keys, no cloud, no cost.