Fluently is an open platform for human-AI collaboration knowledge. Cycles contributed by the community are structured by collaboration frameworks, starting with the AI Fluency 4D Framework. Learn from proven patterns, contribute yours, and serve them as AI-ready skills, prompts, resources, and tools.
Every cycle built on the 4D Framework answers four questions, one per dimension. Together they eliminate the most common failure modes in AI-assisted work.
The knowledge base is not just for reading. Each dimension maps to a concrete type of AI service primitive, so the patterns you learn and contribute are directly actionable in any MCP-compatible agent.
Prompts are .md files that give agents the right framing for a task. Every human-AI collaboration is a chain of prompts. A framework like the 4D Framework classifies those prompt chains into named clusters: Delegation, Description, Discernment, and Diligence. Fluently defines how those clusters connect, loop, and restart into a repeatable cycle.
Each cycle in the knowledge base includes its full collaboration block: ordered D-clusters with example prompts, transition triggers, and loop-back conditions. Use get_collaboration_pattern via the MCP server to retrieve this for any cycle.
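As an illustration, a retrieved collaboration block might map onto a shape like this TypeScript sketch. The field names and the example cycle are assumptions for illustration, not the actual Fluently schema:

```typescript
// Hypothetical shape of a collaboration block as returned by
// get_collaboration_pattern. Field names are illustrative assumptions.
interface DCluster {
  name: "Delegation" | "Description" | "Discernment" | "Diligence";
  examplePrompts: string[];
  transitionTrigger: string; // condition for moving to the next cluster
  loopBackTo?: string;       // cluster to restart from, if the cycle loops
}

interface CollaborationBlock {
  cycleId: string;
  clusters: DCluster[];      // ordered D-clusters
}

// A made-up example cycle for a code-review task.
const example: CollaborationBlock = {
  cycleId: "code-review",
  clusters: [
    {
      name: "Delegation",
      examplePrompts: ["Decide what to hand the AI: style checks, not architecture."],
      transitionTrigger: "scope agreed",
    },
    {
      name: "Discernment",
      examplePrompts: ["Does each flagged issue actually violate the style guide?"],
      transitionTrigger: "all findings triaged",
      loopBackTo: "Description",
    },
  ],
};
```

The loop-back condition is what turns an ordered list of clusters into a cycle rather than a one-way pipeline.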
Community knowledge flows through GitHub MCP: no server required, always current. Private or isolated knowledge uses the custom MCP server with any connector.
The Fluently knowledge base lives in a public GitHub repo. Any AI agent with the GitHub MCP server wired can read KNOWLEDGE.md, fetch knowledge/index.json, and deep-read individual YAML cycles, with no custom server, no rebuild, and no auth required for reads.
Knowledge updates the moment a community cycle is merged. Contributions open a PR automatically via the same GitHub MCP.
When you need private knowledge, like a team's proprietary patterns, a fork with domain-specific cycles, or an air-gapped environment, the custom Fluently MCP server connects to any backend: a private GitHub repo, a local directory, SQL, or NoSQL. Five connectors, one interface.
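The "five connectors, one interface" idea can be sketched in TypeScript. The method names and the in-memory backend below are illustrative assumptions, not the actual connector API:

```typescript
// Hypothetical single interface shared by all backends (GitHub repo,
// local directory, SQL, NoSQL). Names are assumptions for illustration.
interface CycleSummary {
  id: string;
  title: string;
  domain: string;
}

interface KnowledgeConnector {
  listCycles(): Promise<CycleSummary[]>; // discover
  getCycle(id: string): Promise<string>; // deep-read the raw YAML
  refresh(): Promise<void>;              // force-refresh any cache
}

// A minimal in-memory connector, standing in for the local-directory backend.
class InMemoryConnector implements KnowledgeConnector {
  constructor(private cycles: Map<string, string>) {}

  async listCycles(): Promise<CycleSummary[]> {
    return [...this.cycles.keys()].map((id) => ({ id, title: id, domain: "local" }));
  }

  async getCycle(id: string): Promise<string> {
    const yaml = this.cycles.get(id);
    if (yaml === undefined) throw new Error(`unknown cycle: ${id}`);
    return yaml;
  }

  async refresh(): Promise<void> {} // nothing cached in memory
}
```

Because every backend satisfies the same interface, the MCP tool layer never needs to know which of the five connectors is wired in.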
Six tools are exposed: discover, retrieve, deep-read, inspect dimensions, force-refresh, and contribute. No numeric scores: the agent reasons over ranked candidates contextually.
Public knowledge is fetched from raw.githubusercontent.com on every request. The custom MCP server adds a 1-hour TTL cache with a bundled offline fallback. New cycles appear the moment a PR is merged, without restarting anything.

The contribute_cycle tool handles the rest: it returns a PR URL for public knowledge, writes a file for local backends, or opens a branch automatically for private repos.

The fluent CLI scores tasks, compares cycles, lists the knowledge base, and guides you through contributing a new cycle interactively. It works offline with bundled knowledge.

Choose your AI provider, describe a task, and watch Fluently fetch live knowledge from GitHub and reason over cycles: exactly what the GitHub MCP path does, running right here.
Browser, CLI, or MCP server. Pick the path that fits your workflow.
npx fluently-cli score "your task here"
npx fluently-cli list coding
Install globally to run fluent from anywhere. Includes all commands: score, compare, list, contribute, sync.

npm install -g fluently-cli
fluent score "AI reviews PRs for style issues"
Works with any MCP-compatible client.
claude_desktop_config.json (Claude Desktop)
~/.claude/settings.json (Claude Code)
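A server entry in either file might look like the following. The command, package name (fluently-mcp), and flag are assumptions for illustration; check the project's install docs for the real invocation:

```json
{
  "mcpServers": {
    "fluently": {
      "command": "npx",
      "args": ["fluently-mcp", "--connector", "github"]
    }
  }
}
```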
Then prompt your agent: "Read KNOWLEDGE.md in Fluently-Org/fluently and find the best cycle for my task."
Install the server
Private knowledge connector
Better-scoped prompts mean fewer tokens, and fewer tokens mean lower CO₂. The 4D Framework reduces token consumption by 30–45% versus unstructured prompting, and now any agent can estimate and track that impact.
Zero dependencies: the carbon calculator is pure arithmetic, with no API calls and no libraries. Load CARBON_KNOWLEDGE.md into any agent's context and it can calculate, track, and report CO₂ from token consumption.
How it works: every LLM call has an input token count and an output token count. Each model has a gCO₂eq rate per token (from the EcoLogits open database). Multiply, sum, and you have an estimate. Fluently adds framework efficiency multipliers on top.
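The arithmetic above fits in a few lines of TypeScript. The per-token rates in the test values are placeholders, not real EcoLogits numbers:

```typescript
// Minimal sketch of the per-call estimate described above.
// Rates are gCO₂eq per token; real values would come from EcoLogits.
type Rates = { inputGCO2PerToken: number; outputGCO2PerToken: number };

// One LLM call: multiply each token count by its rate, then add.
function estimateCallGCO2(inputTokens: number, outputTokens: number, r: Rates): number {
  return inputTokens * r.inputGCO2PerToken + outputTokens * r.outputGCO2PerToken;
}

// A session is just the sum over its calls: [inputTokens, outputTokens] pairs.
function estimateSessionGCO2(calls: Array<[number, number]>, r: Rates): number {
  return calls.reduce((sum, [inp, out]) => sum + estimateCallGCO2(inp, out, r), 0);
}
```

With placeholder rates of 0.0002 gCO₂eq per input token and 0.0006 per output token, a 1000-in / 500-out call estimates to 0.2 + 0.3 = 0.5 gCO₂eq.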
| Framework | Token reduction | CO₂ saving | vs baseline |
|---|---|---|---|
| Fluently 4D | 30–45% | 25–40% | most efficient |
| Fluently Linear | 20–30% | 15–25% | |
| Fluently Cyclic | 15–25% | 10–20% | |
| No framework | — | — | baseline |
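On top of a baseline estimate, the framework adjustment can be sketched like this. The constants are midpoints of the CO₂-saving ranges in the table, chosen here purely for illustration:

```typescript
// Fraction of CO₂ saved per framework: midpoints of the table's
// CO₂-saving column (an illustrative choice, not an official figure).
const CO2_SAVING: Record<string, number> = {
  "fluently-4d": 0.325,     // midpoint of 25–40%
  "fluently-linear": 0.20,  // midpoint of 15–25%
  "fluently-cyclic": 0.15,  // midpoint of 10–20%
  "none": 0,                // unstructured baseline
};

// Apply the saving to a baseline gCO₂eq estimate; unknown frameworks
// fall back to the baseline (no saving).
function adjustedGCO2(baselineGCO2: number, framework: string): number {
  const saving = CO2_SAVING[framework] ?? 0;
  return baselineGCO2 * (1 - saving);
}
```

A 100 gCO₂eq baseline session run under the 4D Framework would estimate to roughly 67.5 gCO₂eq under these midpoint assumptions.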
npm install fluently-carbon
Copy this instruction into your AI assistant's system prompt or project instructions. It teaches any agent to detect available tools, load the live knowledge base, and execute the 4D cycle correctly, drawing on live knowledge rather than training memory.
Alternatively, place a CLAUDE.md file at your repo root for Claude Code. The agent will auto-detect whether the Fluently MCP server or GitHub MCP is connected and load the knowledge base accordingly.
Fluently is open-source and community-driven. Contribute a cycle, fork the knowledge base, or wire it to your team's private patterns.