AI Fluency 4D Framework
fluent·ly

The Open Standard for
Human-AI Collaboration

Fluently is an open platform for human-AI collaboration knowledge. Cycles contributed by the community are structured by collaboration frameworks, starting with the AI Fluency 4D Framework. Learn from proven patterns, contribute yours, and serve them as AI-ready skills, prompts, resources, and tools.

16 Community cycles · 7 Domains · 4D Bundled framework · 0 Installs required
Claude + GitHub MCP: live knowledge
# Claude reads KNOWLEDGE.md via GitHub MCP
Claude › Read KNOWLEDGE.md in Fluently-Org/fluently
→ 16 cycles · 7 domains · always live from main branch

# Find cycles that match the task
Claude › Read knowledge/index.json and find cycles that fit "AI reviews PRs for style"
Best fit: Code Review Triage
Delegation → augmented (human approves before merge)
Discernment → compare output vs senior review standard
⚠ Antipattern: auto-merge without human sign-off

# Read full YAML for detailed guidance
Claude › Read knowledge/coding-code-review-triage.yaml
✓ Full cycle loaded: delegation, description, discernment, diligence
The AI Fluency 4D Framework

Four dimensions of great human-AI collaboration

Every cycle built on the 4D Framework answers these four questions. Together they eliminate the most common failure modes in AI-assisted work.

🎯
D1 · Delegation
Who owns the decision?
"Who should own this: human, AI, or both?"
Define the autonomy level before the task starts: automated, augmented, or supervised. Ambiguity here creates accountability gaps.
📝
D2 · Description
Is the context complete?
"What does the AI need to understand the task fully?"
Framing determines output quality. A good description includes role, constraints, examples, and expected format, not just the ask.
🔍
D3 · Discernment
How do you evaluate trust?
"How do you know when the output is good enough?"
Define what good looks like before reviewing. Without explicit criteria, humans default to accepting fluent-sounding output regardless of accuracy.
✅
D4 · Diligence
Who stays accountable?
"What human sign-off is required after AI involvement?"
Accountability doesn't transfer to AI. Diligence names the human who owns the outcome, along with the minimum verification step before shipping.
From knowledge to practice

Each framework dimension becomes a service your AI can call

The knowledge base is not just for reading. Each dimension maps to a concrete type of AI service primitive, so the patterns you learn and contribute are directly actionable in any MCP-compatible agent.

D2 · Description
Skills
Context and scope framing patterns. Triggered on demand as token-light .md files that give agents the right framing for a task.
D1 · Delegation
Prompts
Task handoff and sub-agent patterns. Named, parameterized invocation templates that encode how much autonomy to grant for each task type.
D3 · Discernment
Resources
Proceed, pause, and escalate heuristics. Read-only knowledge blobs and examples an agent consults to decide whether output is trustworthy enough to proceed.
D4 · Diligence
Tools
Verify, validate, and safety checks. Callable functions that contribute new patterns, retrieve existing ones, and validate cycles against the shared schema.
How Cycles Work

Framework dimensions are conversation clusters, not tags

Every human-AI collaboration is a chain of prompts. A framework like the 4D Framework classifies those prompt chains into named clusters: Delegation, Description, Discernment, and Diligence. Fluently defines how they connect, loop, and restart into a repeatable cycle.

One cluster = a chain of related prompts

DEL · Delegation cluster
Human: "Can you auto-approve style issues and flag logic ones?"
AI: "I can flag logic issues with confidence. Want me to auto-close only style nits?"
Human: "Yes. Auto-close style, surface everything else with a severity label."
↳ Trigger: autonomy boundaries agreed, moves to Description
DES · Description cluster
Human: "Here's the PR diff, our style guide, and the three issues this PR addresses."
AI: "Should I cross-reference open issues when flagging findings?"
Human: "Yes, link any finding to the relevant issue if there's a match."
↳ Trigger: AI has enough context, moves to Discernment

Cycles define how the clusters connect

LINEAR email writing, lesson planning, clinical docs
Del → Des → Dis → Dil · Single pass, no loops · Safety-critical workflows
LINEAR WITH LOOPS code review, bug triage, marketing copy
Del → Des → Dis → Dil · Loop-back from Dis to Del when quality fails
ITERATIVE refactoring, literature review, content dev
Full cycle repeats multiple times · Each pass builds on the last · Scope narrows
CYCLIC creative collaboration, continuous processes
Dil loops back to Del · Continuous · Direction can reset at any Dil checkpoint

Code Review Triage · linear_with_loops · 4 D-clusters · 1 conditional loop-back

DEL
Negotiate AI scope
Human and AI agree on autonomy boundaries and severity thresholds
DES
Provide context
PR diff, style guide, and linked issues shared with AI
DIS
Evaluate findings
Human validates which AI flags are real vs false positives
↩ loop back to DEL if >30% false positives
DIL
Approve & document
Senior engineer signs off, decisions logged in PR comments
↻ restart for next PR
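The loop-back rule in this cycle can be sketched as a tiny control loop. This is an illustrative sketch, not project code: the function, its inputs, and the "renegotiation halves the noise" assumption are made up; only the four gates and the 30% threshold come from the cycle above.

```python
# Sketch of the Code Review Triage loop-back rule (linear_with_loops).
# Only the four gates and the 30% threshold come from the cycle above;
# the function, inputs, and the halving assumption are illustrative.

FALSE_POSITIVE_THRESHOLD = 0.30

def run_cycle(flags_total, flags_false):
    """One cycle run; loops back to DEL while the false-positive rate is too high."""
    passes = 0
    while True:
        passes += 1
        # DEL: negotiate autonomy boundaries and severity thresholds
        # DES: share PR diff, style guide, and linked issues
        # DIS: human validates which AI flags are real vs false positives
        fp_rate = flags_false / flags_total
        if fp_rate > FALSE_POSITIVE_THRESHOLD:
            # >30% false positives: loop back to DEL and renegotiate scope
            flags_false //= 2  # hypothetical: renegotiation halves the noise
            continue
        # DIL: senior engineer signs off, decisions logged in PR comments
        return {"passes": passes, "fp_rate": fp_rate}

result = run_cycle(flags_total=20, flags_false=8)
# pass 1: 8/20 = 0.40 > 0.30, loop back; pass 2: 4/20 = 0.20, proceed to DIL
```

The key design point is the explicit loop-back: the cycle renegotiates scope at DEL rather than silently retrying the same review.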

Community cycles by collaboration pattern

LINEAR
Email Writing · Lesson Planning
Clinical Documentation · Legal Drafting
Single pass · No loops · Strict accountability
LINEAR WITH LOOPS
Code Review · Bug Triage
Test Generation · Course Design
Data Analysis · Marketing Copy
Loop when quality fails · Scope renegotiation
ITERATIVE
Refactoring · Literature Review
Content Development · Iterative Refinement
Multiple full passes · Each builds on last
CYCLIC
Creative AI Collaboration
Continuous · Direction resets at any checkpoint

Each cycle in the knowledge base includes its full collaboration block: ordered D-clusters with example prompts, transition triggers, and loop-back conditions. Use get_collaboration_pattern via the MCP server to retrieve this for any cycle.

Architecture

Two paths to the knowledge base

Community knowledge flows through GitHub MCP: no server required, always current. Private or isolated knowledge uses the custom MCP server with any connector.

Path A: Community (default)

GitHub MCP reads the knowledge base directly

The Fluently knowledge base lives in a public GitHub repo. Any AI agent with the GitHub MCP server wired can read KNOWLEDGE.md, fetch knowledge/index.json, and deep-read individual YAML cycles, with no custom server, no rebuild, and no auth required for reads.

Knowledge updates the moment a community cycle is merged. Contributions open a PR automatically via the same GitHub MCP.

✓ Zero install ✓ Always current ✓ Community contributions
AI Agent: settings.json
// Wire GitHub MCP — no auth needed for reads
"mcpServers": {
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxx" }
  }
}

// Then prompt your agent:
Read KNOWLEDGE.md in Fluently-Org/fluently
Find cycles for "AI draft + human review" workflow
Path B: Private / Isolated

Custom MCP server with pluggable connectors

When you need private knowledge, like a team's proprietary patterns, a fork with domain-specific cycles, or an air-gapped environment, the custom Fluently MCP server connects to any backend: a private GitHub repo, a local directory, SQL, or NoSQL, alongside the bundled community default. Five connectors, one interface.

Six tools are exposed: discover, retrieve, deep-read, inspect dimensions, force-refresh, and contribute. No numeric scores: the agent reasons over ranked candidates contextually.

✓ Private knowledge ✓ 5 connectors ✓ Offline fallback
fluently-mcp-server: connectors
# Community (default — no config)
$ fluently-mcp-server

# Private GitHub repo
FLUENTLY_CONNECTOR=github-private
FLUENTLY_GITHUB_REPO=your-org/knowledge
FLUENTLY_GITHUB_TOKEN=ghp_xxx
$ fluently-mcp-server

# Local / offline
FLUENTLY_CONNECTOR=local
FLUENTLY_LOCAL_PATH=./my-knowledge
$ fluently-mcp-server
Features

Everything in one knowledge base

🔄
Live Knowledge
Always current, no rebuilds
The GitHub MCP connector fetches directly from raw.githubusercontent.com on every request. The custom MCP server adds a 1-hour TTL cache with bundled offline fallback. New cycles appear the moment a PR is merged, without restarting anything.
🤖
Agent in the Loop
No biased scoring
Numeric scores encode assumptions about platform maturity and user skill. Instead, the server returns ranked keyword-similarity candidates and lets the agent reason contextually over fit, adapting to your specific context.
🏗️
Private Knowledge
Your org's own patterns
Fork the repo, add proprietary cycles, point the server at your private repo. Your team's hard-won AI collaboration patterns stay internal and compound over time.
📤
Contribute
One command to PR
Validate a new cycle against the Zod schema and the contribute_cycle tool handles the rest: it returns a PR URL for public knowledge, writes a file for local setups, or opens a branch automatically for private repos.
🛡️
Schema Validation
Zod-validated entries
Every cycle must pass the shared Zod schema before being accepted: all 4 dimensions, examples, antipatterns, score hints. CI enforces this on every PR.
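The real schema is written in Zod (TypeScript); as a rough illustration of the shape it enforces, here is an equivalent check sketched in Python. The required fields come from the text above, but the exact structure is an assumption.

```python
# Rough Python illustration of the shape the Zod schema enforces.
# The real schema is TypeScript/Zod; the exact structure here is an
# assumption, only the required field names come from the description.

REQUIRED_DIMENSIONS = ("delegation", "description", "discernment", "diligence")
REQUIRED_FIELDS = ("examples", "antipatterns", "score_hints")

def validate_cycle(cycle):
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    for dim in REQUIRED_DIMENSIONS:
        if dim not in cycle:
            errors.append(f"missing dimension: {dim}")
    for field in REQUIRED_FIELDS:
        if not cycle.get(field):
            errors.append(f"missing or empty: {field}")
    return errors

draft = {"delegation": "augmented", "description": "PR diff + style guide",
         "examples": ["auto-close style nits"]}
problems = validate_cycle(draft)
# flags missing discernment, diligence, antipatterns, and score_hints
```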
📦
CLI
Terminal-first workflow
The fluent CLI scores tasks, compares cycles, lists the knowledge base, and guides you through contributing a new cycle interactively. Works offline with bundled knowledge.
Live Demo

Try it in your browser

Choose your AI provider, describe a task, and watch Fluently fetch live knowledge from GitHub and reason over cycles: exactly what the GitHub MCP path does, running right here.

AI Connection

Your API key is stored only in your browser's localStorage and never sent anywhere except your chosen provider's API.

Your Task
What's happening
Fetch knowledge/index.json from GitHub (public, no auth)
Keyword-match your task to the top 3 cycles
Fetch full YAML for each candidate from GitHub
Your AI agent reasons over the cycles and tells you which fits best, and why
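The keyword-matching step above can be sketched as plain token overlap: rank index entries by shared keywords and keep the top 3. The index entries below are made up for illustration; the real knowledge/index.json has its own shape and vocabulary.

```python
# Sketch of the demo's keyword-matching step: rank index entries by
# keyword overlap with the task and keep the top 3. Entries are made up.

def keyword_match(task, index, top_n=3):
    task_tokens = set(task.lower().split())
    ranked = sorted(index,
                    key=lambda e: len(task_tokens & set(e["keywords"])),
                    reverse=True)
    return ranked[:top_n]

index = [
    {"id": "coding-code-review-triage", "keywords": {"pr", "review", "style", "code"}},
    {"id": "writing-email", "keywords": {"email", "draft", "tone"}},
    {"id": "coding-bug-triage", "keywords": {"bug", "triage", "severity"}},
]

top = keyword_match("ai reviews prs for style issues", index)
# "style" overlaps the code-review entry, so it ranks first
```

Note the design choice described in the Features section: candidates are returned ranked, without numeric scores, and the AI agent reasons over fit contextually.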
Agent output
GitHub MCP AI
Connect an AI provider (GPT, Gemini, Copilot, Mistral…),
describe your AI task, and click Run.

Fetches live knowledge from GitHub
and streams AI reasoning back here.
Get Started

Three ways to use Fluently

Browser, CLI, or MCP server. Pick the path that fits your workflow.

🌐 Browser
Run in this page
Zero install. Connect any AI provider above, describe your task, and the demo runs the full GitHub MCP + AI reasoning flow right here.
Go to demo
⚡ npx
CLI: no global install
Run the Fluently CLI without touching your PATH. Works anywhere Node.js 20+ is available. Scores and compares cycles from the terminal.
npx fluently-cli score "your task here"
npx fluently-cli list coding
📦 Global install
CLI: always available
Install once to run fluent from anywhere. Includes all commands: score, compare, list, contribute, sync.
npm install -g fluently-cli
fluent score "AI reviews PRs for style issues"
MCP Setup

Wire it to your AI assistant

Works with any MCP-compatible client.

claude_desktop_config.json (Claude Desktop)

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxx" }
    }
  }
}

~/.claude/settings.json (Claude Code)

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxx" }
    }
  }
}

Then prompt your agent: "Read KNOWLEDGE.md in Fluently-Org/fluently and find the best cycle for my task."

Install the server

# Install globally
npm install -g fluently-mcp-server

# MCP agent config
{
  "mcpServers": {
    "fluently": { "command": "fluently-mcp-server" }
  }
}

Private knowledge connector

{
  "mcpServers": {
    "fluently": {
      "command": "fluently-mcp-server",
      "env": {
        "FLUENTLY_CONNECTOR": "github-private",
        "FLUENTLY_GITHUB_REPO": "your-org/knowledge",
        "FLUENTLY_GITHUB_TOKEN": "ghp_xxx"
      }
    }
  }
}
Carbon Footprint

AI has a carbon cost. Fluently helps reduce it.

Better-scoped prompts mean fewer tokens. Fewer tokens mean lower CO₂. The 4D framework reduces token consumption by 30–45% vs unstructured prompting — and now any agent can estimate and track that impact.

Zero-dependency · The carbon calculator is pure arithmetic — no API calls, no libraries. Load CARBON_KNOWLEDGE.md into any agent's context and it can calculate, track, and report CO₂ from token consumption.

How it works: every LLM call has an input token count and an output token count. Each model has a gCO₂eq rate per token (from the EcoLogits open database). Multiply, sum, and you have an estimate. Fluently adds framework efficiency multipliers on top.
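A worked version of that multiply-and-sum estimate, sketched in Python. The per-token rate below is a made-up placeholder, not a real EcoLogits figure, and the 35% reduction is just one point inside the 30–45% band quoted for the 4D framework.

```python
# Worked version of the multiply-and-sum estimate. RATE_G_PER_TOKEN is a
# made-up placeholder, NOT a real EcoLogits figure; 0.35 is one point
# inside the 30-45% token-reduction band quoted for the 4D framework.

RATE_G_PER_TOKEN = 0.0005  # hypothetical gCO2eq per token for some model

def estimate_gco2(input_tokens, output_tokens,
                  rate=RATE_G_PER_TOKEN, token_reduction=0.0):
    """(input + output) tokens x per-token rate, scaled by a framework multiplier."""
    raw = (input_tokens + output_tokens) * rate
    return raw * (1.0 - token_reduction)

baseline = estimate_gco2(80_000, 20_000)                       # no framework
with_4d  = estimate_gco2(80_000, 20_000, token_reduction=0.35)
# 100k tokens x 0.0005 g/token = 50.0 g baseline; 35% fewer tokens -> 32.5 g
```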

Framework         Token reduction   CO₂ saving vs baseline
Fluently 4D       30–45%            25–40% (most efficient)
Fluently Linear   20–30%            15–25%
Fluently Cyclic   15–25%            10–20%
No framework      —                 baseline
Rates are from the EcoLogits open model database (MIT licensed). Anthropic, OpenAI, and Mistral figures are modelled bottom-up estimates (±40% band, medium confidence); Google Gemini figures are calibrated against vendor-disclosed data (high confidence). Fluently savings are from the 3-task benchmark, March 2026. Download: CARBON_KNOWLEDGE.md · CARBON_SKILL.md · npm install fluently-carbon
[Interactive carbon estimator: per-100k-token and demo-session (3 tasks) estimates, session token tracking, and framework comparison. Select a model to see the estimate.]
Agent Protocol

Enable your AI to run Fluently

Copy this instruction into your AI assistant's system prompt or project instructions. It teaches any agent to detect available tools, load the live knowledge base, and execute the 4D cycle correctly — not from training memory.

instruction-agent-fluently.txt Claude Projects · Claude Code
# FLUENTLY COLLABORATION PROTOCOL
Source: https://github.com/Fluently-Org/fluently
Reference: https://fluently.ctrl6.com/guide.html

You are operating under the Fluently open standard for human-AI collaboration. Fluently is framework-agnostic. Your default framework is the AI Fluency 4D Framework (Delegation, Description, Discernment, Diligence) by Dakan & Feller, but other frameworks may be registered in the knowledge base. Always treat the active framework as a variable, not a constant.

────────────────────────────────────────────────────────────────
STEP 0 – ENVIRONMENT DETECTION (run once, silently, on first load)
────────────────────────────────────────────────────────────────
Detect which path is available and follow it. Do not announce the detection process — just route correctly.

PATH A · Fluently MCP server connected
→ Call list_domains to confirm the server is live.
→ Call the equivalent of get_framework_list or read frameworks/index.json to discover all registered frameworks.

PATH B · GitHub MCP connected (no Fluently MCP)
→ Read frameworks/index.json in repo Fluently-Org/fluently (main branch) to discover all registered frameworks.
→ Read KNOWLEDGE.md for domain and cycle orientation.

PATH C · No MCP (chat interface only)
→ Fetch https://github.com/Fluently-Org/fluently to load the framework description and available domains.
→ Operate from the fetched source. Do not substitute training memory for live framework definitions.

In all three paths: load the list of registered frameworks before the first response. The 4D framework is the default if none are specified and only one is registered.

────────────────────────────────────────────────────────────────
STEP 1 – FRAMEWORK SELECTION (run once per session, before first task)
────────────────────────────────────────────────────────────────
After loading the framework list:

IF only the default framework (4D) is registered:
→ Activate it silently. No question needed.

IF more than one framework is registered:
→ Ask once: "I found [N] frameworks registered in the Fluently knowledge base: [list names]. Which would you like to use, or shall I default to [default framework name]?"
→ Wait for the answer before proceeding.
→ Do not ask again for the rest of the session unless the user requests a switch with "use framework [name]".

IF the user specifies a framework by name at any point:
→ Load its dimension definitions from the source.
→ Switch to it immediately and carry forward.

Store the active framework as:
ACTIVE_FRAMEWORK = { id, name, dimensions[] }
All subsequent steps reference ACTIVE_FRAMEWORK.dimensions, never hardcoded dimension names.

────────────────────────────────────────────────────────────────
STEP 2 – KNOWLEDGE BASE ORIENTATION (run once per session)
────────────────────────────────────────────────────────────────
Load the available domains and cycle index for the active framework. Know what cycles exist before any task arrives. This is not optional — an agent that skips this step and operates from training memory is not running Fluently; it is describing it.

────────────────────────────────────────────────────────────────
STEP 3 – PRE-TASK PROTOCOL (run before every response)
────────────────────────────────────────────────────────────────
Before producing any output for a task:

3a. IDENTIFY DOMAIN
Classify the task into one of the knowledge base domains. State it in one word.

3b. FIND MATCHING CYCLE
Check the knowledge base for a cycle that matches this domain and task type under ACTIVE_FRAMEWORK.
→ If a match exists: name it and note its collaboration pattern (linear / linear_with_loops / cyclic / iterative). Reference its antipatterns during D3.
→ If no match exists: state "No matching cycle found — running first-principles [ACTIVE_FRAMEWORK.name] cycle." Flag this as a potential contribution candidate.

3c. SET DIMENSION SCOPE
For each dimension in ACTIVE_FRAMEWORK.dimensions (in canonical order), state its scope for this specific task in one sentence each — before writing the output. This is the pre-output protocol, not a post-output label.

────────────────────────────────────────────────────────────────
STEP 4 – RUN THE CYCLE
────────────────────────────────────────────────────────────────
Execute each dimension in ACTIVE_FRAMEWORK.dimensions, in canonical order. For each dimension:
· Apply its definition as a decision gate, not a label.
· The second-to-last dimension is always the trust/evaluation gate: name at least one specific risk, gap, or assumption the human must verify. "Review for accuracy" is not acceptable — name the specific claim.
· The final dimension is always the accountability gate: name the exact human action required before this output is used. "Review as needed" is not acceptable — name the action.

Loop-back condition: if output quality fails the trust gate, return to the first dimension and renegotiate scope before producing new output. Do not silently retry.

────────────────────────────────────────────────────────────────
STEP 5 – SCORE AND COMPOUND
────────────────────────────────────────────────────────────────
After each task:
· Score each dimension 0–100 using the active framework's score_hints as weighting guidance.
· Compute a weighted overall score.
· Carry cycle context forward. Each task in a session builds on the previous one — do not reset.
· If a novel collaboration pattern emerged, flag it: "This cycle is a candidate for contribution to the Fluently knowledge base."

────────────────────────────────────────────────────────────────
ANTIPATTERNS — NEVER DO THESE
────────────────────────────────────────────────────────────────
· Applying dimension labels after producing output instead of running dimensions as decision gates before output
· Assuming 4D is the only registered framework without checking
· Skipping the knowledge base lookup and operating from training memory alone
· Vague trust/evaluation flags ("verify accuracy", "check this") without naming the specific claim to verify
· Optional or vague accountability checkpoints
· Resetting framework context between tasks in the same session
· Describing the framework instead of running it

────────────────────────────────────────────────────────────────
FORMAT RULE
────────────────────────────────────────────────────────────────
The output leads. The framework block follows compactly. The cycle is a service to the human, not a performance of compliance. If the framework block is longer than the output, something is wrong.
Paste into Claude Projects → Project Instructions or into a CLAUDE.md file at your repo root for Claude Code. The agent will auto-detect whether the Fluently MCP server or GitHub MCP is connected and load the knowledge base accordingly.
instruction-agent-fluently.txt ChatGPT Custom Instructions
Paste the full content of instruction-agent-fluently.txt into "What would you like ChatGPT to know about you?" or into a GPT system prompt (GPT Builder → Configure → Instructions).

Raw file: https://raw.githubusercontent.com/Fluently-Org/fluently/main/instruction-agent-fluently.txt

For PATH B (GitHub MCP), connect the GitHub plugin and ensure the token has read access to Fluently-Org/fluently. For PATH C (no plugin), ChatGPT will fetch the repo page on first load and operate from live content.
ChatGPT's "Custom Instructions" field has a character limit (~1500 chars). If the full protocol exceeds it, use the GPT Builder system prompt (no limit) or paste only Steps 0–2 into Custom Instructions and reference the raw file URL for the full text.
instruction-agent-fluently.txt Any MCP-compatible agent
Add the full content of instruction-agent-fluently.txt to your agent's system prompt before the first user message.

Raw file (always up to date): https://raw.githubusercontent.com/Fluently-Org/fluently/main/instruction-agent-fluently.txt

Works with any agent that supports:
· A configurable system prompt
· Optional: MCP tool calls (Fluently MCP or GitHub MCP)
· Optional: HTTP fetch (for PATH C — no MCP)

Without MCP, the agent falls back to PATH C and fetches the Fluently repo page directly. MCP is recommended for structured knowledge retrieval and cycle scoring.

MCP server (npm): npm install -g fluently-mcp-server
Then add "fluently-mcp-server" to your MCP servers config.
The protocol is framework-agnostic. If you register additional frameworks in your Fluently knowledge base, the agent will discover and offer them automatically at session start via Step 1.

Start collaborating better with AI today

Fluently is open-source and community-driven. Contribute a cycle, fork the knowledge base, or wire it to your team's private patterns.