Game Changer

Context
Engineering

Stop writing prompts. Start designing context windows.

6
Building Blocks
60%
Target Context Fill
∞
Reproducibility
The Problem

The LLM is a Stateless Box

Every time an agent runs, it starts from zero. It doesn't remember your last session, your databases, or your tools. The output is only as good as the context you feed it.

📥
Context In
Everything you give it
→
🧠
LLM
Stateless processing
→
📤
Output
Quality = f(context)
The Framework

6 Building Blocks

01
Identity & Outcome
Who is this agent? What should it achieve? System prompt level.
02
Tools & Schemas
Every API and tool with exact input/output schemas.
03
Resources & Data
What private data? Intelligent retrieval, not data dumps.
04
Workflow
Outcome-defined steps. Describe what, not how.
05
Memory
Session persistence, compaction rules, key-value stores.
06
Composition
Split into sub-agents when context windows diverge.
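Block 6 can be sketched as a small orchestrator that hands each concern to a sub-agent with its own narrow context. A minimal sketch, where run_agent() is a stand-in for an LLM call and all names are illustrative:

```python
# Sketch of composition: an orchestrator routes work to sub-agents
# whose context needs have diverged. Each sub-agent sees only the
# slice of context relevant to its task.
def run_agent(name: str, context: dict) -> str:
    # Stand-in for invoking an LLM with this agent's own context window.
    return f"{name} handled {sorted(context)}"

def orchestrate(task: dict) -> list[str]:
    results = []
    if task.get("needs_research"):
        # Research sub-agent only sees search-related context.
        results.append(run_agent("researcher", {"query": task["query"]}))
    if task.get("needs_writing"):
        # Writer sub-agent only sees the draft brief.
        results.append(run_agent("writer", {"brief": task["brief"]}))
    return results
```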
Deep Dive

Identity + Tools

Block 1
Identity & Outcome
Define the broad outcome, not step-by-step instructions.
Do: "Produce a client audit that enables proposal creation"
Don't: "First search Google, then check LinkedIn, then..."
Block 2
Tools & Schemas
The agent must know exactly what it can call.
Tool      | Input          | Output
WebSearch | query string   | results[]
Notion    | filter object  | pages[]
Write     | path + content | saved file
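The table above can be made machine-readable so the agent knows exactly what it can call. A minimal sketch in Python, assuming a generic schema format (the field names and the describe_tools helper are illustrative, not a specific vendor's tool-calling API):

```python
# Illustrative tool schemas mirroring the table above.
# The structure is an assumption, not a specific vendor format.
TOOLS = [
    {"name": "WebSearch",
     "input": {"query": "string"},
     "output": {"results": "list[SearchResult]"}},
    {"name": "Notion",
     "input": {"filter": "object"},
     "output": {"pages": "list[Page]"}},
    {"name": "Write",
     "input": {"path": "string", "content": "string"},
     "output": {"saved_file": "string"}},
]

def describe_tools(tools: list[dict]) -> str:
    """Render the tool table into a system-prompt snippet."""
    lines = []
    for t in tools:
        ins = ", ".join(f"{k}: {v}" for k, v in t["input"].items())
        outs = ", ".join(f"{k}: {v}" for k, v in t["output"].items())
        lines.append(f"- {t['name']}({ins}) -> {outs}")
    return "\n".join(lines)
```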
Deep Dive

Resources + Memory

Block 3
Resources & Data Access
Intelligent retrieval: let the agent ask for data iteratively.
Context Budget: 60-70%
Never max out. Leave room for thinking.

Data Economy:
Pass IDs → not full objects
Summarize → don't dump raw
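A minimal sketch of the budget and data-economy rules above, assuming a 200K-token window and a rough 4-characters-per-token heuristic (both numbers, and every name here, are illustrative; a real agent would use its model's tokenizer):

```python
# Sketch of the 60-70% context budget plus "pass IDs, not objects".
CONTEXT_WINDOW = 200_000   # assumed model window, in tokens
TARGET_FILL = 0.65         # the 60-70% budget from the slide

def rough_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def fits_budget(context_parts: list[str]) -> bool:
    used = sum(rough_tokens(p) for p in context_parts)
    return used <= CONTEXT_WINDOW * TARGET_FILL

def as_pointer(page: dict) -> str:
    """Data economy: pass an ID plus a one-line summary, not the object."""
    return f"notion://{page['id']} ({page['summary']})"
```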
Block 5
Memory Strategy
How does the agent handle state across runs?
Persistence: Session Log in Notion
Compaction: Keep last 3 verbatim, summarize older
Key-Value: Write bulky data to Notion, pass page ID
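The compaction rule above (keep the last 3 verbatim, summarize older) can be sketched as follows, with summarize() standing in for an LLM call:

```python
# Sketch of the compaction rule: last N turns stay verbatim,
# everything older collapses into one summary entry.
def summarize(turns: list[str]) -> str:
    # Stand-in for an LLM summarization call.
    return f"[summary of {len(turns)} earlier turns]"

def compact(history: list[str], keep_verbatim: int = 3) -> list[str]:
    if len(history) <= keep_verbatim:
        return history
    older, recent = history[:-keep_verbatim], history[-keep_verbatim:]
    return [summarize(older)] + recent
```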
The Shift

Before vs After CE

โŒ Before

✗ "Use web search to research"
✗ No tool schemas: agent guesses
✗ Dump all data into context
✗ No memory: every run is blind
✗ Monolith agents

✓ After CE

✓ Tool table with exact I/O schemas
✓ Full schemas for every API call
✓ 60-70% budget, pointers over dumps
✓ Session Log + compaction rules
✓ Orchestrator + sub-agents
Your System

Notion as Context Layer

🧭
ROSME
Block 1: Identity
→
🏗️
AI USE CASE
Blocks 2-6: PRD
→
📝
Prompt Library
Block 1: Templates
→
📊
Session Log
Block 5: Memory
→
🎓
AI 2026
Learning State

Every database feeds specific context blocks. The agent never gets everything; it gets exactly what it needs for the current task.

Proof It Works

Customer Audit CE Review

Ran the framework against an existing skill. Found 3 of 6 blocks missing entirely.

Block                 | Status  | Gap
1. Identity & Outcome | Partial | Missing outcome definition + scope boundaries
2. Tools & Schemas    | Missing | Says "use web search" but no tool table
3. Resources & Data   | Missing | No Notion DB references, no context budget
4. Workflow           | Good    | Clear steps; could be more outcome-defined
5. Memory             | Missing | No compaction, no persistence strategy
6. Composition        | Missing | Monolith OK for now, but noted for future
Progress

What We Built

Done
Prompt Library DB
Central storage for all reusable prompts, templates, and agent personas.
Done
Session Log DB
Cross-session memory. What happened, what blocked, next steps.
Done
CE Agent Template
Full 6-block template stored in Prompt Library for all future agents.
Done
Updated PRD Template
ai-use-case-creator now generates PRDs with Tools, Resources, Memory blocks.
Next Session

What's Next

1. Install Skills
Copy updated ai-use-case-creator + session-closer from outputs/ to skills folder.
2. CE Rewrite Skills
customer-audit → proposal-generator → shooting-plan. Add missing blocks to each.
3. Compaction Strategy
Define system-wide rules for summarizing vs. passing verbatim.
4. Composition Map
Identify which skills should split into orchestrator + sub-agent patterns.
โ† โ†’ Arrow Keys ยท F Fullscreen