file: ./content/docs/core/contributing.mdx
meta: {
"title": "Contributing",
"description": "Contributing to Daydreams."
}
## Contributing
To contribute to Daydreams, please review our development guidelines and
submission process.
If you are a developer and would like to contribute with code, please check out
our [GitHub repository](https://github.com/daydreamsai/daydreams) and open an
issue to discuss before opening a Pull Request.
file: ./content/docs/core/first-agent.mdx
meta: {
"title": "Your first agent",
"description": "Build your first Daydreams agent and discover the power of contexts."
}
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## What Makes Daydreams Different?
Most AI frameworks treat conversations as stateless - every interaction starts
from scratch. But real conversations aren't like that. **Daydreams changes
everything with contexts.**
### The Magic of Contexts
Imagine you're building a customer support agent. With traditional frameworks:
```text title="traditional-stateless-approach.txt"
User: "I need help with my order"
Agent: "What's your order number?"
User: "12345"
Agent: "What's your order number?" // 😕 Already forgot!
```
With Daydreams contexts:
```text title="daydreams-stateful-approach.txt"
User: "I need help with my order"
Agent: "What's your order number?"
User: "12345"
Agent: "I see order #12345. It was shipped yesterday!" // 🎉 Remembers!
```
**Contexts are isolated workspaces** that:
* 🧠 **Remember** - Each conversation has its own memory
* 🔒 **Isolate** - Different users never see each other's data
* 🎯 **Focus** - Specialized behaviors for different situations
* 🔄 **Persist** - Memory survives between conversations
## How Daydreams Works
An agent in Daydreams follows this cycle:
1. **Listen** - Receives input (message, event, API call)
2. **Think** - Uses an LLM to understand and decide
3. **Act** - Performs actions or sends responses
4. **Remember** - Saves important information in context
This happens continuously, with the context providing memory and state
throughout.
## Installation
Let's build your first stateful agent! Start by installing Daydreams:
<Tabs items={["pnpm", "npm", "bun", "yarn"]}>
<Tab value="pnpm">
```bash
pnpm add @daydreamsai/core @daydreamsai/cli @ai-sdk/openai zod
```
</Tab>
<Tab value="npm">
```bash
npm install @daydreamsai/core @daydreamsai/cli @ai-sdk/openai zod
```
</Tab>
<Tab value="bun">
```bash
bun add @daydreamsai/core @daydreamsai/cli @ai-sdk/openai zod
```
</Tab>
<Tab value="yarn">
```bash
yarn add @daydreamsai/core @daydreamsai/cli @ai-sdk/openai zod
```
</Tab>
</Tabs>
**Important:** Set your `OPENAI_API_KEY` environment variable before continuing.
## Your First Context-Aware Agent
Let's build a personal assistant that remembers you - your name, preferences,
and conversation history. This showcases the true power of contexts.
### Step 1: Create your project
```bash title="setup.sh"
mkdir my-first-agent && cd my-first-agent
touch agent.ts
```
### Step 2: Build a stateful agent
```typescript title="agent.ts"
import { createDreams, context, action } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
import { openai } from "@ai-sdk/openai";
import * as z from "zod";
// Create a context - this is where the magic happens!
const assistantContext = context({
type: "personal-assistant",
// Each user gets their own context instance
schema: z.object({
userId: z.string().describe("Unique identifier for the user"),
}),
// Initialize memory for new users
create: () => ({
userName: "",
lastTopic: "",
preferences: {},
conversationCount: 0,
}),
// Define what the LLM sees about this context
render: (state) => {
const { userName, conversationCount, lastTopic, preferences } =
state.memory;
return `
Personal Assistant for User: ${state.args.userId}
${userName ? `Name: ${userName}` : "Name: Unknown (ask for their name!)"}
Conversations: ${conversationCount}
${lastTopic ? `Last topic: ${lastTopic}` : ""}
${
Object.keys(preferences).length > 0
? `Preferences: ${JSON.stringify(preferences, null, 2)}`
: "No preferences saved yet"
}
`.trim();
},
// Instructions that guide the assistant's behavior
instructions: `You are a personal assistant with memory. You should:
- Remember information about the user across conversations
- Ask for their name if you don't know it
- Learn their preferences over time
- Reference previous conversations when relevant
- Be helpful and personalized based on what you know`,
// Track conversation count
onRun: async (ctx) => {
ctx.memory.conversationCount++;
},
});
// Add actions the assistant can perform
assistantContext.setActions([
action({
name: "remember-name",
description: "Remember the user's name",
schema: z.object({
name: z.string().describe("The user's name"),
}),
handler: async ({ name }, ctx) => {
ctx.memory.userName = name;
return {
remembered: true,
message: `I'll remember your name is ${name}`,
};
},
}),
action({
name: "update-topic",
description: "Remember what we're discussing",
schema: z.object({
topic: z.string().describe("Current conversation topic"),
}),
handler: async ({ topic }, ctx) => {
ctx.memory.lastTopic = topic;
return { updated: true };
},
}),
]);
// Create the agent
const agent = createDreams({
model: openai("gpt-4o-mini"),
extensions: [cliExtension],
contexts: [assistantContext],
});
// Start the interactive CLI
async function main() {
await agent.start();
console.log("\n🤖 Personal Assistant Started!");
console.log("💡 Try telling me your name or preferences.");
console.log("💡 Exit and restart - I'll still remember you!\n");
// Simulate different users with different context instances
const userId = process.argv[2] || "default-user";
console.log(`Starting session for user: ${userId}\n`);
// Run the assistant for this specific user
await agent.run({
context: assistantContext,
args: { userId }, // This creates/loads a unique context instance
});
console.log("\n👋 See you next time!");
}
main().catch(console.error);
```
### Step 3: Experience the magic of contexts
Run your agent:
```bash title="run.sh"
# Start as the default user
npx tsx agent.ts
# Or start as a specific user
npx tsx agent.ts alice
npx tsx agent.ts bob
```
Try this conversation flow:
```text title="example-conversation.txt"
You: Hi there!
Assistant: Hello! I don't think we've been properly introduced. What's your name?
You: I'm Alice
Assistant: Nice to meet you, Alice! I'll remember that for next time.
You: I love coffee and hate mornings
Assistant: I've noted your preferences! You love coffee and hate mornings.
You: What do you know about me?
Assistant: I know your name is Alice, you love coffee, and you hate mornings.
We've had 1 conversation so far.
[Exit and restart the agent with 'npx tsx agent.ts alice']
You: Do you remember me?
Assistant: Of course, Alice! I remember you love coffee and hate mornings.
This is our 2nd conversation together.
```
### What Just Happened?
1. **Context Creation** - When you started with user "alice", Daydreams created
a unique context instance
2. **Memory Persistence** - The context saved Alice's name and preferences
3. **Isolation** - If you run `npx tsx agent.ts bob`, Bob gets a completely
separate context
4. **Stateful Behavior** - The agent's responses are personalized based on
context memory
## Understanding Context Power
Let's see how contexts solve real problems:
### Problem 1: Multi-User Support
```typescript title="multi-user-contexts.ts"
// Each user automatically gets their own isolated context
await agent.run({
context: assistantContext,
args: { userId: "alice" }, // Alice's personal workspace
});
await agent.run({
context: assistantContext,
args: { userId: "bob" }, // Bob's separate workspace
});
// Alice and Bob never see each other's data!
```
### Problem 2: Different Behaviors for Different Situations
```typescript title="multiple-context-types.ts"
// Different contexts for different purposes
const casualChatContext = context({
type: "casual-chat",
instructions: "Be friendly and conversational",
});
const technicalSupportContext = context({
type: "tech-support",
instructions: "Be precise and solution-focused",
});
const salesContext = context({
type: "sales",
instructions: "Be helpful but also mention relevant products",
});
// Same agent, different personalities based on context!
```
### Problem 3: Complex State Management
```typescript title="complex-state.ts"
interface GameMemory {
level: number;
score: number;
inventory: string[];
currentRoom: string;
}
const gameContext = context({
type: "adventure-game",
create: (): GameMemory => ({
level: 1,
score: 0,
inventory: ["torch"],
currentRoom: "entrance",
}),
// Game state persists between sessions!
});
```
## What You Built
Your agent demonstrates the three key Daydreams concepts. The snippets below
illustrate them with a customer-service scenario:
### 1. **Context Isolation**
Each customer gets their own workspace:
```typescript
// Different customers = different context instances
args: {
customerId: "CUST001";
} // Alice's data
args: {
customerId: "CUST002";
} // Bob's data (completely separate)
```
### 2. **Context Composition**
Combine contexts with `.use()`:
```typescript
use: (state) => [
{
context: accountContext,
args: { customerId: state.args.customerId }, // Share customer ID
},
];
```
The LLM automatically sees both customer support AND account data.
### 3. **Action Scoping**
Actions are available based on active contexts, as sketched after this list:
* `save-customer-info`, `create-ticket` → Only in customer context
* `link-account`, `check-balance` → Only in account context
* Global actions → Available everywhere
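Here's a minimal sketch of that scoping; the `accountContext` shape, its memory
fields, and the handler bodies are illustrative placeholders rather than the
full example:
```typescript title="action-scoping-sketch.ts"
import { context, action } from "@daydreamsai/core";
import * as z from "zod";

// Hypothetical account context: its actions are only visible
// to the LLM while this context is active
const accountContext = context({
  type: "account",
  schema: z.object({ accountId: z.string() }),
  create: () => ({ balance: 0 }),
}).setActions([
  action({
    name: "check-balance",
    description: "Checks the linked account's balance",
    schema: z.object({}),
    handler: async (_args, ctx) => ({ balance: ctx.memory.balance }),
  }),
]);

// A global action (passed to createDreams({ actions: [...] }))
// is available in every context
const getCurrentTime = action({
  name: "get-current-time",
  description: "Gets the current time",
  handler: async () => ({ time: new Date().toISOString() }),
});
```
While only the customer context is active, `check-balance` never appears in the
LLM's list of available actions.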
## Test the Complete Flow
```bash
npx tsx agent.ts
```
Try this conversation (it assumes you've extended the agent with the
customer-service contexts and actions shown above):
```text
You: Hi, my name is Sarah Johnson, email sarah@email.com
Agent: [Uses save-customer-info action]
You: I need help with billing. My account is ACC12345
Agent: [Uses link-account action, then can use check-balance]
You: Create a ticket for my billing issue
Agent: [Uses create-ticket action]
You: Thanks, that's resolved now
Agent: [Uses resolve-ticket action]
```
## Key Concepts Learned
* **Contexts** provide isolated, stateful workspaces
* **`.use()`** composes contexts for modular functionality
* **`.setActions()`** scopes actions to specific contexts
* **Memory persists** between conversations for the same context args
* **LLM sees all** active context data and available actions
## Next Steps
* [Contexts](/docs/core/concepts/contexts) - Deep dive into context patterns
* [Building Blocks](/docs/core/concepts/building-blocks) - Understand actions,
inputs, outputs
* [Extensions](/docs/core/concepts/extensions) - Package functionality for reuse
* [Building Block Operations](/docs/core/concepts/building-block-operations) -
Common patterns
**Challenge:** Add a third context for order management using `.use()` and give
it order-specific actions!
file: ./content/docs/core/index.mdx
meta: {
"title": "Daydreams Framework",
"description": "TypeScript framework for building stateful AI agents with composable contexts."
}
> ⚠️ **Alpha Software**: Expect breaking changes. API not yet stable.
## What Makes Daydreams Different?
**Composable Contexts** - the key innovation that sets Daydreams apart.
Most AI frameworks treat conversations as stateless. Daydreams provides **isolated, stateful workspaces** that can be composed together for complex behaviors:
```typescript
// Single context
const chatContext = context({ type: "chat" });
// Composed contexts - combine functionality
const customerServiceContext = context({ type: "customer-service" })
.use(state => [
{ context: accountContext, args: { customerId: state.args.customerId } },
{ context: ticketContext, args: { customerId: state.args.customerId } }
]);
```
**Result**: The LLM automatically gets access to chat, account, AND ticket data in a single conversation.
## Framework Features
| Feature | Description | Benefit |
| ------------------------ | ------------------------------------------------- | ---------------------------------------- |
| **Composable Contexts** | Combine isolated workspaces with `.use()` | Modular, reusable agent behaviors |
| **Stateful Memory** | Persistent memory per context instance | Agents that truly remember conversations |
| **Action Scoping** | Context-specific capabilities via `.setActions()` | Precise control over agent abilities |
| **Multi-User Isolation** | Separate context instances per user/session | Secure, scalable multi-tenant support |
| **Real-time Streaming** | XML-based LLM response parsing | Immediate action execution |
| **TypeScript-first** | Full type safety across all components | Better developer experience |
| **Model Agnostic** | Works with any AI SDK provider | Flexibility in model choice |
| **Extension System** | Pre-built integrations (Discord, Twitter, etc.) | Rapid development |
## Architecture Overview
Daydreams agents are built from four core components:
```typescript
// Building blocks work together
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [customerContext], // Stateful workspaces
actions: [linkAccount], // What agent can do
inputs: [discordMessage], // How to listen
outputs: [emailReply], // How to respond
});
```
**Flow**: Input triggers agent → LLM reasons with context data → Actions execute → Outputs communicate → Memory persists
## Get Started
### 🚀 **Quickstart**
[Your First Agent](/docs/core/first-agent) - Build a customer service agent that showcases contexts, composition, and action scoping
### 📚 **Learn the Concepts**
* [Building Blocks](/docs/core/concepts/building-blocks) - Core components overview
* [Contexts](/docs/core/concepts/contexts) - Stateful workspaces and composition patterns
* [Actions, Inputs, Outputs](/docs/core/concepts/building-block-operations) - Agent capabilities
* [Agent Lifecycle](/docs/core/concepts/agent-lifecycle) - How agents process information
### 🏗️ **System Architecture**
* [Services](/docs/core/concepts/services) - Infrastructure management
* [Extensions](/docs/core/concepts/extensions) - Feature packaging
* [Extensions vs Services](/docs/core/concepts/extensions-vs-services) - Decision guide
### 🔧 **Advanced Topics**
* [Context Composition](/docs/core/concepts/composing-contexts) - Advanced patterns
* [Prompting](/docs/core/concepts/prompting) - LLM interaction structure
* [MCP Integration](/docs/core/concepts/mcp) - External service connections
### 💻 **Examples & Tutorials**
* [Installation Guide](/docs/core/installation) - Development setup
* [Example Agents](/docs/tutorials/examples) - Complete working examples
* [API Reference](/docs/api) - Detailed documentation
file: ./content/docs/core/installation.mdx
meta: {
"title": "Installation",
"description": "Set up Daydreams and configure your development environment."
}
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Step, Steps } from "fumadocs-ui/components/steps";
## Get an API key
* [DreamsRouter](https://router.daydreams.systems) -> `DREAMSROUTER_API_KEY`
* [Google Gemini](https://console.cloud.google.com/gemini) -> `GEMINI_API_KEY`
* [OpenAI](https://platform.openai.com/docs/api-reference) -> `OPENAI_API_KEY`
* [Anthropic](https://docs.anthropic.com/en/api/getting-started) ->
`ANTHROPIC_API_KEY`
* [Groq](https://docs.groq.com/docs/api-reference) -> `GROQ_API_KEY`
* Other providers are supported by
[ai-sdk.dev](https://ai-sdk.dev/docs/foundations/providers-and-models)
## Installation
There are two ways to get started with Daydreams: scaffold a project with
`create-agent`, or set everything up manually.
### Option 1: create-agent (quickest)
Run the create-agent command:
```bash title="create-agent.sh"
npx @daydreamsai/create-agent my-agent
```
This will:
* Create a new directory for your agent
* Set up package.json with necessary dependencies
* Create an index.ts file with your selected extensions
* Generate a .env.example file with required environment variables
* Install all dependencies
Choose your extensions when prompted (or use flags):
```bash title="create-agent-with-extensions.sh"
# With specific extensions
npx @daydreamsai/create-agent my-agent --twitter --discord --cli
# With all extensions
npx @daydreamsai/create-agent my-agent --all
```
Available extensions:
* `--cli`: Include CLI extension
* `--twitter`: Include Twitter extension
* `--discord`: Include Discord extension
* `--telegram`: Include Telegram extension
* `--all`: Include all extensions
Copy the generated `.env.example` to `.env`, add your API keys, and start
building!
### Option 2: Manual setup
For more control over your setup, you can install manually:
Initialize your project and install core packages:
```bash title="package-install.sh"
pnpm init -y
pnpm add typescript tsx @types/node @daydreamsai/core @daydreamsai/cli
```
```bash title="package-install.sh"
npm init -y
npm install typescript tsx @types/node @daydreamsai/core @daydreamsai/cli
```
```bash title="package-install.sh"
bun init -y
bun add typescript tsx @types/node @daydreamsai/core @daydreamsai/cli
```
```bash title="package-install.sh"
yarn init -y
yarn add typescript tsx @types/node @daydreamsai/core @daydreamsai/cli
```
Install an LLM provider SDK:
```bash title="package-install.sh"
pnpm add @ai-sdk/openai
```
```bash title="package-install.sh"
npm install @ai-sdk/openai
```
```bash title="package-install.sh"
bun add @ai-sdk/openai
```
```bash title="package-install.sh"
yarn add @ai-sdk/openai
```
Other supported providers from [ai-sdk.dev](https://ai-sdk.dev/):
* `@ai-sdk/anthropic` for Claude
* `@ai-sdk/google` for Gemini
* And many more...
Create your environment file:
```bash title="bash"
# Create .env file
cp .env.example .env
```
Add your API keys:
```bash title=".env"
OPENAI_API_KEY=your_openai_api_key_here
# ANTHROPIC_API_KEY=your_anthropic_api_key_here
# GEMINI_API_KEY=your_gemini_api_key_here
```
Create your first agent file (`index.ts`):
```typescript title="index.ts"
import { createDreams, LogLevel } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
import { openai } from "@ai-sdk/openai";
const agent = createDreams({
logLevel: LogLevel.DEBUG,
model: openai("gpt-4o"),
extensions: [cliExtension],
});
// Start the agent
await agent.start();
```
Add run scripts to your `package.json`:
```json title="package.json"
{
"scripts": {
"dev": "tsx index.ts",
"start": "node index.js"
}
}
```
Run your agent:
```bash title="run-dev.sh"
pnpm dev
```
```bash title="run-dev.sh"
npm run dev
```
```bash title="run-dev.sh"
bun dev
```
```bash title="run-dev.sh"
yarn dev
```
## Next Steps
Now you can start building your first agent! Check out the
[concepts](/docs/core/concepts) section to learn about the core building
blocks.
file: ./content/docs/router/dreams-sdk.mdx
meta: {
"title": "Dreams SDK Integration",
"description": "Dreams Router provider for Vercel AI SDK with built‑in x402 payments"
}
## Dreams Router Provider for Vercel AI SDK
Dreams Router is an AI model router with built‑in x402 payments for the
[Vercel AI SDK](https://sdk.vercel.ai/docs). It supports EVM and Solana, API
keys or wallet auth, and auto‑generates exact payment headers from server
requirements.
### Key Features
* Payment‑integrated AI using the x402 protocol (USDC)
* Multiple auth methods: API key, JWT, or inline payments
* Unified LLM router across providers and models
* Manage your account at
[router.daydreams.systems](https://router.daydreams.systems)
## Installation
```bash
npm install @daydreamsai/ai-sdk-provider viem x402
```
## Quick Start
### Separated Auth (EVM/Solana helpers)
```ts
import { generateText } from 'ai';
import {
createEVMAuthFromPrivateKey,
createSolanaAuthFromPublicKey,
} from '@daydreamsai/ai-sdk-provider';
// EVM (Ethereum, Base, etc.)
const { dreamsRouter } = await createEVMAuthFromPrivateKey(
process.env.EVM_PRIVATE_KEY as `0x${string}`,
{
payments: { network: 'base-sepolia' },
}
);
// Solana (browser/wallet-style: publicKey + signMessage)
const { dreamsRouter: solanaRouter } = await createSolanaAuthFromPublicKey(
process.env.SOL_PUBLIC_KEY!,
  async ({ message }) => wallet.signMessage(message), // `wallet` is your wallet adapter
{
payments: {
network: 'solana-devnet',
rpcUrl: 'https://api.devnet.solana.com',
},
}
);
const { text } = await generateText({
model: dreamsRouter('google-vertex/gemini-2.5-flash'),
prompt: 'Hello from Dreams Router!',
});
```
Why separated helpers?
* Type safety per chain; chain‑specific options stay clear
* Explicit intent (EVM vs Solana), smaller bundles
### API Key Auth
```ts
import { createDreamsRouter } from '@daydreamsai/ai-sdk-provider';
import { generateText } from 'ai';
const dreamsRouter = createDreamsRouter({
apiKey: process.env.DREAMSROUTER_API_KEY,
});
const { text } = await generateText({
model: dreamsRouter('google-vertex/gemini-2.5-flash'),
prompt: 'Hello, Dreams Router!',
});
```
### Namespace .evm / .solana (Node)
```ts
import {
createDreamsRouter,
type SolanaSigner,
} from '@daydreamsai/ai-sdk-provider';
import { privateKeyToAccount } from 'viem/accounts';
// EVM via viem Account
const evm = createDreamsRouter.evm(
privateKeyToAccount(process.env.EVM_PRIVATE_KEY as `0x${string}`),
{ network: 'base-sepolia' }
);
// Solana via Node signer (base58 secret)
const solana = createDreamsRouter.solana(
{
type: 'node',
secretKeyBase58: process.env.SOLANA_SECRET_KEY!,
rpcUrl: process.env.SOLANA_RPC_URL,
},
{ network: 'solana-devnet' }
);
```
## Authentication Methods
* x402 payments (wallet‑based, EVM or Solana)
* API key
* Session token (JWT) from wallet login
## Configuration
### Environment
```bash
# API key auth
DREAMSROUTER_API_KEY=...
# EVM auth
EVM_PRIVATE_KEY=0x...
# Solana (Node signer)
SOLANA_SECRET_KEY=base58-encoded-64-byte-secret
SOLANA_RPC_URL=https://api.devnet.solana.com
# Solana (wallet-style)
SOL_PUBLIC_KEY=...
ROUTER_BASE_URL=https://api-beta.daydreams.systems
```
## Advanced
### Payment config (auto‑requirements)
```ts
type DreamsRouterPaymentConfig = {
network?: 'base' | 'base-sepolia' | 'solana' | 'solana-devnet';
validityDuration?: number; // default 600s
mode?: 'lazy' | 'eager'; // default 'lazy'
rpcUrl?: string; // Solana only
};
```
Amounts and pay‑to addresses come from the router’s 402 response and are signed
automatically; you do not set them manually.
### Solana signer interface (Node)
```ts
type SolanaSigner = {
type: 'node';
secretKeyBase58: string; // 64‑byte secret, base58
rpcUrl?: string;
};
```
### Model selection
Use any model available in the dashboard; e.g.
`google-vertex/gemini-2.5-flash`.
## Links
* Dreams Router Dashboard: [https://router.daydreams.systems](https://router.daydreams.systems)
* x402 Protocol: [https://github.com/x402](https://github.com/x402)
* Vercel AI SDK: [https://sdk.vercel.ai/docs](https://sdk.vercel.ai/docs)
## Resources
* [Vercel AI SDK Documentation](https://sdk.vercel.ai/docs)
* [Daydreams Core Documentation](/docs/core)
* [Complete Examples](https://github.com/daydreamsai/daydreams/tree/main/examples)
* [x402 Payment Protocol](https://x402.dev)
file: ./content/docs/router/index.mdx
meta: {
"title": "Daydreams Router",
"description": "AI model routing with authentication and payments"
}
# Router
The Daydreams Router acts as an intelligent gateway between your application and
AI models, providing unified access to multiple AI providers through a single
API.
🌐 **Live Service**:
[router.daydreams.systems](https://router.daydreams.systems)
## Key Features
* **Unified Interface**: Single API for OpenAI, Anthropic, Google, and more
* **Model Routing**: Automatic selection and fallback between providers
* **Dual Authentication**: API keys or x402 USDC micropayments
* **OpenAI Compatibility**: Works with existing OpenAI SDK clients
* **Cost Tracking**: Monitor usage across all providers
## How It Works
The router standardizes interactions across AI providers:
1. **Request Reception**: Accepts OpenAI-format requests
2. **Provider Translation**: Converts to provider-specific formats
3. **Response Normalization**: Returns standardized OpenAI-format responses
4. **Error Handling**: Automatic retries and provider fallbacks
## Architecture
```
Your App → Dreams Router → Provider (OpenAI/Anthropic/Google/etc)
↓
Auth Layer (API Key or x402)
↓
Response Standardization
```
## Getting Started
* [Quickstart Guide](./quickstart) - Make your first API call in 5 minutes
* [Dreams SDK Integration](./dreams-sdk) - Use with Vercel AI SDK and Daydreams
* [API Reference](https://router.daydreams.systems/docs) - Complete endpoint
documentation
## Example Applications
### Nanoservice (Paid AI Assistant)
The
[nanoservice example](https://github.com/daydreamsai/daydreams/tree/main/examples/x402/nanoservice)
demonstrates a complete AI service with x402 payments:
```typescript
// Payment-enabled AI assistant
const { dreamsRouter } = await createDreamsRouterAuth(account, {
payments: {
amount: "100000", // $0.10 USDC per request
network: "base-sepolia",
},
});
const agent = createDreams({
model: dreamsRouter("google-vertex/gemini-2.5-flash"),
contexts: [assistantContext],
});
```
## Next Steps
* Start with the [Quickstart Guide](./quickstart)
* Explore [Dreams SDK integration](./dreams-sdk) for TypeScript projects
* Learn about [x402 payments](https://www.x402.org/) for micropayment
integration
* View
[complete examples](https://github.com/daydreamsai/daydreams/tree/main/examples)
file: ./content/docs/router/quickstart.mdx
meta: {
"title": "Quickstart",
"description": "Get started with Daydreams Router in under 5 minutes"
}
# Quickstart Guide
Make your first API call to the Dreams Router in under 5 minutes.
## Authentication Methods
### API Key
Add your API key to request headers:
```bash
Authorization: Bearer YOUR_API_KEY
```
Get your key from [router.daydreams.systems](https://router.daydreams.systems).
### x402 Payments (Pay-per-use)
Instead of an API key, you can pay per request using USDC micropayments via the `X-Payment` header:
```javascript
import { generateX402Payment } from "@daydreamsai/ai-sdk-provider";
import { privateKeyToAccount } from "viem/accounts";
const account = privateKeyToAccount("0x...your-private-key");
// Generate x402-compliant payment header
const paymentHeader = await generateX402Payment(account, {
amount: "100000", // $0.10 USDC (6 decimals)
network: "base-sepolia", // or "base" for mainnet
});
// Make request with X-Payment header
const response = await fetch(
"https://router.daydreams.systems/v1/chat/completions",
{
method: "POST",
headers: {
"Content-Type": "application/json",
"X-Payment": paymentHeader, // x402-compliant payment
},
body: JSON.stringify({
model: "google-vertex/gemini-2.5-flash",
messages: [{ role: "user", content: "Hello!" }],
}),
}
);
```
For browser/wagmi environments:
```javascript
import { generateX402PaymentBrowser } from "@daydreamsai/ai-sdk-provider";
import { useAccount, useSignTypedData } from "wagmi";
const { address } = useAccount();
const { signTypedDataAsync } = useSignTypedData();
const paymentHeader = await generateX402PaymentBrowser(
address,
signTypedDataAsync,
{ amount: "100000", network: "base-sepolia" }
);
// Use paymentHeader in X-Payment header
```
## Making Your First Request
The primary endpoint for AI completions is `/v1/chat/completions`. Here's a
simple example using curl:
```bash
curl -X POST https://router.daydreams.systems/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "google-vertex/gemini-2.5-flash",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
],
"stream": false
}'
```
## Request Schema
The completions endpoint accepts the following parameters:
### Required Fields
* **`model`** (string): The model identifier (e.g.,
  `google-vertex/gemini-2.5-flash`); list available IDs via the `/v1/models`
  endpoint
* **`messages`** (array): An array of message objects representing the
conversation
### Message Object Structure
Each message in the `messages` array must have:
* **`role`** (string): One of "system", "user", or "assistant"
* **`content`** (string): The text content of the message
Example:
```json
{
"role": "user",
"content": "What is the capital of France?"
}
```
### Optional Fields
* **`stream`** (boolean): Enable streaming responses (default: false)
* **`temperature`** (number): Controls randomness (0.0 to 2.0, default: 1.0)
* **`max_tokens`** (number): Maximum tokens to generate
* **`top_p`** (number): Nucleus sampling parameter (0.0 to 1.0)
* **`frequency_penalty`** (number): Reduce repetition (-2.0 to 2.0)
* **`presence_penalty`** (number): Encourage new topics (-2.0 to 2.0)
* **`stop`** (string or array): Stop sequences
* **`user`** (string): Unique identifier for end-user tracking
### Complete Request Example
```json
{
"model": "google-vertex/gemini-2.5-flash",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Explain quantum computing in simple terms."
}
],
"temperature": 0.7,
"max_tokens": 500,
"stream": false
}
```
## Response Format
Dreams Router standardizes all responses to the OpenAI format, regardless of the
underlying provider:
```json
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "google-vertex/gemini-2.5-flash",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Quantum computing is a type of computing that uses quantum mechanical phenomena..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 150,
"total_tokens": 170
}
}
```
### Response Fields
* **`id`**: Unique identifier for the completion
* **`object`**: Always "chat.completion" for non-streaming responses
* **`created`**: Unix timestamp of when the completion was created
* **`model`**: The model used for the completion
* **`choices`**: Array of completion choices (usually one)
* **`index`**: Position in the choices array
* **`message`**: The generated message with role and content
* **`finish_reason`**: Why generation stopped ("stop", "length",
"content\_filter", etc.)
* **`usage`**: Token usage statistics for billing and monitoring
## Streaming Responses
For real-time responses, enable streaming by setting `stream: true`:
```bash
curl -X POST https://router.daydreams.systems/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "google-vertex/gemini-2.5-flash",
"messages": [{"role": "user", "content": "Write a story"}],
"stream": true
}'
```
Streaming responses are sent as Server-Sent Events (SSE) with the following
format:
```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"google-vertex/gemini-2.5-flash","choices":[{"index":0,"delta":{"content":"Once"},"finish_reason":null}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"google-vertex/gemini-2.5-flash","choices":[{"index":0,"delta":{"content":" upon"},"finish_reason":null}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"google-vertex/gemini-2.5-flash","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
data: [DONE]
```
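Here's a minimal TypeScript sketch for consuming this stream with `fetch`
(assumes Node 18+; the line buffering is simplified):
```typescript
const res = await fetch("https://router.daydreams.systems/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.DREAMSROUTER_API_KEY}`,
  },
  body: JSON.stringify({
    model: "google-vertex/gemini-2.5-flash",
    messages: [{ role: "user", content: "Write a story" }],
    stream: true,
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
  for (const line of lines) {
    if (!line.startsWith("data:")) continue;
    const data = line.slice(5).trim();
    if (!data || data === "[DONE]") continue;
    const chunk = JSON.parse(data);
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}
```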
## Available Models
Dreams Router supports models from multiple providers including OpenAI,
Anthropic, Google, Groq, xAI, Moonshot, and Cerebras.
To get a complete, up-to-date list of available models with their capabilities
and pricing, use the `/v1/models` endpoint:
```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://api-beta.daydreams.systems/v1/models
```
## Error Handling
Common HTTP status codes:
* `400`: Bad Request (invalid parameters)
* `401`: Unauthorized (missing or invalid API key)
* `402`: Payment Required (insufficient balance)
* `404`: Not Found (invalid model)
* `429`: Too Many Requests (rate limit exceeded)
* `500`: Internal Server Error
## Rate Limiting
API requests are rate-limited per user. Rate limit information is included in
response headers:
* `X-RateLimit-Limit`: Maximum requests per window
* `X-RateLimit-Remaining`: Remaining requests in current window
* `X-RateLimit-Reset`: Unix timestamp when the window resets
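For example, a client can inspect these headers and back off before hitting the
limit (a sketch; the `/v1/models` endpoint is used here only as a cheap
request):
```typescript
const res = await fetch("https://router.daydreams.systems/v1/models", {
  headers: { Authorization: `Bearer ${process.env.DREAMSROUTER_API_KEY}` },
});

// Header lookups are case-insensitive with the Fetch API
const remaining = Number(res.headers.get("X-RateLimit-Remaining") ?? Infinity);
const resetAt = Number(res.headers.get("X-RateLimit-Reset") ?? 0);

if (remaining === 0) {
  // Wait until the window resets before sending more requests
  const waitMs = Math.max(0, resetAt * 1000 - Date.now());
  await new Promise((resolve) => setTimeout(resolve, waitMs));
}
```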
## SDK Integration
### OpenAI SDK Compatible
```python
# Python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://router.daydreams.systems/v1"
)
response = client.chat.completions.create(
model="google-vertex/gemini-2.5-flash",
messages=[{"role": "user", "content": "Hello!"}]
)
```
```javascript
// JavaScript
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "YOUR_API_KEY",
baseURL: "https://router.daydreams.systems/v1",
});
const response = await client.chat.completions.create({
model: "google-vertex/gemini-2.5-flash",
messages: [{ role: "user", content: "Hello!" }],
});
```
### Dreams SDK Integration
For TypeScript projects using Vercel AI SDK, see the
[Dreams SDK guide](./dreams-sdk) for detailed integration instructions with
payment support.
## Cost Tracking
Each request incurs costs based on the model used and tokens processed. The
response includes usage information for tracking:
* **Prompt tokens**: Tokens in your input messages
* **Completion tokens**: Tokens in the generated response
* **Total tokens**: Sum of prompt and completion tokens
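For example, using the OpenAI-compatible client from the previous section, you
can accumulate `usage` across calls (a sketch):
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.DREAMSROUTER_API_KEY,
  baseURL: "https://router.daydreams.systems/v1",
});

let totalTokens = 0;

const completion = await client.chat.completions.create({
  model: "google-vertex/gemini-2.5-flash",
  messages: [{ role: "user", content: "Hello!" }],
});

// usage is reported on every non-streaming response
totalTokens += completion.usage?.total_tokens ?? 0;
console.log(`Tokens used so far: ${totalTokens}`);
```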
## Best Practices
1. **Use System Messages**: Include a system message to set the AI's behavior
and context
2. **Set Max Tokens**: Always specify `max_tokens` to control costs and response
length
3. **Handle Streaming**: For long responses, use streaming to improve user
experience
4. **Implement Retries**: Add exponential backoff for transient errors (see the
   sketch after this list)
5. **Monitor Usage**: Track your token usage to manage costs effectively
6. **Cache Responses**: Consider caching responses for repeated queries
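For best practice 4, here's a minimal exponential-backoff sketch that retries
transient errors (429 and 5xx):
```typescript
async function fetchWithRetries(
  url: string,
  init: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    const transient = res.status === 429 || res.status >= 500;
    if (!transient || attempt >= maxRetries) return res;
    // Back off: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
  }
}
```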
## Next Steps
* [Dreams SDK Integration](./dreams-sdk) - TypeScript integration with Vercel AI
SDK
* [API Reference](https://router.daydreams.systems/docs) - Complete endpoint
documentation
* **Model Catalog** - Query the `/v1/models` endpoint to view all available
  models and pricing
* [Examples](https://github.com/daydreamsai/daydreams/tree/main/examples) -
Working implementations
file: ./content/docs/tutorials/index.mdx
meta: {
"title": "Tutorials",
"description": "Hands-on examples and guides for building Daydreams agents."
}
## Getting Started
New to Daydreams? Start with the core tutorial that teaches contexts, composition, and action scoping:
* **[Your First Agent](/docs/core/first-agent)** - Build a customer service agent step-by-step
## Basic Concepts
Learn fundamental patterns through practical examples:
* **[Starting Agent](/docs/tutorials/basic/starting-agent)** - Minimal agent setup
* **[Single Context](/docs/tutorials/basic/single-context)** - Working with one context
* **[Multi-Context Agent](/docs/tutorials/basic/multi-context-agent)** - Context composition patterns
## Platform Integration
### MCP (Model Context Protocol)
Integrate with external tools and services:
* **[MCP Guide](/docs/tutorials/mcp/mcp-guide)** - Introduction to MCP integration
* **[Blender Integration](/docs/tutorials/mcp/blender)** - Connect with Blender via MCP
* **[Multi-Server Setup](/docs/tutorials/mcp/multi-server)** - Managing multiple MCP servers
### x402 Nanoservices
Build autonomous service agents:
* **[x402 Server](/docs/tutorials/x402/server)** - Set up x402 server infrastructure
* **[Nanoservice Agents](/docs/tutorials/x402/nanoservice)** - Create autonomous service agents
## Learning Path
**Recommended order for new developers:**
1. [Your First Agent](/docs/core/first-agent) - Core concepts
2. [Multi-Context Agent](/docs/tutorials/basic/multi-context-agent) - Advanced patterns
3. Choose platform: [MCP](/docs/tutorials/mcp/mcp-guide) or [x402](/docs/tutorials/x402/server)
## Need Help?
* [Core Concepts](/docs/core/concepts) - Framework fundamentals
* [Building Blocks](/docs/core/concepts/building-blocks) - Component reference
* [Discord Community](https://discord.gg/rt8ajxQvXh) - Get support
file: ./content/docs/core/concepts/actions.mdx
meta: {
"title": "Actions",
"description": "Define capabilities and interactions for your Daydreams agent."
}
## What is an Action?
An action is something your agent can **do** - like calling an API, saving data, or performing calculations. Actions are the bridge between LLM reasoning and real-world functionality.
For common patterns like schema validation, error handling, and external service integration, see [Building Block Operations](/docs/core/concepts/building-block-operations).
## Actions vs Inputs/Outputs
Understanding the difference is crucial:
| Building Block | Purpose | When LLM Uses It | Returns Data |
| -------------- | ---------------------------- | --------------------------------------- | ------------------------------- |
| **Actions** | Get data, perform operations | When it needs information for reasoning | ✅ Yes - LLM uses results |
| **Inputs** | Listen for external events | Never - inputs trigger the agent | ❌ No - triggers conversation |
| **Outputs** | Communicate results | When it wants to respond/notify | ❌ No - final communication step |
### Common Pattern: Actions → Outputs
```xml title="action-then-output-flow.xml"
{"city": "Boston"}
Weather in Boston: {{calls[0].temperature}}, {{calls[0].condition}}
```
## Creating Your First Action
Here's a simple calculator action:
```typescript title="calculator-action.ts"
import { action } from "@daydreamsai/core";
import * as z from "zod";
export const addNumbers = action({
// Name the LLM uses to call this action
name: "add-numbers",
// Description helps LLM know when to use it
description: "Adds two numbers together",
// Schema defines what arguments are required
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
// Handler is your actual code that runs
handler: async ({ a, b }) => {
const result = a + b;
return {
sum: result,
message: `${a} + ${b} = ${result}`,
};
},
});
```
Use it in your agent:
```typescript title="agent-with-calculator.ts"
import { createDreams } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
import { addNumbers } from "./calculator-action";
const agent = createDreams({
model: openai("gpt-4o"),
actions: [addNumbers],
});
// When user asks "What's 5 + 3?":
// 1. LLM calls addNumbers action with {a: 5, b: 3}
// 2. Gets back {sum: 8, message: "5 + 3 = 8"}
// 3. Responds with the calculation result
```
## Action Scoping and Context Integration
### Global vs Context-Specific Actions
Actions can be scoped at different levels for precise control:
```typescript title="action-scoping.ts"
import { context, action, createDreams } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
import * as z from "zod";
// Global actions - available in ALL contexts
const globalTimeAction = action({
name: "get-current-time",
description: "Gets the current time",
handler: async () => ({ time: new Date().toISOString() }),
});
// Context-specific actions - only available when context is active
const chatContext = context({
type: "chat",
schema: z.object({ userId: z.string() }),
create: () => ({ messages: [], preferences: {} }),
}).setActions([
action({
name: "save-preference",
description: "Saves a chat preference",
schema: z.object({ key: z.string(), value: z.string() }),
handler: async ({ key, value }, ctx) => {
ctx.memory.preferences[key] = value;
return { saved: true, preference: { key, value } };
},
}),
]);
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [chatContext],
actions: [globalTimeAction], // Available in all contexts
});
```
### Action Availability During Execution
When a context is active, the LLM sees:
* **Global actions** (defined at agent level)
* **Context-specific actions** (from the active context via `.setActions()`)
* **Composed context actions** (from contexts included via `.use()`)
```typescript title="action-composition-example.ts"
const analyticsContext = context({
type: "analytics",
schema: z.object({ userId: z.string() }),
}).setActions([
action({
name: "track-event",
description: "Track user event",
schema: z.object({ event: z.string() }),
handler: async ({ event }, ctx) => {
return { tracked: true, event };
},
}),
]);
const mainContext = context({
type: "main",
schema: z.object({ userId: z.string() }),
})
.use((state) => [
{ context: analyticsContext, args: { userId: state.args.userId } }
])
.setActions([
action({
name: "main-action",
description: "Main context action",
handler: async () => ({ result: "main" }),
}),
]);
// When mainContext is active, LLM can use:
// ✅ Global actions from agent
// ✅ main-action from mainContext
// ✅ track-event from composed analyticsContext
```
### Cross-Context Communication
Actions can access other contexts through the agent instance:
```typescript title="cross-context-actions.ts"
const syncUserData = action({
name: "sync-user-data",
description: "Syncs data between user contexts",
schema: z.object({ targetUserId: z.string() }),
handler: async ({ targetUserId }, ctx) => {
// Access another context's state
const otherContext = await ctx.agent.getContext({
context: chatContext,
args: { userId: targetUserId }
});
// Read from other context and update current context
const otherPrefs = otherContext.memory.preferences;
ctx.memory.syncedPreferences = otherPrefs;
return {
synced: true,
preferences: otherPrefs,
fromContext: otherContext.id,
};
},
});
```
## Key Takeaways
* **Actions enable capabilities** - Bridge between LLM reasoning and real-world functionality
* **Return data to LLM** - Unlike outputs, actions provide data for further reasoning
* **Scoped availability** - Global actions vs context-specific via `.setActions()`
* **Context composition** - Composed contexts contribute their actions automatically
* **Cross-context communication** - Access other contexts through the agent instance
* **Memory access** - Read and modify context memory with automatic persistence
* **Template resolution** - LLM can reference previous results with `{{calls[0].data}}`
For schema validation, error handling, and external service patterns, see [Building Block Operations](/docs/core/concepts/building-block-operations).
file: ./content/docs/core/concepts/agent-lifecycle.mdx
meta: {
"title": "Agent Lifecycle",
"description": "How Daydreams agents process information and execute tasks."
}
## Simple Overview
Think of an agent as following a simple loop:
1. **Something happens** (input arrives)
2. **Agent thinks** (uses LLM to decide what to do)
3. **Agent acts** (performs actions or sends responses)
4. **Agent remembers** (saves what happened)
5. **Repeat**
This loop continues as long as the agent is running, handling new inputs and
responding intelligently based on its context and memory.
## Agent Boot Process
Before handling any inputs, agents go through an initialization phase:
```typescript
// Agent creation
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [chatContext, taskContext],
// ... other config
});
// Boot sequence
await agent.start();
```
**Boot steps:**
1. **Container Setup** - Dependency injection container initialized
2. **Memory System** - KV store, vector store, working memory initialized
3. **TaskRunner** - Task queues and concurrency management setup
4. **Context Registry** - All provided contexts registered with type validation
5. **Extension Loading** - Input/output handlers and services activated
6. **Agent Context** - If agent has its own context, it's created and prepared
## The Basic Flow
Here's what happens when your agent receives a Discord message:
```
Discord Message Arrives
↓
Agent loads chat context & memory
↓
Agent thinks: "What should I do?"
↓
Agent decides: "I'll check the weather and respond"
↓
Agent calls weather API (action)
↓
Agent sends Discord reply (output)
↓
Agent saves conversation to memory
```
***
## Detailed Technical Explanation
The core of the Daydreams framework is the agent's execution lifecycle. This
loop manages how an agent receives input, reasons with an LLM, performs actions,
and handles results. Understanding this flow is crucial for building and
debugging agents.
Let's trace the lifecycle of a typical request:
## 1. Input Reception
* **Source:** An external system (like Discord, Telegram, CLI, or an API) sends
information to the agent. This is usually configured via an `extension`.
* **Listener:** An `input` definition within the agent or an extension listens
for these events (e.g., a new message arrives).
* **Trigger:** When the external event occurs, the input listener is triggered.
* **Invocation:** The listener typically calls `agent.send(...)`, providing:
* The target `context` definition (which part of the agent should handle
this?).
* `args` to identify the specific context instance (e.g., which chat
session?).
* The input `data` itself (e.g., the message content).
## 2. `agent.send` - Starting the Process
* **Log Input:** The framework logs the incoming information as an `InputRef` (a
record of the input).
* **Initiate Run:** It then calls the internal `agent.run` method to start or
continue the processing cycle for the specified context instance, passing the
new `InputRef` along.
## 3. `agent.run` - Managing the Execution Cycle
* **Context Queue Management:** Uses TaskRunner to ensure only one execution per context instance at a time. Multiple inputs to the same context get queued.
* **Load/Create Context:** The framework finds the specific `ContextState` for the target instance (e.g., the state for chat session #123). If it's the first time, it creates the state and persistent memory (`ContextState.memory`).
* **Working Memory Setup:** Retrieves or creates temporary `WorkingMemory` for this execution containing arrays for inputs, outputs, actions, results, thoughts, events, steps, and runs.
* **Context Preparation:** If the context uses `.use()` composition, all composer functions are executed to determine which additional contexts to prepare. These are prepared in parallel.
* **Engine Initialization:** Creates an Engine instance that will orchestrate the step-by-step execution with its router system for handling inputs, outputs, and actions.
* **Start Step Loop:** Begins the main reasoning loop that continues until the LLM stops generating actions or hits step limits.
## 4. Inside the Step Loop - Perception, Reasoning, Action
Each iteration (step) within the `agent.run` loop represents one turn of the
agent's core reasoning cycle:
* **Prepare State:** The agent gathers the latest information, including:
* The current persistent state of the active `Context`(s) (via their `render`
functions).
* The history of the current interaction from `WorkingMemory` (processed
inputs, outputs, action results from previous steps).
  * Any *new* unprocessed information (like the initial `InputRef` or results
    from actions completed in the previous step).
  * The list of currently available `actions` and `outputs`.
* **Generate Prompt:** This information is formatted into a structured prompt
(using XML) for the LLM. The prompt clearly tells the LLM its instructions,
what tools (actions/outputs) it has, the current state, and what new
information needs attention. (See [Prompting](/docs/core/concepts/prompting)).
* **LLM Call:** The agent sends the complete prompt to the configured LLM.
* **Stream XML Parsing:** As the LLM generates its response token by token:
* The framework uses `xmlStreamParser` to process chunks in real-time
* Detects and extracts elements: `<reasoning>`, `<action_call>`, `<output>`
* Each element becomes a typed reference (ThoughtRef, ActionCall, OutputRef) immediately pushed to WorkingMemory
* Incomplete elements are held until closing tags are found
* **Engine Router Processing:** The Engine routes each parsed element:
* **ThoughtRef** → Added to `workingMemory.thoughts` array
* **ActionCall** → Validated, template-resolved, queued via TaskRunner
* **OutputRef** → Validated and processed through output handlers
* **Action Execution:** For each `<action_call>`:
* Arguments parsed and validated against Zod schema
* Template resolution for `{{calls[0].data}}` style references
* TaskRunner queues action with proper concurrency control
* Results returned as ActionResult and added to `workingMemory.results`
* **Step Completion:** Engine checks if more steps needed:
* Continues if LLM generated actions that might produce new information
* Stops if no unprocessed elements remain or step limit reached
## 5. Run Completion
* **Exit Loop:** Once the loop condition determines no further steps are needed,
the loop exits.
* **Final Tasks:** Any final cleanup logic or `onRun` hooks defined in the
context are executed.
* **Save State:** The final persistent state (`ContextState.memory`) of all
involved contexts is saved to the `MemoryStore`.
* **Return Results:** The framework resolves the promise originally returned by
`agent.send` or `agent.run`, providing the complete log (`chain`) of the
interaction.
## Practical Example: Complete Lifecycle Trace
Here's a real example showing the complete lifecycle when a user asks for weather:
```typescript
// User message triggers input
await agent.send({
context: chatContext,
args: { userId: "alice" },
input: { type: "text", data: "What's the weather in NYC?" }
});
```
**Execution trace:**
```
1. INPUT RECEPTION
InputRef created: { type: "text", data: "What's the weather in NYC?" }
2. CONTEXT PREPARATION
ContextState loaded/created: chat:alice
Working memory retrieved with conversation history
Composed contexts prepared (if any .use() definitions)
3. ENGINE STEP 1
Prompt generated with:
- Context render: "Chat with alice, 5 previous messages..."
- Available actions: [getWeather, ...]
- New input: "What's the weather in NYC?"
LLM Response (streamed):
     <reasoning>User wants weather for NYC. I should use getWeather action.</reasoning>
     <action_call name="getWeather">{"location": "New York City"}</action_call>
Parsed elements:
- ThoughtRef → workingMemory.thoughts[]
- ActionCall → validated, queued via TaskRunner
Action execution:
- getWeather handler called with {"location": "New York City"}
- Returns: {temperature: 72, condition: "sunny"}
- ActionResult → workingMemory.results[]
4. ENGINE STEP 2
Prompt includes action result from Step 1
LLM Response:
     <reasoning>Got weather data, now I'll respond to the user.</reasoning>
     <output type="text">The weather in NYC is 72°F and sunny! Perfect day to go outside.</output>
Parsed elements:
- ThoughtRef → workingMemory.thoughts[]
- OutputRef → processed through text output handler
5. RUN COMPLETION
- No more unprocessed elements
- Context onRun hooks executed
- Context memory saved: chat:alice state updated
- Working memory persisted
- Chain of all logs returned: [InputRef, ThoughtRef, ActionCall, ActionResult, ThoughtRef, OutputRef]
```
This shows how the agent cycles through reasoning steps, executing actions and generating outputs until the interaction is complete.
## Error Handling in Lifecycle
When errors occur during the lifecycle:
* **Action Failures:** ActionResult contains error information, LLM can see failure and retry or handle gracefully
* **LLM Errors:** Automatic retry with exponential backoff
* **Context Errors:** onError hooks called, execution can continue or abort
* **Stream Parsing Errors:** Invalid XML is logged but doesn't crash execution
* **Memory Errors:** Fallback to in-memory storage, logging for debugging
This detailed cycle illustrates how Daydreams agents iteratively perceive (inputs, results), reason (LLM prompt/response), and act (outputs, actions), using streaming and asynchronous task management to handle complex interactions efficiently.
file: ./content/docs/core/concepts/building-block-operations.mdx
meta: {
"title": "Building Block Operations",
"description": "Common patterns for working with actions, inputs, and outputs in Daydreams agents."
}
## Working with Building Blocks
This guide covers the common patterns and best practices that apply to all building blocks - actions, inputs, and outputs. For specific details on each building block type, see [Actions](/docs/core/concepts/actions), [Inputs](/docs/core/concepts/inputs), and [Outputs](/docs/core/concepts/outputs).
## The Complete Agent Flow
Here's how actions, inputs, and outputs work together in a real agent:
```typescript title="complete-agent-flow.ts"
import { createDreams, context, action, input, output } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
import * as z from "zod";
// 1. INPUT: Listen for Discord messages
const discordInput = input({
type: "discord:message",
schema: z.object({
content: z.string(),
userId: z.string(),
channelId: z.string(),
}),
subscribe: (send, agent) => {
discord.on("messageCreate", (message) => {
send(
chatContext,
{ userId: message.author.id },
{
content: message.content,
userId: message.author.id,
channelId: message.channel.id,
}
);
});
return () => discord.removeAllListeners("messageCreate");
},
});
// 2. ACTION: Get weather data
const getWeather = action({
name: "get-weather",
description: "Gets current weather for a city",
schema: z.object({
city: z.string().describe("City name"),
}),
handler: async ({ city }) => {
const response = await fetch(`https://api.weather.com/${city}`);
const data = await response.json();
return {
temperature: data.temp,
condition: data.condition,
city
};
},
});
// 3. OUTPUT: Send Discord response
const discordOutput = output({
type: "discord:message",
description: "Sends a message to Discord",
schema: z.string(),
attributes: z.object({
channelId: z.string(),
}),
handler: async (message, ctx) => {
const { channelId } = ctx.outputRef.params;
await discord.send(channelId, message);
return { sent: true };
},
});
// 4. CONTEXT: Tie everything together
const chatContext = context({
type: "chat",
schema: z.object({ userId: z.string() }),
create: () => ({ messages: [] }),
});
// 5. AGENT: Complete system
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [chatContext],
inputs: [discordInput],
outputs: [discordOutput],
actions: [getWeather],
});
// Now when user types: "What's the weather in Boston?"
// 1. INPUT detects Discord message → triggers agent
// 2. Agent processes message and calls ACTION to get weather
// 3. Agent uses OUTPUT to send weather info back to Discord
// Complete conversation loop! 🎉
```
## Schema Validation Patterns
All building blocks use [Zod](https://zod.dev) schemas for validation. Here are the essential patterns:
### Basic Schema Patterns
```typescript title="basic-schemas.ts"
// Simple types
schema: z.string(), // Any string
schema: z.number(), // Any number
schema: z.boolean(), // true/false
// Objects with validation
schema: z.object({
email: z.string().email(), // Valid email format
age: z.number().min(0).max(150), // Number between 0-150
name: z.string().min(1).max(100), // String 1-100 chars
}),
// Arrays and optionals
schema: z.array(z.string()), // Array of strings
schema: z.string().optional(), // Optional string
schema: z.string().default("hello"), // String with default value
// Enums for controlled values
schema: z.enum(["small", "medium", "large"]),
```
### Advanced Validation
```typescript title="advanced-schemas.ts"
// Descriptions help LLMs understand what to provide
schema: z.object({
city: z.string().describe("Name of the city to check weather for"),
units: z.enum(["celsius", "fahrenheit"]).describe("Temperature units"),
includeForecast: z.boolean().optional().default(false)
.describe("Whether to include 3-day forecast"),
}),
// Transformations and refinements
schema: z.string().transform(s => s.toLowerCase()),
schema: z.number().refine(n => n > 0, "Must be positive"),
// Conditional schemas
schema: z.discriminatedUnion("type", [
z.object({ type: z.literal("email"), address: z.string().email() }),
z.object({ type: z.literal("phone"), number: z.string() }),
]),
```
### Schema Best Practices
```typescript title="schema-best-practices.ts"
// ✅ Good - specific constraints and descriptions
const userSchema = z.object({
email: z.string().email().describe("User's email address"),
age: z.number().min(13).max(120).describe("Age in years"),
preferences: z.array(z.string()).max(10).describe("Up to 10 preferences"),
});
// ✅ Good - sensible defaults
const configSchema = z.object({
timeout: z.number().min(1000).default(5000).describe("Timeout in milliseconds"),
retries: z.number().min(0).max(5).default(3).describe("Number of retries"),
});
// ❌ Bad - too loose, no validation
const badSchema = z.object({
data: z.any(), // Could be anything!
stuff: z.string(), // No constraints or description
});
```
## Context Memory Access
All building blocks can access and modify context memory. Here are the common patterns:
### Reading and Writing Memory
```typescript title="memory-patterns.ts"
// Memory interface for type safety
interface ChatMemory {
messages: Array<{ role: string; content: string; timestamp: number }>;
userPreferences: Record<string, string>;
stats: { totalMessages: number; lastActive: number };
}
const chatAction = action({
name: "save-message",
schema: z.object({ message: z.string() }),
handler: async ({ message }, ctx) => {
// Access typed memory
const memory = ctx.memory as ChatMemory;
// Initialize if needed
if (!memory.messages) {
memory.messages = [];
}
if (!memory.stats) {
memory.stats = { totalMessages: 0, lastActive: Date.now() };
}
// Update memory
memory.messages.push({
role: "user",
content: message,
timestamp: Date.now(),
});
memory.stats.totalMessages++;
memory.stats.lastActive = Date.now();
// Changes persist automatically when handler completes
return {
success: true,
totalMessages: memory.stats.totalMessages,
};
},
});
```
### Memory Best Practices
```typescript title="memory-best-practices.ts"
// ✅ Good - safe memory access
handler: async (data, ctx) => {
// Always check and initialize
if (!ctx.memory.items) {
ctx.memory.items = [];
}
// Update with validation
if (data.item && typeof data.item === 'string') {
ctx.memory.items.push({
id: crypto.randomUUID(),
content: data.item,
createdAt: Date.now(),
});
}
return { success: true, count: ctx.memory.items.length };
},
// ❌ Bad - unsafe memory access
handler: async (data, ctx) => {
ctx.memory.items.push(data.item); // Could crash if items is undefined
return { success: true };
},
```
## Error Handling Patterns
Consistent error handling across all building blocks:
### Structured Error Responses
```typescript title="error-handling.ts"
// ✅ Good error handling pattern
handler: async ({ userId }, ctx) => {
try {
const user = await database.getUser(userId);
if (!user) {
return {
success: false,
error: "USER_NOT_FOUND",
message: `No user found with ID: ${userId}`,
};
}
return {
success: true,
data: user,
message: "User retrieved successfully",
};
} catch (error) {
// Log technical details for debugging
ctx.agent.logger.error("database-error", error.message, {
userId,
stack: error.stack,
});
// Return user-friendly error
return {
success: false,
error: "DATABASE_ERROR",
message: "Unable to retrieve user information at this time",
retryable: true,
};
}
},
```
### Error Categories
```typescript title="error-categories.ts"
// Define consistent error types
const ErrorTypes = {
VALIDATION_ERROR: "VALIDATION_ERROR",
NOT_FOUND: "NOT_FOUND",
PERMISSION_DENIED: "PERMISSION_DENIED",
RATE_LIMITED: "RATE_LIMITED",
EXTERNAL_SERVICE_ERROR: "EXTERNAL_SERVICE_ERROR",
INTERNAL_ERROR: "INTERNAL_ERROR",
} as const;
// Use in handlers
handler: async ({ apiKey, query }, ctx) => {
if (!apiKey) {
return {
success: false,
error: ErrorTypes.VALIDATION_ERROR,
message: "API key is required",
};
}
try {
const response = await externalAPI.search(query, { apiKey });
if (response.status === 401) {
return {
success: false,
error: ErrorTypes.PERMISSION_DENIED,
message: "Invalid API key",
};
}
if (response.status === 429) {
return {
success: false,
error: ErrorTypes.RATE_LIMITED,
message: "API rate limit exceeded, please try again later",
retryAfter: 60,
};
}
return {
success: true,
data: response.data,
};
} catch (error) {
return {
success: false,
error: ErrorTypes.EXTERNAL_SERVICE_ERROR,
message: "External service is temporarily unavailable",
retryable: true,
};
}
},
```
## Async Operations and External Services
Best practices for handling asynchronous operations:
### Proper Async Patterns
```typescript title="async-patterns.ts"
// ✅ Good - proper async/await usage
handler: async ({ url, options }, ctx) => {
// Check for cancellation during long operations
if (ctx.abortSignal?.aborted) {
throw new Error("Operation cancelled");
}
// Note: fetch has no `timeout` option; AbortSignal.timeout sets a deadline
// (to combine it with ctx.abortSignal, use AbortSignal.any on Node 20+)
const response = await fetch(url, {
signal: ctx.abortSignal ?? AbortSignal.timeout(10_000),
...options,
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const data = await response.json();
return {
success: true,
data,
statusCode: response.status,
};
},
// ❌ Bad - fire-and-forget, no error handling
handler: ({ url }) => {
fetch(url); // Promise ignored!
return { status: "started" }; // Completes before fetch
},
```
### Timeout and Cancellation
```typescript title="timeout-cancellation.ts"
handler: async ({ items }, ctx) => {
const results = [];
for (let i = 0; i < items.length; i++) {
// Check for cancellation before each operation
if (ctx.abortSignal?.aborted) {
return {
success: false,
error: "CANCELLED",
message: "Operation was cancelled",
processedCount: i,
};
}
try {
// Process with timeout
const result = await Promise.race([
processItem(items[i]),
new Promise((_, reject) =>
setTimeout(() => reject(new Error("Timeout")), 5000)
),
]);
results.push(result);
} catch (error) {
// Log and continue with other items
ctx.agent.logger.warn("item-processing-failed", {
itemIndex: i,
error: error.message,
});
}
}
return {
success: true,
processedCount: results.length,
totalCount: items.length,
results,
};
},
```
### External Service Integration
```typescript title="external-service-integration.ts"
// Weather API integration with full error handling
const weatherAction = action({
name: "get-weather",
description: "Gets current weather for a location",
schema: z.object({
location: z.string().describe("City or address"),
units: z.enum(["metric", "imperial"]).optional().default("metric"),
}),
handler: async ({ location, units }, ctx) => {
const apiKey = process.env.WEATHER_API_KEY;
if (!apiKey) {
return {
success: false,
error: "CONFIGURATION_ERROR",
message: "Weather service not configured",
};
}
try {
const response = await fetch(
`https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(location)}&appid=${apiKey}&units=${units}`,
{
// fetch has no `timeout` option; the abort signal handles cancellation
signal: ctx.abortSignal,
}
);
if (response.status === 404) {
return {
success: false,
error: "LOCATION_NOT_FOUND",
message: `Could not find weather data for "${location}"`,
};
}
if (response.status === 401) {
return {
success: false,
error: "API_KEY_INVALID",
message: "Weather service authentication failed",
};
}
if (!response.ok) {
throw new Error(`Weather API error: ${response.status}`);
}
const data = await response.json();
// Update memory with request history
if (!ctx.memory.weatherRequests) {
ctx.memory.weatherRequests = [];
}
ctx.memory.weatherRequests.push({
location,
timestamp: Date.now(),
temperature: data.main.temp,
});
return {
success: true,
location: data.name,
temperature: Math.round(data.main.temp),
condition: data.weather[0].description,
humidity: data.main.humidity,
units: units,
message: `Current weather in ${data.name}: ${Math.round(data.main.temp)}° ${units === 'metric' ? 'C' : 'F'}, ${data.weather[0].description}`,
};
} catch (error) {
if (error.name === 'AbortError') {
return {
success: false,
error: "CANCELLED",
message: "Weather request was cancelled",
};
}
ctx.agent.logger.error("weather-api-error", error.message, {
location,
stack: error.stack,
});
return {
success: false,
error: "WEATHER_SERVICE_ERROR",
message: "Unable to get weather information right now",
retryable: true,
};
}
},
});
```
## Building Blocks Working Together
The real power comes from combining all three building blocks:
### Complete Workflow Example
```typescript title="complete-workflow.ts"
// E-commerce order processing workflow
// 1. INPUT: Webhook from payment processor
const paymentWebhook = input({
type: "payment:webhook",
schema: z.object({
orderId: z.string(),
status: z.enum(["paid", "failed", "refunded"]),
amount: z.number(),
customerId: z.string(),
}),
subscribe: (send, agent) => {
paymentProcessor.on("webhook", (data) => {
send(
orderContext,
{ orderId: data.orderId },
{
orderId: data.orderId,
status: data.status,
amount: data.amount,
customerId: data.customerId,
}
);
});
return () => paymentProcessor.removeAllListeners("webhook");
},
});
// 2. ACTION: Process order based on payment status
const processOrder = action({
name: "process-order",
description: "Processes order after payment",
schema: z.object({
orderId: z.string(),
status: z.string(),
}),
handler: async ({ orderId, status }, ctx) => {
// Update order memory
if (!ctx.memory.orders) {
ctx.memory.orders = {};
}
ctx.memory.orders[orderId] = {
status,
processedAt: Date.now(),
};
if (status === "paid") {
// Fulfill order
await fulfillmentService.createShipment(orderId);
return {
success: true,
action: "shipped",
message: `Order ${orderId} has been shipped`,
};
} else if (status === "failed") {
return {
success: true,
action: "cancelled",
message: `Order ${orderId} has been cancelled due to payment failure`,
};
}
return { success: true, action: "unknown", message: "Order status updated" };
},
});
// 3. OUTPUT: Send notifications to customer and admin
const emailNotification = output({
type: "email:notification",
description: "Sends order status email",
schema: z.string(),
attributes: z.object({
to: z.string(),
subject: z.string(),
type: z.enum(["customer", "admin"]),
}),
handler: async (body, ctx) => {
const { to, subject, type } = ctx.outputRef.params;
await emailService.send({
to,
subject,
body,
template: type === "customer" ? "customer-order" : "admin-notification",
});
// Track notifications in memory
if (!ctx.memory.notifications) {
ctx.memory.notifications = [];
}
ctx.memory.notifications.push({
to,
subject,
type,
sentAt: Date.now(),
});
return {
sent: true,
recipient: to,
type,
};
},
});
// Context ties it all together
const orderContext = context({
type: "order",
schema: z.object({ orderId: z.string() }),
create: () => ({
orders: {},
notifications: [],
createdAt: Date.now(),
}),
}).setActions([processOrder]);
// Complete agent
const ecommerceAgent = createDreams({
model: openai("gpt-4o"),
contexts: [orderContext],
inputs: [paymentWebhook],
outputs: [emailNotification],
instructions: `You process e-commerce orders. When you receive a payment webhook:
1. Use processOrder action to update order status
2. Send customer notification email
3. Send admin notification email
Be clear and professional in all communications.`,
});
// Now the complete flow works automatically:
// Payment webhook → Process order → Send notifications
```
## Key Takeaways
* **Building blocks work together** - Design them as a unified system, not isolated pieces
* **Consistent error handling** - Use structured responses across all handlers
* **Schema validation** - Always validate inputs with descriptive Zod schemas
* **Memory safety** - Check and initialize memory before accessing
* **Async best practices** - Use proper async/await, handle cancellation and timeouts
* **External service integration** - Handle failures gracefully with retries and fallbacks
* **Logging and observability** - Log errors and key events for debugging
The power of Daydreams comes from how seamlessly these building blocks integrate to create sophisticated, stateful agent behaviors.
file: ./content/docs/core/concepts/building-blocks.mdx
meta: {
"title": "Building Blocks",
"description": "The core components that make up a Daydreams agent."
}
Every Daydreams agent is built from four main building blocks that work together to create intelligent, stateful behavior. These components handle how agents receive information, maintain state, perform actions, and respond to users.
## The Four Building Blocks
### 1. Inputs - How Your Agent Listens
Inputs define how your agent receives and processes information from external sources. They create structured `InputRef` objects that flow through the agent's processing pipeline.
```typescript title="input-example.ts"
import { input } from "@daydreamsai/core";
import * as z from "zod";
// Text input with validation
const textInput = input({
description: "Processes user text messages",
schema: z.string(),
handler: async (data) => {
// Optional processing logic
return { processed: true, length: data.length };
}
});
```
**Examples:**
* Discord/Telegram messages
* CLI user input
* HTTP API requests
* File system events
* Timer/scheduled triggers
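For example, a scheduled trigger can be wired through `subscribe`, which receives a `send` function and returns a cleanup callback. Below is a minimal sketch; `heartbeatContext` and its args are hypothetical placeholders, not part of the core API:
```typescript title="timer-input-sketch.ts"
import { input } from "@daydreamsai/core";
import * as z from "zod";

// Sketch: a timer input that nudges the agent once per minute
const timerInput = input({
  type: "timer:tick",
  schema: z.object({ firedAt: z.number() }),
  subscribe: (send) => {
    const interval = setInterval(() => {
      // Route each tick into a context instance
      // (heartbeatContext is a hypothetical context defined elsewhere)
      send(heartbeatContext, { taskId: "heartbeat" }, { firedAt: Date.now() });
    }, 60_000);
    // Return a cleanup function so the agent can stop the subscription
    return () => clearInterval(interval);
  },
});
```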
### 2. Outputs - How Your Agent Speaks
Outputs define how your agent sends information to external systems. The LLM uses outputs by generating `<output>` XML tags that get processed by output handlers.
```typescript title="output-example.ts"
import { output } from "@daydreamsai/core";
import * as z from "zod";
// Text output with validation
const textOutput = output({
description: "Sends text responses to users",
schema: z.string(),
handler: async (content) => {
console.log(`Agent says: ${content}`);
return { sent: true, timestamp: Date.now() };
}
});
```
**Examples:**
* Chat platform messages (Discord, Slack)
* Email notifications
* HTTP API responses
* File system writes
* Database updates
### 3. Actions - What Your Agent Can Do
Actions are capabilities that give your agent superpowers. The LLM can call actions using `<action_call>` XML tags, and results flow back into the conversation context.
```typescript title="action-example.ts"
import { action } from "@daydreamsai/core";
import * as z from "zod";
// Weather action with full context access
const getWeather = action({
name: "get-weather",
description: "Gets current weather for a location",
schema: z.object({
location: z.string(),
}),
handler: async ({ location }, ctx) => {
// Access context memory, agent, working memory
const weather = await weatherAPI.get(location);
ctx.memory.lastWeatherCheck = Date.now();
return { temperature: weather.temp, condition: weather.condition };
},
});
```
**Examples:**
* External API calls (weather, search, databases)
* Memory operations (save user preferences, retrieve history)
* File system operations (read, write, process files)
* Cross-context communication (sync user data)
### 4. Contexts - Your Agent's Workspace
Contexts are isolated, stateful workspaces that maintain separate memory for different conversations or tasks. They enable agents to handle multiple simultaneous interactions without mixing data.
```typescript title="context-example.ts"
import { context } from "@daydreamsai/core";
import * as z from "zod";
// Chat context with lifecycle hooks and composition
const chatContext = context({
type: "chat",
schema: z.object({ userId: z.string() }),
// Initialize memory
create: () => ({ messages: [], preferences: {} }),
// Context-specific actions via .setActions()
// Context composition via .use()
// Custom instructions, render, lifecycle hooks
}).setActions([/* context-specific actions */]);
```
**Examples:**
* User conversations (each user gets isolated memory)
* Game sessions (each game maintains separate state)
* Project workspaces (documents, tasks, team members)
* Multi-step workflows (onboarding, checkout processes)
## How They Work Together
Here's the complete flow showing how building blocks interact:
```typescript title="complete-flow-example.ts"
// 1. Input creates InputRef and triggers agent.send()
await agent.send({
context: chatContext,
args: { userId: "alice" },
input: { type: "text", data: "What's the weather in NYC?" }
});
```
**Execution Flow:**
1. **Input processing** → InputRef created and added to working memory
2. **Context preparation** → `chat:alice` context loaded with memory/history
3. **LLM reasoning** → Generates structured XML response with context awareness
4. **Action execution** → `<action_call name="get-weather">` parsed and executed (see the sketch after this list)
5. **Action results** → Weather data returned and added to working memory
6. **Output generation** → `<output>` sends response to user
7. **Memory persistence** → All changes saved, context state updated
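Illustratively, the LLM's structured reply in steps 3-6 might look like the sketch below (tag bodies and the output name are placeholders following the default XML contract):
```xml title="llm-response-sketch.xml"
<response>
  <reasoning>User wants NYC weather - call the weather action first.</reasoning>
  <action_call name="get-weather">{"location": "NYC"}</action_call>
  <output name="text">It's currently sunny in NYC!</output>
</response>
```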
## Advanced Building Block Concepts
### Context Composition
Contexts can include other contexts for modular functionality:
```typescript
const composedContext = context({ type: "main" })
.use((state) => [
{ context: analyticsContext, args: { userId: state.args.userId } },
{ context: preferencesContext, args: { userId: state.args.userId } }
]);
// Now has access to analytics and preferences actions/memory
```
### Action Scoping
Actions can be global (available everywhere) or context-specific:
```typescript
// Global actions - available in all contexts
const agent = createDreams({ actions: [globalTimeAction] });
// Context actions - only available in specific contexts
const chatContext = context({}).setActions([chatSpecificAction]);
```
### Memory System Integration
All building blocks interact with the dual-memory system:
* **Working Memory** - Temporary execution logs (inputs, outputs, actions, results)
* **Persistent Memory** - Long-term context memory + vector/KV storage
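As a rough sketch of the distinction inside an action handler (only the `ctx.memory` pattern shown throughout these docs is assumed; working-memory entries are appended by the framework, not by your code):
```typescript title="dual-memory-sketch.ts"
handler: async ({ note }, ctx) => {
  // Persistent context memory - survives across runs and restarts
  if (!ctx.memory.notes) {
    ctx.memory.notes = [];
  }
  ctx.memory.notes.push({ note, at: Date.now() });
  // Working memory is the per-run execution log: this action call and its
  // result are recorded there automatically by the agent
  return { saved: true, total: ctx.memory.notes.length };
},
```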
## Architecture Mental Model
If you know React, think of it this way:
* **Contexts** = React components (isolated state, lifecycle hooks, composition)
* **Actions** = Event handlers (capabilities with full context access)
* **Inputs/Outputs** = Props/callbacks (typed data flow in/out)
* **Agent** = React app (orchestrates everything with an execution engine)
* **Working Memory** = Component state during render
* **Context Memory** = Component state that persists between renders
## Common Patterns Across Building Blocks
For detailed patterns on schema validation, error handling, memory access, and external service integration that apply to all building blocks, see [Building Block Operations](/docs/core/concepts/building-block-operations).
## Next Steps
Now that you understand the building blocks, you can dive deeper into each one:
* **[Contexts](/docs/core/concepts/contexts)** - Learn how to manage state and memory
* **[Actions](/docs/core/concepts/actions)** - Define what your agent can do
* **[Inputs](/docs/core/concepts/inputs)** - Set up ways for your agent to receive information
* **[Outputs](/docs/core/concepts/outputs)** - Configure how your agent responds
* **[Building Block Operations](/docs/core/concepts/building-block-operations)** - Common patterns for all building blocks
* **[Agent Lifecycle](/docs/core/concepts/agent-lifecycle)** - Understand the complete execution flow
file: ./content/docs/core/concepts/composing-contexts.mdx
meta: {
"title": "Advanced Context Composition",
"description": "Master complex composition patterns for enterprise-grade agent systems."
}
## Advanced Context Composition
This guide covers sophisticated composition patterns for building complex, production-ready agent systems. For basic composition concepts, see [Contexts](/docs/core/concepts/contexts#composed-contexts---the-power-pattern).
## When You Need Advanced Composition
Advanced composition patterns are essential when you're building:
* **Enterprise applications** with complex business logic
* **Multi-tenant systems** with user-specific features
* **Workflow engines** that adapt based on state
* **Platform integrations** with conditional functionality
## Key Advanced Concepts
* **Composer Functions**: Complex logic for determining context inclusion
* **State-Driven Composition**: Contexts change based on runtime conditions
* **Error-Resilient Patterns**: Graceful handling of context failures
* **Performance Optimization**: Lazy loading and conditional inclusion
## Advanced Composition Patterns
### 1. State-Driven Dynamic Composition
Contexts that change based on complex runtime state:
```typescript title="state-driven-composition.ts"
import { context, action } from "@daydreamsai/core";
import * as z from "zod";
const workflowContext = context({
type: "workflow",
schema: z.object({
workflowId: z.string(),
currentStage: z.string(),
permissions: z.array(z.string()),
}),
create: () => ({
completedSteps: [],
nextSteps: [],
approvalRequired: false,
}),
})
.use((state) => {
const contexts = [];
// Always include audit logging
contexts.push({
context: auditContext,
args: { workflowId: state.args.workflowId }
});
// Stage-specific contexts
switch (state.args.currentStage) {
case "draft":
contexts.push({ context: editingContext, args: {} });
break;
case "review":
contexts.push({ context: reviewContext, args: {} });
if (state.args.permissions.includes("approve")) {
contexts.push({ context: approvalContext, args: {} });
}
break;
case "approved":
contexts.push({ context: executionContext, args: {} });
if (state.memory.approvalRequired) {
contexts.push({ context: notificationContext, args: {} });
}
break;
}
return contexts;
});
```
### 2. Multi-Tenant Feature Composition
Dynamically enable features based on tenant configuration:
```typescript title="multi-tenant-composition.ts"
interface TenantConfig {
features: string[];
limits: Record<string, number>;
integrations: string[];
}
const tenantContext = context({
type: "tenant",
schema: z.object({
tenantId: z.string(),
userId: z.string(),
}),
create: () => ({ sessions: 0, usage: {} }),
})
.use(async (state) => {
// Fetch tenant configuration
const config: TenantConfig = await getTenantConfig(state.args.tenantId);
const contexts = [
// Core functionality for all tenants
{ context: coreContext, args: { userId: state.args.userId } },
];
// Feature-based context inclusion
if (config.features.includes("analytics")) {
contexts.push({
context: analyticsContext,
args: { tenantId: state.args.tenantId }
});
}
if (config.features.includes("ai-assistant")) {
contexts.push({ context: aiContext, args: {} });
}
if (config.features.includes("advanced-reporting")) {
contexts.push({ context: reportingContext, args: {} });
}
// Integration-based contexts
for (const integration of config.integrations) {
if (integration === "salesforce") {
contexts.push({ context: salesforceContext, args: {} });
} else if (integration === "slack") {
contexts.push({ context: slackContext, args: {} });
}
}
return contexts;
});
```
### 3. Layered Architecture with Chaining
Build complex systems with multiple composition layers:
```typescript title="layered-composition.ts"
const enterpriseAppContext = context({
type: "enterprise-app",
schema: z.object({
userId: z.string(),
orgId: z.string(),
role: z.string(),
}),
})
// Layer 1: Core foundation
.use((state) => [
{ context: authContext, args: { userId: state.args.userId } },
{ context: auditContext, args: { orgId: state.args.orgId } },
])
// Layer 2: Role-based features
.use((state) => {
const contexts = [];
// Role-based context inclusion
switch (state.args.role) {
case "admin":
contexts.push({ context: adminContext, args: {} });
contexts.push({ context: settingsContext, args: {} });
break;
case "manager":
contexts.push({ context: teamContext, args: {} });
contexts.push({ context: reportingContext, args: {} });
break;
case "user":
contexts.push({ context: userWorkspaceContext, args: {} });
break;
}
return contexts;
})
// Layer 3: Organization-specific features
.use(async (state) => {
const orgConfig = await getOrgConfig(state.args.orgId);
const contexts = [];
if (orgConfig.features.customWorkflows) {
contexts.push({ context: workflowContext, args: {} });
}
if (orgConfig.features.advancedSecurity) {
contexts.push({ context: securityContext, args: {} });
}
return contexts;
});
```
## Real-World Examples
### E-commerce Shopping Assistant
```typescript title="ecommerce-agent.ts"
import { context, action } from "@daydreamsai/core";
import * as z from "zod";
// Product search context
const catalogContext = context({
type: "catalog",
schema: z.object({ storeId: z.string() }),
create: () => ({ recentSearches: [] }),
}).setActions([
action({
name: "searchProducts",
description: "Search for products",
schema: z.object({ query: z.string() }),
handler: async ({ query }, ctx) => {
ctx.memory.recentSearches.push(query);
return { products: await mockProductSearch(query) };
},
}),
]);
// Shopping cart management
const cartContext = context({
type: "cart",
schema: z.object({ sessionId: z.string() }),
create: () => ({ items: [], total: 0 }),
}).setActions([
action({
name: "addToCart",
description: "Add item to shopping cart",
schema: z.object({ productId: z.string(), quantity: z.number() }),
handler: async ({ productId, quantity }, ctx) => {
ctx.memory.items.push({ productId, quantity });
ctx.memory.total += 29.99 * quantity; // Mock price
return { success: true, cartTotal: ctx.memory.total };
},
}),
]);
// User preferences and history
const customerContext = context({
type: "customer",
schema: z.object({ userId: z.string() }),
create: () => ({ tier: "basic", preferences: [], orderHistory: [] }),
});
// Main shopping assistant that combines all contexts
const shoppingAssistant = context({
type: "shopping-assistant",
schema: z.object({
userId: z.string(),
sessionId: z.string(),
storeId: z.string()
}),
create: () => ({ conversationCount: 0 }),
})
.use((state) => [
// Always include catalog and cart
{ context: catalogContext, args: { storeId: state.args.storeId } },
{ context: cartContext, args: { sessionId: state.args.sessionId } },
// Include customer context for logged-in users
state.args.userId !== "anonymous"
? { context: customerContext, args: { userId: state.args.userId } }
: null,
].filter(Boolean))
.render((state) => `
Shopping Assistant for Store ${state.args.storeId}
Session: ${state.args.sessionId}
User: ${state.args.userId}
`)
.instructions(`You are a helpful shopping assistant. You can:
- Search for products using searchProducts
- Add items to cart using addToCart
- Help customers find what they need
Be friendly and make personalized recommendations when possible.`);
// This assistant can now:
// ✅ Search products across the store catalog
// ✅ Manage shopping cart items and totals
// ✅ Access customer preferences (when logged in)
// ✅ All actions are available in one unified context
```
### Smart Meeting Assistant
```typescript title="meeting-assistant.ts"
import { context, action } from "@daydreamsai/core";
import * as z from "zod";
// Meeting transcription context
const transcriptionContext = context({
type: "transcription",
schema: z.object({ meetingId: z.string() }),
create: () => ({ transcript: [], speakers: [] }),
}).setActions([
action({
name: "transcribeAudio",
description: "Convert speech to text",
schema: z.object({ audioUrl: z.string() }),
handler: async ({ audioUrl }, ctx) => {
const text = "Mock transcript of the meeting";
ctx.memory.transcript.push({ text, timestamp: Date.now() });
return { success: true, text };
},
}),
]);
// Action items tracking context
const actionItemsContext = context({
type: "action-items",
schema: z.object({ meetingId: z.string() }),
create: () => ({ items: [], assignments: [] }),
}).setActions([
action({
name: "addActionItem",
description: "Add action item with assignee",
schema: z.object({
task: z.string(),
assignee: z.string(),
dueDate: z.string().optional()
}),
handler: async ({ task, assignee, dueDate }, ctx) => {
ctx.memory.items.push({ task, assignee, dueDate, status: "pending" });
return { added: true, totalItems: ctx.memory.items.length };
},
}),
]);
// Calendar integration context
const calendarContext = context({
type: "calendar",
schema: z.object({ userId: z.string() }),
create: () => ({ upcomingMeetings: [], preferences: {} }),
});
// Smart meeting assistant that adapts to different meeting types
const meetingAssistant = context({
type: "meeting-assistant",
schema: z.object({
meetingId: z.string(),
meetingType: z.enum(["standup", "planning", "review", "general"]),
userId: z.string(),
}),
create: () => ({ startTime: Date.now() }),
})
.use((state) => {
const contexts = [
// Always include transcription for all meetings
{ context: transcriptionContext, args: { meetingId: state.args.meetingId } },
// Include calendar for scheduling follow-ups
{ context: calendarContext, args: { userId: state.args.userId } },
];
// Add action items context for meetings that need follow-up
if (["planning", "review"].includes(state.args.meetingType)) {
contexts.push({
context: actionItemsContext,
args: { meetingId: state.args.meetingId }
});
}
return contexts;
})
.render((state) => `
Meeting Assistant for ${state.args.meetingType} meeting
Meeting ID: ${state.args.meetingId}
Host: ${state.args.userId}
`)
.instructions((state) => {
const baseInstructions = `You are a meeting assistant. You can transcribe audio and help manage meetings.`;
if (["planning", "review"].includes(state.args.meetingType)) {
return baseInstructions + ` Focus on capturing action items and assigning them to team members.`;
}
return baseInstructions + ` Focus on capturing key discussion points.`;
});
// Features available based on meeting type:
// All meetings: ✅ Audio transcription, calendar integration
// Planning/Review: ✅ + Action item tracking and assignment
// Standup/General: ✅ Transcription only (streamlined experience)
```
## Advanced Patterns
### Error Handling in Composition
```typescript title="error-handling.ts"
const robustContext = context({ type: "robust" })
.use((state) => {
const contexts = [];
try {
// Always try to include core functionality
contexts.push({ context: coreContext, args: {} });
// Optional enhanced features
if (state.memory.featureEnabled) {
contexts.push({ context: enhancedContext, args: {} });
}
} catch (error) {
console.warn("Error in context composition:", error);
// Fall back to minimal context
contexts.push({ context: minimalContext, args: {} });
}
return contexts;
});
```
### Dynamic Context Loading
```typescript title="dynamic-loading.ts"
const adaptiveContext = context({
type: "adaptive",
schema: z.object({ features: z.array(z.string()) })
})
.use(async (state) => {
const contexts = [];
// Load different contexts based on requested features
for (const feature of state.args.features) {
switch (feature) {
case "analytics":
contexts.push({ context: analyticsContext, args: {} });
break;
case "ai-assistant":
contexts.push({ context: aiContext, args: {} });
break;
case "notifications":
contexts.push({ context: notificationContext, args: {} });
break;
}
}
return contexts;
});
```
## Best Practices
### 1. Keep Contexts Single-Purpose
Each context should have one clear responsibility:
```typescript title="single-purpose.ts"
// ✅ Good - focused contexts
const authContext = context({
type: "auth",
// Only handles user authentication
});
const profileContext = context({
type: "profile",
// Only manages user profile data
});
const preferencesContext = context({
type: "preferences",
// Only handles user settings
});
// ❌ Bad - mixed concerns
const userEverythingContext = context({
type: "user-everything",
// Handles auth + profile + preferences + notifications + billing...
});
```
### 2. Use Meaningful Context Arguments
Pass the minimum required data to composed contexts:
```typescript title="meaningful-args.ts"
// ✅ Good - clear, minimal arguments
const orderContext = context({ type: "order" })
.use((state) => [
{ context: inventoryContext, args: { storeId: state.args.storeId } },
{ context: paymentContext, args: { customerId: state.args.customerId } },
{ context: shippingContext, args: {
customerId: state.args.customerId,
storeId: state.args.storeId
}},
]);
// ❌ Bad - passing entire state objects
const orderContext = context({ type: "order" })
.use((state) => [
{ context: inventoryContext, args: state }, // Too much data
{ context: paymentContext, args: { ...state.args, ...state.memory } }, // Confusing
]);
```
### 3. Handle Optional Contexts Gracefully
Use conditional composition for optional features:
```typescript title="optional-contexts.ts"
// ✅ Good - graceful optional composition
const appContext = context({ type: "app" })
.use((state) => {
const contexts = [
// Core contexts always included
{ context: coreContext, args: {} },
];
// Optional features based on user tier
if (state.memory.userTier === "premium") {
contexts.push({ context: premiumContext, args: {} });
}
// Optional contexts based on feature flags
if (state.memory.betaFeatures?.includes("ai-assistant")) {
contexts.push({ context: aiContext, args: {} });
}
return contexts;
});
// ❌ Bad - unclear optional logic
const appContext = context({ type: "app" })
.use((state) => [
{ context: coreContext, args: {} },
state.memory.userTier === "premium" ? { context: premiumContext, args: {} } : null,
// What happens with null? Unclear!
]);
```
### 4. Plan for Context Dependencies
Document and manage context relationships:
```typescript title="context-dependencies.ts"
/**
* E-commerce checkout flow
*
* Dependencies:
* - cartContext: Manages items and quantities
* - inventoryContext: Validates item availability
* - paymentContext: Processes transactions
* - shippingContext: Calculates delivery options
*
* This context orchestrates the complete checkout process
*/
const checkoutContext = context({
type: "checkout",
schema: z.object({
sessionId: z.string(),
customerId: z.string(),
}),
})
.use((state) => [
{ context: cartContext, args: { sessionId: state.args.sessionId } },
{ context: inventoryContext, args: {} },
{ context: paymentContext, args: { customerId: state.args.customerId } },
{ context: shippingContext, args: { customerId: state.args.customerId } },
])
.instructions(`You handle checkout by:
1. Validating cart contents with inventory
2. Processing payment
3. Arranging shipping
4. Confirming the order
`);
```
### 5. Use Composition for Feature Flags
Enable/disable functionality through composition:
```typescript title="feature-flags.ts"
const featureFlags = {
aiRecommendations: true,
advancedAnalytics: false,
betaFeatures: true,
};
const dynamicContext = context({ type: "dynamic" })
.use(() => {
const contexts = [
{ context: baseContext, args: {} }, // Always include base
];
if (featureFlags.aiRecommendations) {
contexts.push({ context: aiContext, args: {} });
}
if (featureFlags.advancedAnalytics) {
contexts.push({ context: analyticsContext, args: {} });
}
return contexts;
});
```
## Common Pitfalls
### ❌ Circular Dependencies
```typescript
// Don't create circular references
const contextA = context({ type: "a" }).use(() => [{ context: contextB, args: {} }]);
const contextB = context({ type: "b" }).use(() => [{ context: contextA, args: {} }]); // ❌ Circular!
```
### ❌ Over-Composition
```typescript
// Don't compose too many contexts unnecessarily
const bloatedContext = context({ type: "bloated" })
.use(() => [
{ context: context1, args: {} },
{ context: context2, args: {} },
{ context: context3, args: {} },
// ... 20 more contexts - probably too many!
]);
```
### ❌ Forgetting to Filter Nulls
```typescript
// Remember to filter out null/undefined contexts
const buggyContext = context({ type: "buggy" })
.use((state) => [
{ context: baseContext, args: {} },
state.condition ? { context: conditionalContext, args: {} } : null, // ❌ Can be null!
]); // Should use .filter(Boolean)
```
## Key Takeaways
* **Composer functions** receive context state and return `{ context, args }` arrays
* **Conditional composition** lets you adapt behavior based on runtime conditions
* **Filter pattern**: use `.filter(Boolean)` to remove null/undefined contexts
* **Keep contexts focused** on single responsibilities for better maintainability
* **Document dependencies** to help other developers understand relationships
* **Handle errors gracefully** with try/catch and fallback contexts
* **Use meaningful arguments**: pass only what each context actually needs
Context composition enables sophisticated agent behaviors while maintaining clean, modular code. Start with simple contexts and compose them to create powerful, adaptive systems.
file: ./content/docs/core/concepts/context-engineering.mdx
meta: {
"title": "Context Engineering",
"description": "Structure, parsing, and customization of Daydreams prompts and XML responses."
}
## Overview
Context engineering is how Daydreams shapes what the model sees and how it should respond. It covers:
* Prompt structure: the sections we render into one prompt
* XML response contract: what tags the model must output
* Streaming + parsing: how we parse tags as they stream
* Customization hooks: swapping prompt builders, response adapters, and tag handling
Core files:
* `packages/core/src/prompts/main.ts` (prompt sections + formatter)
* `packages/core/src/prompts/default-builder.ts` (default PromptBuilder)
* `packages/core/src/response/default-xml-adapter.ts` (response adapter)
* `packages/core/src/handlers/handle-stream.ts` (streaming XML parser → logs)
* `packages/core/src/parsing/xml.ts` and `.../formatters.ts` (XML utilities)
For a higher-level tour of how prompts are assembled, see [Prompting](/docs/core/concepts/prompting).
## Prompt Structure
The main prompt template contains four sections rendered and stitched together:
* `intro`: short system role
* `instructions`: rules and the XML response contract with examples
* `content`: current situation, tools, and context
* `response`: a final nudge to begin the `<response>` block
Source: `prompts/main.ts`
Sections are assembled by `formatPromptSections(...)`, which converts live state to XML blocks:
* Current situation: `unprocessed-inputs`, `pending-operations`, `recent-action-results`, `context-state`
* Tools: `available-actions`, `available-outputs`
* Knowledge/history: `semantic-context` (relevant memories), `recent-history`, `decision-context` (recent thoughts)
These are composed with the XML helpers (see `parsing/formatters.ts`: `xml`, `formatXml`, and formatters for actions/outputs/logs).
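As a rough illustration (contents elided; the real rendering lives in `prompts/main.ts`), the assembled content might contain blocks like:
```xml title="prompt-content-sketch.xml"
<unprocessed-inputs>...</unprocessed-inputs>
<pending-operations>...</pending-operations>
<recent-action-results>...</recent-action-results>
<context-state>...</context-state>
<available-actions>...</available-actions>
<available-outputs>...</available-outputs>
<semantic-context>...</semantic-context>
<recent-history>...</recent-history>
<decision-context>...</decision-context>
```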
## XML Response Contract
The LLM must reply with a single `<response>...</response>` block containing some of:
* `<reasoning>`: the model's plan/chain of thought (internal; captured as `thought` logs)
* `<action_call name="...">{json}</action_call>`: tool invocation; JSON body must parse
* `<output name="...">{json|text}</output>`: agent output; JSON body for structured outputs
Important details in `main.ts` instructions:
* Exactly one top-level `<response>` block
* Valid JSON bodies for `<action_call>` and `<output>`
* Use the provided examples to match formatting
Template references: you can embed `{{...}}` to reference data (e.g., `{{calls[0].id}}`). See “Template Engine” note in `main.ts`.
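Putting the contract together, a well-formed reply might look like this sketch (the action name and bodies are placeholders):
```xml title="response-contract-sketch.xml"
<response>
  <reasoning>Look up the order first, then confirm to the user.</reasoning>
  <action_call name="get-order">{"orderId": "12345"}</action_call>
  <output name="text">I found your order and it is on its way.</output>
</response>
```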
## Streaming + Parsing
Responses are streamed and parsed incrementally:
* `default-xml-adapter.ts` wraps provider streams to ensure a single `<response>` wrapper and exposes a `handleStream` that delegates to the core handler.
* `handlers/handle-stream.ts` uses `xmlStreamParser(...)` to parse tags as they arrive and converts them into Daydreams logs:
* `<reasoning>` (also `<think>`/`<thinking>`) → `ThoughtRef`
* `<action_call>` → `ActionCall`
* `<output>` → `OutputRef`
Default parsed tags: `think`, `thinking`, `response`, `output`, `action_call`, `reasoning`.
Each tag is tracked with an index and depth; text content updates the current element until the tag closes. As tags finish, corresponding logs are pushed (`pushLog`) and also chunked for streaming UIs.
## Default XML Tags
The response parser recognizes these tags by default and maps them to Daydreams logs:
* `response`: container for the whole reply; not logged, ensures a single top-level block
* `reasoning` (also `think`/`thinking`): captured as `ThoughtRef` with `content`
* `action_call name="..."` + JSON body: becomes an `ActionCall` with `name`, `params` (from attributes), and parsed `data`
* `output name="..."` + body: becomes an `OutputRef` with `name`, `params` (from attributes), and parsed `data`
Notes:
* Attributes other than `name` on `action_call`/`output` are treated as `params` on the log.
* Only these tags are handled by the default adapter/handler; additional tags are ignored unless you extend the handler or adapter.
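For example, an action call carrying an extra attribute (names and values here are illustrative):
```xml title="attribute-params-sketch.xml"
<action_call name="get-weather" priority="high">{"location": "Paris"}</action_call>
```
This parses into an `ActionCall` log with `name: "get-weather"`, `params: { priority: "high" }`, and parsed `data: { location: "Paris" }`.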
## Customization Hooks
You can tailor both prompt generation and response parsing.
1. Replace the Prompt Builder
```ts title="custom-prompt-builder.ts"
import type { PromptBuilder } from "@daydreamsai/core";
import { mainPrompt } from "@daydreamsai/core";
export const myPrompt: PromptBuilder = {
name: "my-main",
build(input) {
// Reuse default formatter but change sizes/ordering
const data = mainPrompt.formatter({
contexts: input.contexts,
outputs: input.outputs,
actions: input.actions,
workingMemory: input.workingMemory,
maxWorkingMemorySize: 6,
chainOfThoughtSize: 2,
});
// Or modify sections here before render
const prompt = mainPrompt.render({
...data,
// e.g., prepend a policy note into decision-context
decisionContext: {
tag: "decision-context",
params: {},
children: ["Follow safety policy X before actions.", data.decisionContext],
},
} as any);
return { prompt };
},
};
// Install on the agent
const agent = createDreams({ prompt: myPrompt });
```
2. Provide Per-Context Rendering and Instructions
Contexts can inject custom text or XML via `instructions` and `render` (see `types.ts`). The default formatter includes each context's rendered output under `<context-state>`.
```ts title="context-render.ts"
const chat = context({
type: "chat",
render: (state) => `Chat:${state.args.userId} messages=${state.memory.messages.length}`,
instructions: "Be concise and friendly for chat interactions.",
});
```
3. Swap the Response Adapter
If your model/provider needs a different wrapper or parsing policy, replace the adapter:
```ts title="custom-response-adapter.ts"
const agent = createDreams({
response: {
prepareStream({ model, stream }) {
// Wrap with custom tags or sanitize text
return { stream: stream.textStream, getTextResponse: () => stream.text };
},
async handleStream({ textStream, index, defaultHandlers }) {
// Delegate to core XML handler or implement your own
await defaultXmlResponseAdapter.handleStream({
textStream,
index,
defaultHandlers,
});
},
},
});
```
4. Add Your Own Prompt Template
You can build entirely custom templates via `createPrompt` and `render` utilities (see `prompts/types.ts`) or compose multiple prompts. Then set `prompt` on the agent as above.
## Practical Tips
* Keep `<action_call>` and `<output>` JSON bodies short and strictly valid; malformed JSON stops actions/outputs.
* Prefer structured outputs (JSON bodies) for machine handling; keep prose in `content` fields.
* Control prompt size with `maxWorkingMemorySize` and `chainOfThoughtSize` in your builder.
* If you add new tags, ensure your adapter/handler recognizes and maps them to logs or ignores them safely.
See also:
* [Prompting](/docs/core/concepts/prompting)
* API: `PromptBuilder` and `ResponseAdapter` in `/docs/api/Agent` and `/docs/api/api-reference`
file: ./content/docs/core/concepts/contexts.mdx
meta: {
"title": "Contexts",
"description": "Managing state, memory, and behavior for agent interactions."
}
## What is a Context?
A context is like a **separate workspace** for your agent. Think of it like
having different tabs open in your browser - each tab has its own state and
remembers different things.
## Context Patterns
Daydreams supports three main patterns for organizing your agent's behavior:
### 1. Single Context - Simple & Focused
Perfect for simple agents with one clear purpose:
```typescript title="single-context.ts"
import { context } from "@daydreamsai/core";
import * as z from "zod";
const chatBot = context({
type: "chat",
schema: z.object({ userId: z.string() }),
create: () => ({ messages: [] }),
instructions: "You are a helpful assistant.",
});
// Simple and focused - handles one thing well
```
### 2. Multiple Contexts - Separate Workspaces
When you need completely separate functionality:
```typescript title="multiple-contexts.ts"
// Chat context for conversations
const chatContext = context({
type: "chat",
schema: z.object({ userId: z.string() }),
create: () => ({ messages: [], preferences: {} }),
});
// Game context for game sessions
const gameContext = context({
type: "game",
schema: z.object({ gameId: z.string() }),
create: () => ({ health: 100, level: 1, inventory: [] }),
});
// Todo context for task management
const todoContext = context({
type: "todo",
schema: z.object({ listId: z.string() }),
create: () => ({ tasks: [] }),
});
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [chatContext, gameContext, todoContext],
});
// Each context is completely isolated with separate memory
```
### 3. 🌟 Composed Contexts - The Power Pattern
**This is where Daydreams shines** - contexts that work together using `.use()`:
```typescript title="composed-contexts.ts"
import { context, action } from "@daydreamsai/core";
import * as z from "zod";
// Analytics tracks user behavior
const analyticsContext = context({
type: "analytics",
schema: z.object({ userId: z.string() }),
create: () => ({ events: [], totalSessions: 0 }),
}).setActions([
action({
name: "trackEvent",
description: "Track user interaction",
schema: z.object({ event: z.string(), data: z.any() }),
handler: async ({ event, data }, ctx) => {
ctx.memory.events.push({ event, data, timestamp: Date.now() });
return { tracked: true };
},
}),
]);
// Profile stores user preferences
const profileContext = context({
type: "profile",
schema: z.object({ userId: z.string() }),
create: () => ({ name: "", tier: "free", preferences: {} }),
});
// Premium features context
const premiumContext = context({
type: "premium",
schema: z.object({ userId: z.string() }),
create: () => ({ advancedFeatures: true }),
}).setActions([
action({
name: "generateAdvancedReport",
description: "Create detailed analytics report",
schema: z.object({ dateRange: z.string() }),
handler: async ({ dateRange }, ctx) => {
return { report: "Advanced analytics for " + dateRange };
},
}),
]);
// Smart chat context that composes all the above
const smartChatContext = context({
type: "chat",
schema: z.object({ userId: z.string() }),
create: () => ({ messages: [] }),
})
.use((state) => [
// Always include analytics for every user
{ context: analyticsContext, args: { userId: state.args.userId } },
// Always include profile
{ context: profileContext, args: { userId: state.args.userId } },
// Include premium features only for premium users
state.memory.userTier === "premium"
? { context: premiumContext, args: { userId: state.args.userId } }
: null,
].filter(Boolean));
// Now your chat context has access to:
// ✅ trackEvent action from analytics
// ✅ Profile data and preferences
// ✅ generateAdvancedReport (for premium users only)
// ✅ Unified behavior across contexts
```
## When to Use Each Pattern
| Pattern | Use When | Examples |
| ------------------------ | -------------------------------------- | ------------------------------------------------ |
| **Single Context** | Simple, focused functionality | FAQ bot, calculator, weather checker |
| **Multiple Contexts** | Separate user workflows | Chat + Games + Todo lists |
| **🌟 Composed Contexts** | Rich experiences, conditional features | E-commerce assistant, CRM agent, enterprise apps |
**Most powerful apps use composed contexts** - they provide the flexibility to:
* Share common functionality (analytics, auth, logging)
* Enable conditional features based on user tier/preferences
* Build modular systems that scale
* Maintain clean separation while enabling cooperation
## Real-World Example: E-commerce Assistant
Here's how context composition enables sophisticated behavior:
```typescript title="ecommerce-assistant.ts"
// Product search functionality
const catalogContext = context({
type: "catalog",
schema: z.object({ storeId: z.string() }),
create: () => ({ recentSearches: [] }),
}).setActions([
action({
name: "searchProducts",
description: "Search for products in the store",
schema: z.object({ query: z.string() }),
handler: async ({ query }, ctx) => {
ctx.memory.recentSearches.push(query);
return { products: await searchProductAPI(query) };
},
}),
]);
// Shopping cart management
const cartContext = context({
type: "cart",
schema: z.object({ sessionId: z.string() }),
create: () => ({ items: [], total: 0 }),
}).setActions([
action({
name: "addToCart",
description: "Add item to shopping cart",
schema: z.object({ productId: z.string(), quantity: z.number() }),
handler: async ({ productId, quantity }, ctx) => {
const product = await getProduct(productId);
ctx.memory.items.push({ productId, quantity, price: product.price });
ctx.memory.total += product.price * quantity;
return { success: true, cartTotal: ctx.memory.total };
},
}),
]);
// VIP customer perks
const vipContext = context({
type: "vip",
schema: z.object({ customerId: z.string() }),
create: () => ({ discountRate: 0.1, freeShipping: true }),
}).setActions([
action({
name: "applyVipDiscount",
description: "Apply VIP customer discount",
schema: z.object({ amount: z.number() }),
handler: async ({ amount }, ctx) => {
const discounted = amount * (1 - ctx.memory.discountRate);
return { originalAmount: amount, discountedAmount: discounted };
},
}),
]);
// Main shopping assistant that composes everything
const shoppingAssistant = context({
type: "shopping-assistant",
schema: z.object({
customerId: z.string(),
sessionId: z.string(),
storeId: z.string(),
customerTier: z.enum(["regular", "vip"]),
}),
create: () => ({ conversationStarted: Date.now() }),
})
.use((state) => [
// Always include catalog and cart
{ context: catalogContext, args: { storeId: state.args.storeId } },
{ context: cartContext, args: { sessionId: state.args.sessionId } },
// Include VIP features only for VIP customers
state.args.customerTier === "vip"
? { context: vipContext, args: { customerId: state.args.customerId } }
: null,
].filter(Boolean))
.instructions((state) => {
const baseInstructions = "You are a helpful shopping assistant. You can search products and manage the cart.";
if (state.args.customerTier === "vip") {
return baseInstructions + " This customer is VIP - offer premium service and apply discounts.";
}
return baseInstructions + " Mention our VIP program if appropriate.";
});
// This assistant can now:
// ✅ Search products across the store catalog
// ✅ Manage shopping cart items and totals
// ✅ Apply VIP discounts (only for VIP customers)
// ✅ Provide personalized experience based on customer tier
// ✅ All actions work together seamlessly
```
## The Problem: Agents Need to Remember Different Things
Without contexts, your agent mixes everything together:
```text title="confused-agent.txt"
User Alice: "My favorite color is blue"
User Bob: "What's Alice's favorite color?"
Agent: "Alice's favorite color is blue"
// ❌ Bob shouldn't see Alice's private info!
User in Game A: "Go north"
User in Game B: "What room am I in?"
Agent: "You went north" (from Game A!)
// ❌ Wrong game state mixed up!
Project Alpha discussion mixed with Project Beta tasks
// ❌ Complete chaos!
```
## The Solution: Contexts Separate Everything
With contexts, each conversation/session/game stays separate:
```text title="organized-agent.txt"
Alice's Chat Context:
- Alice: "My favorite color is blue"
- Agent remembers: Alice likes blue
Bob's Chat Context:
- Bob: "What's Alice's favorite color?"
- Agent: "I don't have information about Alice"
// ✅ Privacy maintained!
Game A Context:
- Player went north → remembers current room
Game B Context:
- Separate game state → different room
// ✅ No mixing of game states!
```
## How Contexts Work in Your Agent
### 1. You Define Different Context Types
```typescript title="define-contexts.ts"
import { createDreams } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [
chatContext, // For user conversations
gameContext, // For game sessions
projectContext, // For project management
],
});
```
### 2. Inputs Route to Specific Context Instances
```typescript title="context-routing.ts"
// Discord input routes to chat contexts
discordInput.subscribe((send, agent) => {
discord.on("message", (msg) => {
// Each user gets their own chat context instance
send(
chatContext,
{ userId: msg.author.id },
{
content: msg.content,
}
);
});
});
// Game input routes to game contexts
gameInput.subscribe((send, agent) => {
gameServer.on("move", (event) => {
// Each game gets its own context instance
send(
gameContext,
{ gameId: event.gameId },
{
action: event.action,
}
);
});
});
```
### 3. Agent Maintains Separate Memory
```text title="context-instances.txt"
Chat Context Instances:
- chat:alice → { messages: [...], preferences: {...} }
- chat:bob → { messages: [...], preferences: {...} }
- chat:carol → { messages: [...], preferences: {...} }
Game Context Instances:
- game:session1 → { health: 80, level: 3, room: "forest" }
- game:session2 → { health: 100, level: 1, room: "start" }
- game:session3 → { health: 45, level: 7, room: "dungeon" }
All completely separate!
```
## Creating Your First Context
Here's a simple todo list context:
```typescript title="todo-context.ts"
import { context } from "@daydreamsai/core";
import * as z from "zod";
// Define what this context remembers
interface TodoMemory {
tasks: { id: string; title: string; done: boolean }[];
createdAt: string;
}
export const todoContext = context({
// Type identifies this kind of context
type: "todo",
// Schema defines how to identify specific instances
schema: z.object({
listId: z.string().describe("Unique ID for this todo list"),
}),
// Create initial memory when first accessed
create: (): TodoMemory => ({
tasks: [],
createdAt: new Date().toISOString(),
}),
// How this context appears to the LLM
render: (state) => {
const { tasks } = state.memory;
const pending = tasks.filter((t) => !t.done).length;
const completed = tasks.filter((t) => t.done).length;
return `
Todo List: ${state.args.listId}
Tasks: ${pending} pending, ${completed} completed
Recent tasks:
${tasks
.slice(-5)
.map((t) => `${t.done ? "✅" : "⏳"} ${t.title}`)
.join("\n")}
`;
},
// Instructions for the LLM when this context is active
instructions:
"Help the user manage their todo list. You can add, complete, and list tasks.",
});
```
Use it in your agent:
```typescript title="agent-with-todo.ts"
import { createDreams } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
const agent = createDreams({
model: openai("gpt-4o"),
contexts: [todoContext],
});
// Now users can have separate todo lists:
// todo:work → Work tasks
// todo:personal → Personal tasks
// todo:shopping → Shopping list
// Each maintains separate state!
```
## Context Memory: What Gets Remembered
Context memory persists between conversations:
```typescript title="memory-example.ts"
// First conversation
User: "Add 'buy milk' to my shopping list"
Agent: → todoContext(listId: "shopping")
→ memory.tasks.push({id: "1", title: "buy milk", done: false})
→ "Added 'buy milk' to your shopping list"
// Later conversation (hours/days later)
User: "What's on my shopping list?"
Agent: → todoContext(listId: "shopping")
→ Loads saved memory: {tasks: [{title: "buy milk", done: false}]}
→ "You have 'buy milk' on your shopping list"
// ✅ Context remembered the task across conversations!
```
## Multiple Contexts in One Agent
Your agent can work with multiple contexts, each maintaining separate state:
```typescript title="multi-context-usage.ts"
// User sends message to chat context
await agent.send({
context: chatContext,
args: { userId: "alice" },
input: { type: "text", data: "Add 'finish project' to my work todo list" }
});
// Later, user queries their todo list directly
await agent.send({
context: todoContext,
args: { listId: "work" },
input: { type: "text", data: "What's on my list?" }
});
// Or the same user in a different chat context
await agent.send({
context: chatContext,
args: { userId: "alice" }, // Same user, same context instance
input: { type: "text", data: "How was your day?" }
});
```
Each context maintains completely separate memory:
* `chat:alice` remembers Alice's conversation history
* `todo:work` remembers work-related tasks
* `todo:personal` would be a separate todo list
* Each operates independently with its own actions and memory
## Advanced: Context-Specific Actions
You can attach actions that only work in certain contexts:
```typescript title="context-specific-actions.ts"
import { context, action } from "@daydreamsai/core";
import * as z from "zod";
const todoContextWithActions = todoContext.setActions([
action({
name: "add-task",
description: "Adds a new task to the todo list",
schema: z.object({
title: z.string(),
}),
handler: async ({ title }, ctx) => {
// ctx.memory is automatically typed as TodoMemory!
const newTask = {
id: crypto.randomUUID(),
title,
done: false,
};
ctx.memory.tasks.push(newTask);
return {
success: true,
taskId: newTask.id,
message: `Added "${title}" to the list`,
};
},
}),
action({
name: "complete-task",
description: "Marks a task as completed",
schema: z.object({
taskId: z.string(),
}),
handler: async ({ taskId }, ctx) => {
const task = ctx.memory.tasks.find((t) => t.id === taskId);
if (!task) {
return { success: false, message: "Task not found" };
}
task.done = true;
return {
success: true,
message: `Completed "${task.title}"`,
};
},
}),
]);
```
Now these actions only appear when the todo context is active!
## Context Lifecycle
Contexts have hooks for different stages:
```typescript title="context-lifecycle.ts"
const advancedContext = context({
type: "advanced",
schema: z.object({ sessionId: z.string() }),
// Called when context instance is first created
create: () => ({
startTime: Date.now(),
interactions: 0,
}),
// Called once during context setup (before first use)
setup: async (args, settings, agent) => {
agent.logger.info(`Setting up session: ${args.sessionId}`);
return {
createdBy: "system",
setupTime: Date.now()
};
},
// Called before each LLM step
onStep: async (ctx) => {
ctx.memory.interactions++;
},
// Called when a conversation/run completes
onRun: async (ctx) => {
const duration = Date.now() - ctx.memory.startTime;
console.log(`Session completed in ${duration}ms`);
},
// Called if there's an error during execution
onError: async (error, ctx) => {
console.error(`Error in session ${ctx.id}:`, error);
},
// Custom save function (optional)
save: async (state) => {
// Custom logic to save context state
console.log(`Saving context ${state.id}`);
},
// Custom load function (optional)
load: async (id, options) => {
// Custom logic to load context memory
console.log(`Loading context ${id}`);
return { startTime: Date.now(), interactions: 0 };
},
});
```
## Advanced Context Features
### Custom Context Keys
By default, context instances use `type:key` format. You can customize key generation:
```typescript title="custom-keys.ts"
const customContext = context({
type: "user-session",
schema: z.object({
userId: z.string(),
sessionType: z.string()
}),
// Custom key function to create unique IDs
key: (args) => `${args.userId}-${args.sessionType}`,
create: () => ({ data: {} })
});
// This creates context IDs like:
// user-session:alice-support
// user-session:bob-sales
// user-session:carol-general
```
### Dynamic Instructions
Instructions can be functions that adapt based on context state:
```typescript title="dynamic-instructions.ts"
const adaptiveContext = context({
type: "adaptive",
schema: z.object({ userTier: z.string() }),
create: () => ({ features: [] }),
instructions: (state) => {
const base = "You are a helpful assistant.";
if (state.args.userTier === "premium") {
return base + " You have access to advanced features and priority support.";
}
return base + " Let me know if you'd like to upgrade for more features!";
}
});
```
### Context Settings & Model Overrides
Contexts can override agent-level settings:
```typescript title="context-settings.ts"
const specializedContext = context({
type: "specialized",
// Override the agent's model for this context
model: openai("gpt-4o"),
// Context-specific model settings
modelSettings: {
temperature: 0.1, // More focused responses
maxTokens: 2000,
},
// Limit LLM steps for this context
maxSteps: 5,
// Limit working memory size
maxWorkingMemorySize: 1000,
create: () => ({ specialized: true })
});
```
### Context Composition
Contexts can include other contexts for modular functionality:
```typescript title="context-composition.ts"
const analyticsContext = context({
type: "analytics",
schema: z.object({ userId: z.string() }),
create: () => ({ events: [] })
});
const composedContext = context({
type: "main",
schema: z.object({ userId: z.string() }),
create: () => ({ data: {} })
})
// Include analytics functionality
.use((state) => [
{ context: analyticsContext, args: { userId: state.args.userId } }
]);
// Now composedContext has access to analytics actions and memory
```
## Best Practices
### 1. Design Clear Boundaries
```typescript title="good-context-design.ts"
// ✅ Good - clear, specific purpose
const userProfileContext = context({
type: "user-profile",
schema: z.object({ userId: z.string() }),
// Manages user preferences, settings, history
});
const orderContext = context({
type: "order",
schema: z.object({ orderId: z.string() }),
// Manages specific order state, items, shipping
});
// ❌ Bad - too broad, unclear purpose
const stuffContext = context({
type: "stuff",
schema: z.object({ id: z.string() }),
// What does this manage? Everything? Nothing clear.
});
```
### 2. Keep Memory Structures Simple
```typescript title="good-memory-structure.ts"
// ✅ Good - clear, simple structure
interface ChatMemory {
messages: Array<{
sender: "user" | "agent";
content: string;
timestamp: number;
}>;
userPreferences: {
language?: string;
timezone?: string;
};
}
// ❌ Bad - overly complex, nested
interface OverComplexMemory {
data: {
nested: {
deeply: {
structured: {
confusing: {
memory: any;
};
};
};
};
};
}
```
### 3. Write Helpful Render Functions
```typescript title="good-render-function.ts"
// ✅ Good - concise, relevant information
render: (state) => `
Shopping Cart: ${state.args.cartId}
Items: ${state.memory.items.length}
Total: $${state.memory.total.toFixed(2)}
Recent items:
${state.memory.items
.slice(-3)
.map((item) => `- ${item.name} ($${item.price})`)
.join("\n")}
`;
// ❌ Bad - too much information, overwhelming
render: (state) => JSON.stringify(state.memory, null, 2); // Dumps everything!
```
### 4. Use Descriptive Schema
```typescript title="good-schema.ts"
// ✅ Good - clear descriptions
schema: z.object({
userId: z.string().uuid().describe("Unique identifier for the user"),
sessionType: z
.enum(["support", "sales", "general"])
.describe("Type of support session"),
});
// ❌ Bad - no descriptions, unclear
schema: z.object({
id: z.string(),
type: z.string(),
});
```
## Key Takeaways
* **Contexts separate state** - Each conversation/session/game gets its own isolated memory
* **Instance-based** - Same context type, different instances for different users/sessions
* **Memory persists** - State is automatically saved and restored between conversations
* **Type-safe** - Full TypeScript support for memory, args, and actions
* **Lifecycle hooks** - `setup`, `onStep`, `onRun`, `onError` for custom behavior
* **Custom key generation** - Control how context instances are identified
* **Model overrides** - Each context can use different models and settings
* **Dynamic instructions** - Instructions can adapt based on context state
* **Context composition** - Use `.use()` to combine contexts for complex behaviors
* **Custom save/load** - Override default persistence with custom logic
* **Context-specific actions** - Actions only available when context is active
Contexts provide isolated, stateful workspaces that enable sophisticated agent behaviors while keeping data separate and organized. They're essential for building agents that can handle multiple simultaneous conversations, games, projects, or any scenario requiring persistent state management.
file: ./content/docs/core/concepts/episode-export.mdx
meta: {
"title": "Episode Export",
"description": "Export conversation episodes to various formats for analysis, backup, or integration"
}
## Quick Start
Export conversation episodes from your agent's memory to JSON or Markdown:
```typescript title="export-episodes.ts"
// Export recent episodes to JSON
const episodes = await agent.memory.episodes.getByContext('context:user-123');
const result = await agent.exports.export({
episodes,
exporter: 'json',
options: { pretty: true }
});
// Save to file
fs.writeFileSync('conversation-history.json', result.metadata.content);
```
## What is Episode Export?
Episode export provides a mechanism to extract conversation data from the agent's memory system into portable formats. Each episode represents a complete interaction cycle (input → processing → output) with associated metadata, timestamps, and context.
The export system transforms the internal episode structure into standard formats like JSON or human-readable Markdown, with support for filtering, sanitization, and custom transformations.
## Export Architecture
The export system consists of three core components:
### 1. Export Manager
The `ExportManager` coordinates export operations and manages registered exporters:
```typescript title="using-export-manager.ts"
// Access the export manager
const exportManager = agent.exports;
// List available exporters
const exporters = exportManager.listExporters();
// [
// { name: 'json', formats: ['json', 'jsonl'] },
// { name: 'markdown', formats: ['md', 'markdown'] }
// ]
// Export with specific format
const result = await exportManager.export({
episodes: myEpisodes,
exporter: 'json',
format: 'jsonl', // JSON Lines format
});
```
### 2. Episode Structure
Episodes contain structured conversation data:
```typescript title="episode-structure.ts"
interface Episode {
id: string;
type: "conversation" | "action" | "event" | "compression";
input?: any; // User input
output?: any; // Agent response
context: string; // Context identifier
timestamp: number; // Unix timestamp
duration?: number; // Processing time in ms
  metadata?: Record<string, any>;
summary?: string; // Optional summarization
}
```
### 3. Export Result
All exporters return a standardized result:
```typescript title="export-result.ts"
interface ExportResult {
success: boolean;
location?: string; // 'memory' for in-memory results
format: string; // 'json', 'jsonl', 'md', etc.
size?: number; // Content size in bytes
metadata?: {
content: string; // The exported content
episodeCount?: number; // Number of episodes exported
};
error?: Error; // Error details if failed
}
```
## JSON Export
The JSON exporter supports two formats:
### Standard JSON Array
```typescript title="json-export.ts"
const result = await agent.exports.export({
episodes,
exporter: 'json',
options: {
pretty: true // Pretty print with indentation
}
});
// Result contains array of episodes
const exported = JSON.parse(result.metadata.content);
```
### JSON Lines (JSONL)
For streaming or large datasets:
```typescript title="jsonl-export.ts"
const result = await agent.exports.export({
episodes,
exporter: 'json',
options: { format: 'jsonl' }
});
// Each line is a complete JSON object
result.metadata.content.split('\n').forEach(line => {
const episode = JSON.parse(line);
processEpisode(episode);
});
```
## Markdown Export
Generate human-readable conversation logs:
```typescript title="markdown-export.ts"
const result = await agent.exports.export({
episodes,
exporter: 'markdown',
options: {
includeMetadata: true, // Include metadata section
includeTimestamps: true, // Show timestamps
separator: '\n---\n' // Between episodes
}
});
// Save as markdown file
fs.writeFileSync('conversation.md', result.metadata.content);
```
Example output:
````markdown
# Episode: 7f3a2b1c-4d5e-6789
**Type**: conversation
**Date**: 2024-01-15T10:30:00.000Z
**Duration**: 1.2s
**Context**: user:123
## Conversation
### User
How do I export episodes?
### Assistant
You can export episodes using the export manager...
## Metadata
```json
{
"model": "gpt-4",
"temperature": 0.7
}
```
````
## Data Transformation
Apply transformations during export:
### Field Filtering
```typescript title="field-filtering.ts"
// Include only specific fields
const result = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
fields: {
include: ['id', 'type', 'input', 'output', 'timestamp']
}
}
});
// Or exclude sensitive fields
const excluded = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
fields: {
exclude: ['metadata', 'context']
}
}
});
```
### Custom Sanitization
```typescript title="sanitization.ts"
const result = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
sanitize: (episode) => ({
...episode,
input: redactPII(episode.input),
output: redactPII(episode.output),
metadata: undefined // Remove all metadata
})
}
});
function redactPII(content: any): any {
if (typeof content === 'string') {
return content
.replace(/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/gi, '[EMAIL]')
.replace(/\b\d{3}[-.]?\d{3}[-.]?\d{4}\b/g, '[PHONE]');
}
return content;
}
```
### Sorting
```typescript title="sorting.ts"
const result = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
sortBy: 'timestamp',
sortOrder: 'desc' // Most recent first
}
});
```
## Creating Custom Exporters
Implement the `EpisodeExporter` interface:
```typescript title="custom-csv-exporter.ts"
import { EpisodeExporter, ExportResult, Episode } from '@daydreamsai/core';
interface CSVOptions {
delimiter?: string;
headers?: boolean;
}
class CSVExporter implements EpisodeExporter<CSVOptions> {
name = 'csv';
description = 'Export episodes as CSV';
formats = ['csv', 'tsv'];
async exportEpisode(
episode: Episode,
options?: CSVOptions
): Promise<ExportResult> {
const delimiter = options?.delimiter || ',';
const row = [
episode.id,
episode.type,
episode.timestamp,
JSON.stringify(episode.input || ''),
JSON.stringify(episode.output || '')
].join(delimiter);
return {
success: true,
format: 'csv',
metadata: { content: row }
};
}
async exportBatch(
episodes: Episode[],
options?: CSVOptions
): Promise<ExportResult> {
const delimiter = options?.delimiter || ',';
const rows: string[] = [];
if (options?.headers !== false) {
rows.push(['id', 'type', 'timestamp', 'input', 'output'].join(delimiter));
}
episodes.forEach(episode => {
rows.push([
episode.id,
episode.type,
episode.timestamp.toString(),
JSON.stringify(episode.input || ''),
JSON.stringify(episode.output || '')
].join(delimiter));
});
return {
success: true,
format: 'csv',
size: rows.join('\n').length,
metadata: {
content: rows.join('\n'),
episodeCount: episodes.length
}
};
}
}
// Register the custom exporter
agent.exports.registerExporter(new CSVExporter());
// Use it
const result = await agent.exports.export({
episodes,
exporter: 'csv',
options: { delimiter: '\t' } // Tab-separated
});
```
## Best Practices
### ✅ DO: Batch Operations
```typescript title="good-batch.ts"
// Export all episodes for a context at once
const episodes = await agent.memory.episodes.getByContext(contextId);
const result = await agent.exports.export({
episodes,
exporter: 'json'
});
```
### ❌ DON'T: Export One by One
```typescript title="bad-individual.ts"
// Inefficient - multiple export calls
for (const episode of episodes) {
const result = await agent.exports.export({
episodes: [episode],
exporter: 'json'
});
// Process each result...
}
```
### ✅ DO: Handle Errors
```typescript title="good-error-handling.ts"
const result = await agent.exports.export({
episodes,
exporter: 'json'
});
if (!result.success) {
console.error('Export failed:', result.error);
// Handle error appropriately
} else {
// Process successful export
await saveToStorage(result.metadata.content);
}
```
### ✅ DO: Sanitize Sensitive Data
```typescript title="good-sanitization.ts"
const result = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
sanitize: (episode) => ({
...episode,
metadata: {
...episode.metadata,
apiKey: undefined,
userEmail: undefined
}
})
}
});
```
## Performance Considerations
### Memory Usage
Large exports are held in memory:
```typescript title="streaming-export.ts"
// For very large datasets, process in chunks
const batchSize = 1000;
const allResults: string[] = [];
for (let offset = 0; offset < totalEpisodes; offset += batchSize) {
const batch = await agent.memory.episodes.query({
limit: batchSize,
offset
});
const result = await agent.exports.export({
episodes: batch,
exporter: 'json',
options: { format: 'jsonl' }
});
allResults.push(result.metadata.content);
}
// Combine results
const finalContent = allResults.join('\n');
```
### Transformation Performance
Transformations are applied sequentially:
```typescript title="transformation-order.ts"
// Order matters - filter first to reduce processing
const result = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
fields: { include: ['id', 'type', 'timestamp'] }, // 1. Filter fields
sanitize: (e) => ({ ...e, type: e.type.toUpperCase() }), // 2. Then transform
sortBy: 'timestamp' // 3. Finally sort
}
});
```
## Real-World Usage
### Automated Backups
```typescript title="automated-backup.ts"
import { CronJob } from 'cron';
// Daily backup of all conversations
const backupJob = new CronJob('0 0 * * *', async () => {
const yesterday = new Date();
yesterday.setDate(yesterday.getDate() - 1);
const episodes = await agent.memory.episodes.getTimeline(
yesterday,
new Date()
);
const result = await agent.exports.export({
episodes,
exporter: 'json',
options: { format: 'jsonl' },
transform: {
sortBy: 'timestamp',
sortOrder: 'asc'
}
});
if (result.success) {
const filename = `backup-${yesterday.toISOString().split('T')[0]}.jsonl`;
await uploadToS3(filename, result.metadata.content);
}
});
backupJob.start();
```
### Analytics Export
```typescript title="analytics-export.ts"
// Export for analytics processing
async function exportForAnalytics(contextId: string) {
const episodes = await agent.memory.episodes.getByContext(contextId);
const result = await agent.exports.export({
episodes,
exporter: 'json',
transform: {
fields: {
include: ['id', 'type', 'timestamp', 'duration', 'metadata']
},
sanitize: (episode) => ({
...episode,
// Extract only analytics-relevant metadata
metadata: {
model: episode.metadata?.model,
tokenCount: episode.metadata?.tokenCount,
errorCount: episode.metadata?.errorCount
}
})
}
});
// Send to analytics pipeline
await sendToAnalytics(result.metadata.content);
}
```
### Compliance Export
```typescript title="compliance-export.ts"
// GDPR data export for user
async function exportUserData(userId: string) {
const userContexts = await agent.getContexts();
const userEpisodes: Episode[] = [];
for (const ctx of userContexts) {
if (ctx.id.includes(userId)) {
const episodes = await agent.memory.episodes.getByContext(ctx.id);
userEpisodes.push(...episodes);
}
}
// Export with full PII redaction for other users
const result = await agent.exports.export({
episodes: userEpisodes,
exporter: 'json',
transform: {
sanitize: (episode) => sanitizeForUser(episode, userId),
sortBy: 'timestamp',
sortOrder: 'asc'
}
});
return result.metadata.content;
}
```
file: ./content/docs/core/concepts/episodes.mdx
meta: {
"title": "Episodes",
"description": "How Daydreams captures, stores, indexes, and retrieves conversational episodes, plus how to customize episode boundaries and metadata."
}
## Overview
Episodes are coherent spans of interaction that bundle the important parts of a conversation cycle: inputs, internal thoughts, tool calls/results, and outputs. They provide a compact, searchable record of what happened and why.
Daydreams collects logs from working memory into episodes, stores them in key–value storage, and indexes summaries (and optionally the logs) into vector memory for retrieval.
## When Episodes Form
By default, Daydreams starts an episode when a meaningful interaction begins and ends it once the agent responds or completes a significant step.
You can customize boundaries with `EpisodeHooks`:
* `shouldStartEpisode(ref, workingMemory, contextState, agent)`
* `shouldEndEpisode(ref, workingMemory, contextState, agent)`
This lets you define domain‑specific rules, e.g., start on the first `input`, end after the final `output` or an `action_result` threshold.
## What Gets Stored
Each episode contains:
* `id`, `contextId`, `type`, `timestamp`, `startTime`, `endTime`, `duration`
* `summary`: a concise natural‑language description of the episode
* `logs`: the included working‑memory refs (e.g., `input`, `output`, `action_*`, `event`)
* `metadata`: any additional fields you derive
* Optional `input`/`output` fields if you extract them in `createEpisode`
The set of included refs can be controlled via `EpisodeHooks.includeRefs`.
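For illustration, a stored episode might look roughly like this sketch (log entries are simplified to a `ref` plus a content field; real logs carry the full working-memory refs):
```ts title="episode-shape.ts"
// Illustrative only - field values are invented for the example
const episode = {
  id: "ep_01",
  contextId: "chat:alice",
  type: "conversation",
  timestamp: 1700000000000,
  startTime: 1700000000000,
  endTime: 1700000004200,
  duration: 4200, // ms
  summary: "User asked about refunds; agent explained the 30-day policy",
  logs: [
    { ref: "input", content: "Can I get a refund?" },
    { ref: "output", content: "Yes - within 30 days of purchase." },
  ],
  metadata: { topic: "refunds" },
};
```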
## Indexing and Retrieval
Episodes are indexed for similarity search under the namespace `episodes:<contextId>`.
Indexing policy (configurable via `EpisodicMemoryOptions.indexing`; a sketch follows the list):
* `enabled`: turn indexing on/off (default: on)
* `contentMode`: `summary` | `logs` | `summary+logs` (default: `summary+logs`)
* `chunk`: naive chunking for long logs (size/overlap)
* `aggregateNamespaces`: also index into extra namespaces (e.g., org/global)
* `salience` and `tags`: compute and store helpful metadata
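For example, a minimal sketch of an indexing configuration (option names follow the list above; where you pass `EpisodicMemoryOptions` depends on how you construct your memory system, and the `chunk` shape is an assumption):
```ts title="episode-indexing.ts"
const episodicOptions = {
  indexing: {
    enabled: true,                // index episodes for similarity search
    contentMode: "summary+logs",  // index summaries plus conversation logs
    chunk: { size: 1000, overlap: 100 },       // naive chunking for long logs (assumed shape)
    aggregateNamespaces: ["episodes:global"],  // also index into an extra namespace
  },
};
```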
Retrieval APIs:
```ts title="episode-retrieval.ts"
// Get most recent episodes for a context
const recent = await agent.memory.episodes.getByContext(ctx.id, 20);
// Fetch a specific episode
const ep = await agent.memory.episodes.get("<episode-id>");
// Find similar episodes via vector search
const similar = await agent.memory.episodes.findSimilar(ctx.id, "refund policy", 5);
```
## Customizing Episode Content
Use hooks to shape episode data:
* `createEpisode(logs, contextState, agent)`: build the episode payload. You can return a `CreateEpisodeResult` to specify `summary`, `logs`, `input`, `output`, and additional `metadata`.
* `classifyEpisode(episodeData)`: set a `type` label (e.g., `conversation`, `action`, `task`).
* `extractMetadata(episodeData, logs)`: attach custom metadata (IDs, tags, scores).
Example:
```ts title="episode-hooks.ts"
const hooks: EpisodeHooks = {
shouldStartEpisode: (ref) => ref.ref === "input",
shouldEndEpisode: (ref) => ref.ref === "output" || ref.ref === "action_result",
createEpisode: (logs) => ({
summary: "User asked for pricing; agent explained tiers",
metadata: { product: "pro" },
}),
classifyEpisode: () => "conversation",
includeRefs: ["input", "output", "action_call", "action_result", "event"],
};
// Configure episodic memory when creating the agent
const agent = createDreams({
// ...
contexts: [/* ... */],
// Episodic memory options can be passed where you construct MemorySystem
});
```
## Size, Retention, and Cleanup
Control growth with these options (a combined sketch follows the list):
* `maxEpisodesPerContext`: cap stored episodes per context (oldest are pruned)
* `minEpisodeGap`: avoid storing near‑duplicate episodes in rapid succession
* `memory.forget(...)`: remove vector/KV entries by pattern, context, or age
* `memory.episodes.clearContext(contextId)`: drop all episodes for a context
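A sketch combining these options with the cleanup APIs above (the retention values are arbitrary; where `EpisodicMemoryOptions` is supplied depends on your memory setup):
```ts title="episode-retention.ts"
const episodicOptions = {
  maxEpisodesPerContext: 200, // prune the oldest episodes beyond this cap
  minEpisodeGap: 5_000,       // ms between episodes; skips near-duplicates
};

// Periodic cleanup
await agent.memory.forget({
  olderThan: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
});
await agent.memory.episodes.clearContext("chat:alice");
```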
## Exporting Episodes
Use the export manager to produce JSON/JSONL or Markdown for analysis, backups, or dataset generation.
```ts title="export-episodes.ts"
const episodes = await agent.memory.episodes.getByContext(ctx.id);
const result = await agent.exports.export({ episodes, exporter: "json", options: { pretty: true } });
```
See concept guide: [Episode Export](/docs/core/concepts/episode-export)
## Further Reading
* API reference: [Episodic Memory](/docs/api/EpisodicMemory)
* Hooks reference: [Episode Hooks](/docs/api/EpisodeHooks)
* Export manager: [ExportManager](/docs/api/ExportManager)
file: ./content/docs/core/concepts/extensions-vs-services.mdx
meta: {
"title": "Extensions vs Services",
"description": "When to use extensions vs services in Daydreams."
}
## Quick Comparison
**Services** = Infrastructure ("how to connect")
**Extensions** = Features ("what agent can do")
| | Service | Extension |
| ------------- | --------------------------- | ---------------------------------- |
| **Purpose** | Manage infrastructure | Provide complete features |
| **Contains** | API clients, DB connections | Actions, contexts, inputs, outputs |
| **Used by** | Multiple extensions | Agents directly |
| **Analogy** | Power supply, motherboard | Complete software package |
| **Lifecycle** | `register()` → `boot()` | `install()` when added |
## When to Use Each
### Create a Service When:
* Managing external connections (database, API clients)
* Sharing utilities across multiple extensions
* Handling lifecycle management (startup, shutdown)
* Environment-based configuration
### Create an Extension When:
* Bundling complete feature set for a domain
* Adding platform support (Discord, Twitter, etc.)
* Packaging related actions/contexts/inputs/outputs
* Building reusable functionality for agents
## Real-World Example
### Service: Discord Client Management
```typescript title="discord-service.ts"
const discordService = service({
name: "discord",
register: (container) => {
container.singleton("discordClient", () => new Client({
token: process.env.DISCORD_TOKEN,
intents: [GatewayIntentBits.Guilds]
}));
},
boot: async (container) => {
await container.resolve("discordClient").login();
},
});
```
### Extension: Complete Discord Integration
```typescript title="discord-extension.ts"
const discord = extension({
name: "discord",
services: [discordService], // Uses the service for client
contexts: [discordContext], // Server/channel state
actions: [sendMessage, createChannel], // What agent can do
inputs: [messageListener], // Listen for messages
outputs: [messageReply], // Send responses
});
```
## Complete Example: Trading System
```typescript title="trading-system.ts"
// 1. Services handle API connections
const alpacaService = service({
name: "alpaca",
register: (container) => {
container.singleton("alpacaClient", () => new AlpacaApi({
key: process.env.ALPACA_KEY,
secret: process.env.ALPACA_SECRET,
}));
},
boot: async (container) => {
await container.resolve("alpacaClient").authenticate();
},
});
const marketDataService = service({
name: "marketData",
register: (container) => {
container.singleton("marketClient", () => new MarketDataClient(process.env.MARKET_KEY));
},
});
// 2. Extension bundles all trading features
const trading = extension({
name: "trading",
services: [alpacaService, marketDataService], // Infrastructure
contexts: [portfolioContext, watchlistContext], // State management
actions: [buyStock, sellStock, getQuote], // Capabilities
inputs: [priceAlerts], // Event listening
outputs: [orderConfirmation], // Notifications
});
// 3. Agent gets complete trading functionality
const agent = createDreams({
model: openai("gpt-4o"),
extensions: [trading], // One line = full trading system
});
```
## Architecture Flow
```text title="architecture.txt"
Extension Layer (What Agent Can Do)
├── Contexts: portfolio, watchlist state
├── Actions: buy, sell, get quotes
├── Inputs: listen for price alerts
└── Outputs: send confirmations
Service Layer (How to Connect)
├── alpacaService: trading API client
└── marketDataService: market data client
Execution Flow:
1. Services register and boot (API connections)
2. Extension components merge into agent
3. LLM can use all features automatically
4. Shared infrastructure across all actions
```
## Design Guidelines
### Services Should:
* Focus on single infrastructure domain
* Provide clean abstractions for external systems
* Handle connection lifecycle and configuration
* Be reusable across multiple extensions
### Extensions Should:
* Bundle cohesive feature sets
* Include everything needed for the domain
* Use services for infrastructure needs
* Provide complete agent capabilities
## Key Takeaways
* **Services = Infrastructure** - API clients, databases, utilities
* **Extensions = Features** - Complete domain functionality
* **Clear separation** - "How to connect" vs "What agent can do"
* **Composition** - Extensions use services, agents use extensions
* **Reusability** - Services shared across extensions, extensions shared across agents
## See Also
* [Extensions](/docs/core/concepts/extensions) - Building feature packages
* [Services](/docs/core/concepts/services) - Infrastructure management
file: ./content/docs/core/concepts/extensions.mdx
meta: {
"title": "Extensions",
"description": "Building modular feature packages for Daydreams agents."
}
## What Are Extensions?
Extensions are **feature packages** that bundle everything needed for a specific capability. Like installing an app on your phone, extensions add complete functionality with one import.
## Extension Architecture
Extensions bundle four types of components:
```typescript title="extension-structure.ts"
const weatherExtension = extension({
name: "weather",
services: [weatherService], // Infrastructure (API clients, DB connections)
contexts: [weatherContext], // Stateful workspaces
actions: [getWeatherAction], // What agent can do
inputs: [weatherAlertInput], // How to listen for events
outputs: [weatherNotifyOutput], // How to send responses
});
```
## Usage Example
```typescript title="using-extensions.ts"
import { createDreams } from "@daydreamsai/core";
import { discord, weather, trading } from "./extensions";
const agent = createDreams({
model: openai("gpt-4o"),
extensions: [discord, weather, trading],
});
// Agent now has:
// - Discord messaging (inputs/outputs/contexts)
// - Weather data (actions/contexts)
// - Trading capabilities (actions/contexts/services)
```
## Building an Extension
```typescript title="complete-weather-extension.ts"
// 1. Service for API management
const weatherService = service({
name: "weather",
register: (container) => {
container.singleton("weatherClient", () => new WeatherAPI({
apiKey: process.env.WEATHER_API_KEY,
baseUrl: "https://api.openweathermap.org/data/2.5"
}));
},
boot: async (container) => {
await container.resolve("weatherClient").connect();
},
});
// 2. Context for user preferences
const weatherContext = context({
type: "weather-prefs",
schema: z.object({ userId: z.string() }),
create: () => ({ defaultLocation: null, units: "metric" }),
});
// 3. Actions for functionality
const getWeatherAction = action({
name: "get-weather",
description: "Get current weather for a location",
schema: z.object({ location: z.string() }),
handler: async ({ location }, ctx) => {
const client = ctx.container.resolve("weatherClient");
const weather = await client.getCurrentWeather(location);
ctx.memory.lastChecked = Date.now();
return { temperature: weather.temp, condition: weather.condition };
},
});
// 4. Bundle into extension
export const weather = extension({
name: "weather",
services: [weatherService],
contexts: [weatherContext],
actions: [getWeatherAction],
});
```
## Extension Lifecycle
```text title="extension-lifecycle.txt"
1. Agent Creation → Extensions registered
2. agent.start() called:
├── Services registered (define dependencies)
├── Services booted (connect to APIs/DBs)
├── Components merged (actions, contexts, inputs, outputs)
├── extension.install() called for setup
└── Inputs start listening
3. Agent Ready → All features available to LLM
```
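For example, a minimal extension with an `install` hook might look like this sketch (`analyticsService` and `trackEvent` are placeholders, and the hook signature is assumed from the lifecycle above):
```typescript title="install-hook.ts"
import { extension } from "@daydreamsai/core";

// `analyticsService` and `trackEvent` stand in for a real service/action
const analytics = extension({
  name: "analytics",
  services: [analyticsService], // booted before install
  actions: [trackEvent],
  // Runs once during agent.start(), after services boot
  install: async (agent) => {
    agent.logger.info("analytics extension installed");
  },
});
```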
## Advanced Features
### Extension Dependencies
```typescript title="extension-dependencies.ts"
// Extensions can build on each other
const weatherDiscordBot = extension({
name: "weather-discord-bot",
// Assumes discord and weather extensions are also loaded
actions: [
action({
name: "send-weather-to-discord",
handler: async ({ channelId, location }, ctx) => {
const weatherClient = ctx.container.resolve("weatherClient");
const discordClient = ctx.container.resolve("discordClient");
const weather = await weatherClient.getCurrentWeather(location);
const channel = await discordClient.channels.fetch(channelId);
await channel.send(`🌤️ ${location}: ${weather.temp}°C`);
return { sent: true };
},
}),
],
});
// Use together
const agent = createDreams({
model: openai("gpt-4o"),
extensions: [discord, weather, weatherDiscordBot],
});
```
## Best Practices
### Single Domain Focus
```typescript
// ✅ Good - cohesive feature set
const discord = extension({ name: "discord" /* Discord-only features */ });
// ❌ Bad - mixed responsibilities
const everything = extension({ name: "mixed" /* Discord + weather + trading */ });
```
### Complete Functionality
```typescript
// ✅ Good - everything needed for the domain
const weather = extension({
services: [weatherService], // API management
contexts: [weatherContext], // User preferences
actions: [getWeather], // Core functionality
inputs: [weatherAlerts], // Event listening
outputs: [weatherNotify], // Notifications
});
```
## Publishing Extensions
```text title="extension-package-structure.txt"
my-extension/
├── src/
│ ├── index.ts # Export extension
│ ├── services/ # Infrastructure components
│ ├── contexts/ # Stateful workspaces
│ ├── actions/ # Agent capabilities
│ └── types.ts # TypeScript definitions
├── package.json
└── README.md
```
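The entry point then assembles and exports the extension; a sketch following the layout above (file names are illustrative):
```typescript title="src/index.ts"
import { extension } from "@daydreamsai/core";
import { weatherService } from "./services/weather-service";
import { weatherContext } from "./contexts/weather-context";
import { getWeather } from "./actions/get-weather";

// Consumers import one object and get the whole feature set
export const weather = extension({
  name: "weather",
  services: [weatherService],
  contexts: [weatherContext],
  actions: [getWeather],
});
```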
```json title="package.json"
{
"name": "@yourorg/daydreams-weather",
"version": "1.0.0",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"peerDependencies": {
"@daydreamsai/core": "^1.0.0"
}
}
```
## Key Takeaways
* **Extensions are feature packages** - Bundle everything needed for a capability
* **Automatic lifecycle management** - Services boot, features register seamlessly
* **Modular composition** - Combine extensions like building blocks
* **Clean agent configuration** - Add complex features with single imports
* **Reusable across projects** - Build once, share everywhere
## See Also
* [Services](/docs/core/concepts/services) - Infrastructure management layer
* [Extensions vs Services](/docs/core/concepts/extensions-vs-services) - Decision guide
file: ./content/docs/core/concepts/index.mdx
meta: {
"title": "Concepts",
"description": "The Daydreams framework enables autonomous agent behavior through core concepts that work together. This guide helps you navigate these concepts effectively."
}
The Daydreams framework enables autonomous agent behavior through core concepts
that work together. This guide helps you navigate these concepts effectively.
## Learning Path
**New to agent frameworks?** Start here:
1. [Building Blocks](/docs/core/concepts/building-blocks) - The four main
components
2. [Agent Lifecycle](/docs/core/concepts/agent-lifecycle) - How agents process
information
**Ready to build?** Dive into each component:
* [Contexts](/docs/core/concepts/contexts) - Manage state and memory
* [Actions](/docs/core/concepts/actions) - Define capabilities
* [Inputs](/docs/core/concepts/inputs) - Handle external events
* [Outputs](/docs/core/concepts/outputs) - Send responses
* [Building Block Operations](/docs/core/concepts/building-block-operations) - Common patterns
**System Architecture:**
* [Services](/docs/core/concepts/services) - Infrastructure management
* [Extensions](/docs/core/concepts/extensions) - Feature packages
* [Extensions vs Services](/docs/core/concepts/extensions-vs-services) - Decision guide
**Advanced topics:**
* [Context Composition](/docs/core/concepts/composing-contexts) - Modular functionality
* [Prompting](/docs/core/concepts/prompting) - LLM interaction structure
* [MCP Integration](/docs/core/concepts/mcp) - External service connections
file: ./content/docs/core/concepts/inputs.mdx
meta: {
"title": "Inputs",
"description": "How Daydreams agents receive information and trigger processing."
}
## What is an Input?
An input is how your agent **listens** to the outside world. Inputs trigger your agent when something happens - like a Discord message, file change, or webhook event.
For common patterns like schema validation, error handling, and external service integration, see [Building Block Operations](/docs/core/concepts/building-block-operations).
## Inputs vs Actions/Outputs
Understanding the difference:
| Building Block | Purpose | Triggers Agent | LLM Controls |
| -------------- | ---------------------------- | ------------------------ | -------------------------- |
| **Inputs** | Listen for external events | ✅ Yes - wakes up agent | ❌ No - automatic listening |
| **Actions** | Get data, perform operations | ❌ No - LLM calls them | ✅ Yes - LLM decides when |
| **Outputs** | Communicate results | ❌ No - LLM triggers them | ✅ Yes - LLM decides when |
### Input Flow
```text title="input-flow.txt"
1. External event happens (Discord message, file change, etc.)
2. Input detects the event
3. Input calls send(context, args, data) → triggers agent
4. Agent processes the input and responds
```
## Creating Your First Input
Here's a simple input that listens for file changes:
```typescript title="file-watcher-input.ts"
import { input } from "@daydreamsai/core";
import * as z from "zod";
import fs from "fs";
export const fileWatcher = input({
type: "file:watcher",
// Schema validates incoming data
schema: z.object({
filename: z.string(),
content: z.string(),
event: z.enum(["created", "modified", "deleted"]),
}),
// Subscribe function starts listening and returns cleanup
subscribe: (send, agent) => {
const watcher = fs.watch("./watched-files", (eventType, filename) => {
if (filename) {
try {
const content = fs.readFileSync(`./watched-files/${filename}`, "utf8");
send(fileContext, { filename }, {
filename,
content,
event: eventType === "rename" ? "created" : "modified",
});
} catch (error) {
send(fileContext, { filename }, {
filename,
content: "",
event: "deleted",
});
}
}
});
return () => watcher.close(); // Cleanup function
},
});
```
## Context Targeting
Inputs route data to specific context instances:
```typescript title="context-targeting.ts"
const discordInput = input({
type: "discord:message",
schema: z.object({
content: z.string(),
userId: z.string(),
channelId: z.string(),
}),
subscribe: (send, agent) => {
discord.on("messageCreate", (message) => {
// Route to user-specific chat context
send(
chatContext,
{ userId: message.author.id }, // Context args
{
content: message.content,
userId: message.author.id,
channelId: message.channel.id,
}
);
});
return () => discord.removeAllListeners("messageCreate");
},
});
```
This creates separate context instances:
* User "alice" → `chat:alice` context
* User "bob" → `chat:bob` context
* Each maintains separate memory
## Input Patterns
### Real-Time (Event-Driven) - Preferred
```typescript title="realtime-input.ts"
// ✅ Responds immediately to events
subscribe: (send, agent) => {
websocket.on("message", (data) => {
send(context, args, data);
});
return () => websocket.close();
};
```
### Polling - When Necessary
```typescript title="polling-input.ts"
// For APIs without webhooks or real-time events
subscribe: (send, agent) => {
const checkForUpdates = async () => {
const updates = await api.getUpdates();
updates.forEach(item => {
send(context, { id: item.id }, item);
});
};
const interval = setInterval(checkForUpdates, 5000);
return () => clearInterval(interval);
};
```
## Multiple Inputs
Agents can listen to multiple sources simultaneously:
```typescript title="multiple-inputs.ts"
const agent = createDreams({
model: openai("gpt-4o"),
inputs: [
discordInput, // Discord messages
slackInput, // Slack messages
webhookInput, // API webhooks
fileWatcher, // File changes
cronTrigger, // Scheduled events
],
});
// Agent responds to any input automatically
```
## Subscribe Pattern Best Practices
### Always Return Cleanup Functions
```typescript title="cleanup-pattern.ts"
// ✅ Good - proper cleanup prevents memory leaks
subscribe: (send, agent) => {
const listener = (data) => send(context, args, data);
eventSource.addEventListener("event", listener);
return () => {
eventSource.removeEventListener("event", listener);
eventSource.close();
};
};
// ❌ Bad - no cleanup causes memory leaks
subscribe: (send, agent) => {
eventSource.addEventListener("event", (data) => {
send(context, args, data);
});
return () => {}; // Nothing cleaned up!
};
```
### Context Routing
```typescript title="context-routing.ts"
subscribe: (send, agent) => {
service.on("event", (event) => {
// Route to appropriate context based on event data
switch (event.type) {
case "user_message":
send(chatContext, { userId: event.userId }, event.data);
break;
case "system_alert":
send(alertContext, { alertId: event.id }, event.data);
break;
case "game_move":
send(gameContext, { gameId: event.gameId }, event.data);
break;
}
});
return () => service.removeAllListeners("event");
};
```
## Key Takeaways
* **Inputs trigger agents** - Turn agents from one-time scripts into responsive assistants
* **Subscribe pattern** - Watch external sources, call `send()` when events occur
* **Context targeting** - Route inputs to appropriate context instances
* **Always cleanup** - Return cleanup functions to prevent memory leaks
* **Real-time preferred** - Use event-driven patterns over polling when possible
For error handling, connection resilience, and validation patterns, see [Building Block Operations](/docs/core/concepts/building-block-operations).
file: ./content/docs/core/concepts/mcp.mdx
meta: {
"title": "Model Context Protocol (MCP)",
"description": "Connect your agent to any MCP server for expanded capabilities and context."
}
## What is MCP?
[**Model Context Protocol (MCP)**](https://modelcontextprotocol.io) is an open
standard that enables AI applications to securely access external data sources,
tools, and services. Instead of building custom integrations for every service,
MCP provides a universal interface.
Your Daydreams agent becomes an **MCP client** that can connect to any MCP
server and use its capabilities as if they were native actions.
## Adding MCP to Your Agent
MCP integration happens through the extension system - just add servers and
start using their tools:
```typescript title="mcp-setup.ts"
import { createDreams } from "@daydreamsai/core";
import { createMcpExtension } from "@daydreamsai/mcp";
const agent = createDreams({
extensions: [
createMcpExtension([
{
id: "sqlite",
name: "Database Access",
transport: {
type: "stdio",
command: "npx",
args: ["@modelcontextprotocol/server-sqlite", "./data.db"],
},
},
{
id: "filesystem",
name: "File Access",
transport: {
type: "stdio",
command: "npx",
args: ["@modelcontextprotocol/server-filesystem", "./docs"],
},
},
]),
],
});
```
## Available Actions
The MCP extension adds these actions (an illustrative call follows the list):
* **`mcp.listServers`** - Show connected MCP servers
* **`mcp.listTools`** - List tools available on a server
* **`mcp.callTool`** - Execute a server's tool
* **`mcp.listResources`** - List server resources (files, data, etc.)
* **`mcp.readResource`** - Read a specific resource
* **`mcp.listPrompts`** - List server-defined prompts
* **`mcp.getPrompt`** - Get a prompt with arguments
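For example, the LLM might invoke a server tool through `mcp.callTool`. The exact argument keys depend on the extension's action schema, so treat this as a sketch:
```xml
<action_call name="mcp.callTool">
  {"serverId": "sqlite", "name": "query", "arguments": {"sql": "SELECT * FROM users LIMIT 5"}}
</action_call>
```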
## Transport Types
### Local Servers (stdio)
Most MCP servers run as local processes:
```typescript
{
id: "server-name",
transport: {
type: "stdio",
command: "node",
args: ["server.js", "--option", "value"]
}
}
```
### Remote Servers (SSE)
For HTTP-based MCP servers:
```typescript
{
id: "remote-api",
transport: {
type: "sse",
serverUrl: "https://mcp-server.example.com"
}
}
```
## Popular MCP Servers
The MCP ecosystem includes servers for common use cases:
* **[@modelcontextprotocol/server-filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** -
File system access
* **[@modelcontextprotocol/server-sqlite](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite)** -
SQLite database access
* **[firecrawl-mcp](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** -
Web scraping
* **[blender-mcp](https://github.com/modelcontextprotocol/servers)** - 3D
rendering
[Browse all available servers →](https://github.com/modelcontextprotocol/servers)
## Next Steps
* **[MCP Tutorial](/docs/tutorials/mcp/mcp-guide)** - Step-by-step setup guide
* **[Multiple MCP Servers](/docs/tutorials/mcp/multi-server)** - Connect to
multiple servers
* **[Extensions](/docs/core/concepts/extensions)** - Learn about the extension
system
## Key Benefits
✅ **Universal interface** - Same API for all external tools\
✅ **No custom integrations** - Use existing MCP servers\
✅ **Secure by default** - Controlled access to external resources\
✅ **Scalable** - Connect to multiple servers simultaneously
MCP transforms your agent from isolated code into a connected system with access
to the entire MCP ecosystem.
file: ./content/docs/core/concepts/memory.mdx
meta: {
"title": "Memory",
"description": "How Daydreams stores and retrieves information, from working memory and logs to persistent memories and episodes."
}
## Overview
Daydreams provides a unified memory system that lets agents remember, recall, and organize information over time. It combines:
* Persistent storage (key–value, vector, and graph)
* Per-context working memory that holds the current session’s logs
* Episodic memory that captures conversation “episodes” for later retrieval and export (see [Episodes](/docs/core/concepts/episodes))
At runtime, agents write logs into working memory (inputs, outputs, actions, etc.). As conversations progress, Daydreams can form episodes and index them for search and analysis.
## Storage Layers
The `MemorySystem` wires together multiple providers and exposes simple APIs:
* Key–Value: `memory.kv` for structured state under string keys.
* Vector: `memory.vector` for similarity search over text (with optional embeddings/metadata).
* Graph: `memory.graph` for entities and relationships.
* Episodic: `memory.episodes` for creating and retrieving conversation episodes.
Common operations:
```ts title="remember-and-recall.ts"
// Store a simple text snippet (auto-indexed in vector memory)
await agent.memory.remember("User likes Neapolitan pizza", {
scope: "context", // or "global"
contextId: ctx.id,
type: "preference",
metadata: { salience: 0.8 },
});
// Recall similar content later
const results = await agent.memory.recall("pizza preferences", {
contextId: ctx.id,
topK: 5,
include: { content: true, metadata: true },
});
// Or get just the top match
const best = await agent.memory.recallOne("neapolitan", { contextId: ctx.id });
```
Notes on storage behavior (a sketch follows this list):
* Scoping and keys: when storing strings, Daydreams creates IDs like `memory:context:<contextId>:<timestamp>-<suffix>` (or `memory:global:...`). When passing structured records to `rememberRecord`, you can provide your own `id`, `namespace`, and `metadata`.
* Metadata: for vector documents, Daydreams stores `scope`, `contextId`, `type`, `timestamp`, and any custom metadata you provide.
* Search weighting: recall supports salience and recency boosts via `weighting`, plus grouping/deduping via `groupBy` and `dedupeBy`.
* Batch ingest: use `rememberBatch(records, { chunk })` for naive chunking of large text.
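A sketch of those options in use (the exact option shapes may vary; names follow the notes above):
```ts title="recall-options.ts"
// Salience/recency-weighted recall with grouping and deduping
const hits = await agent.memory.recall("pizza preferences", {
  contextId: ctx.id,
  topK: 10,
  weighting: { salience: 0.5, recency: 0.5 }, // assumed shape
  groupBy: "documentId", // group related chunks (assumed key)
  dedupeBy: "content",   // drop near-identical hits (assumed key)
});

// Naive chunked ingest for large text records
await agent.memory.rememberBatch(records, { chunk: { size: 1000, overlap: 100 } });
```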
## Working Memory
Working memory is per-context, short‑term state that holds the live log stream:
* Stored under `working-memory:<contextId>` in key–value storage.
* Automatically created and updated by the agent runtime.
* Useful for prompt construction, short‑term reasoning, and UI inspection.
Structure (arrays of timestamped refs):
* `inputs`, `outputs`, `thoughts`
* `calls` (action calls), `results` (action results)
* `events`, `steps`, `runs`
Utilities:
* `getContextWorkingMemory(agent, contextId)` and `saveContextWorkingMemory(...)` to read/write the whole structure.
* `getWorkingMemoryLogs(memory, includeThoughts?)` to get sorted logs used for prompts.
* `getWorkingMemoryAllLogs(memory, includeThoughts?)` to include `steps`/`runs` as well.
Summarization hooks and pruning policies can be layered on top via a `MemoryManager` (e.g., compress long histories, keep the most recent N inputs/outputs, etc.).
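A sketch of these utilities together, assuming they are exported from `@daydreamsai/core` (the argument list of `saveContextWorkingMemory` is an assumption):
```ts title="working-memory-utils.ts"
import {
  getContextWorkingMemory,
  getWorkingMemoryLogs,
  saveContextWorkingMemory,
} from "@daydreamsai/core";

// Read a context's working memory, inspect its logs, and persist it back
const wm = await getContextWorkingMemory(agent, contextId);
const logs = getWorkingMemoryLogs(wm, /* includeThoughts */ true);
console.log(`working memory holds ${logs.length} log refs`);
await saveContextWorkingMemory(agent, contextId, wm);
```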
## Logs
Each entry in working memory is a strongly typed “ref” with `ref`, `id`, `timestamp`, and relevant fields:
* `input`: inbound messages/events captured by inputs
* `output`: agent responses produced by outputs
* `thought`: internal reasoning traces
* `action_call` and `action_result`: tool invocations and their results
* `event`: domain events
* `step` and `run`: orchestration milestones
These references are pushed in order and can be formatted for display or prompt inclusion. You generally won’t push them manually—agent execution collects and stores them as it runs.
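For illustration, an `input` ref might look roughly like this (fields beyond `ref`, `id`, and `timestamp` are assumptions):
```ts title="log-ref-shape.ts"
const inputRef = {
  ref: "input",            // which kind of log this is
  id: "log_0001",
  timestamp: Date.now(),
  type: "discord:message", // source input type (assumed field)
  data: { content: "What's the weather?" }, // payload (assumed field)
};
```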
## Episodes
Episodes are coherent spans of interaction (e.g., “user asks X → agent reasons → calls tools → replies”). Daydreams can collect logs into an episode, generate a summary, and persist it.
Key points:
* Creation: episodes are formed from working-memory logs via default heuristics or custom hooks (`EpisodeHooks`) for when to start/end and how to summarize.
* Storage: episodes are saved in key–value storage and indexed into vector memory for retrieval. By default, Daydreams indexes the summary and (optionally chunked) conversation logs under `namespace = episodes:<contextId>`.
* Retrieval: use `memory.episodes.get(id)`, `getByContext(contextId)`, and `findSimilar(contextId, query)`.
* Export: export episodes to JSON/JSONL or Markdown via the export manager.
Learn more:
* Conceptual: [Episodes](/docs/core/concepts/episodes)
* Conceptual: [Episode export and formats](/docs/core/concepts/episode-export)
* API: [Episodic Memory](/docs/api/EpisodicMemory) and [Episode Hooks](/docs/api/EpisodeHooks)
## Housekeeping
Remove old or unwanted data as needed:
```ts title="cleanup.ts"
// Delete by key pattern, context, or time
await agent.memory.forget({ pattern: "memory:global:*" });
await agent.memory.forget({ context: ctx.id });
await agent.memory.forget({ olderThan: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000) });
// Clear episodic data for a context
await agent.memory.episodes.clearContext(ctx.id);
```
That’s the core: working memory for the live run, persistent stores for long‑term recall, and episodes to capture complete interactions you can search and export.
file: ./content/docs/core/concepts/outputs.mdx
meta: {
"title": "Outputs",
"description": "How Daydreams agents send information and responses."
}
## What is an Output?
An output is how your agent **sends** information to the outside world. Outputs are the final communication step - they don't return data to the LLM for further reasoning.
For common patterns like schema validation, error handling, and external service integration, see [Building Block Operations](/docs/core/concepts/building-block-operations).
## Outputs vs Actions/Inputs
Understanding when to use each:
| Building Block | Purpose | LLM Uses When | Returns Data |
| -------------- | ---------------------------- | ------------------------ | -------------------------- |
| **Outputs** | Communicate final results | Wants to respond/notify | ❌ No - final step |
| **Actions** | Get data, perform operations | Needs info for reasoning | ✅ Yes - for next steps |
| **Inputs** | Listen for external events | Never - triggers agent | ❌ No - starts conversation |
### Output Decision Matrix
```typescript title="when-to-use-outputs.ts"
// ✅ Use OUTPUT for final communication
<output type="discord:message">Weather: 72°F, sunny. Have a great day!</output>

// ✅ Use ACTION to get data for reasoning
<action_call name="get-weather">{"city": "Boston"}</action_call>
// LLM gets result and can use it in next step

// ✅ Common pattern: Action → Output
<action_call name="get-weather">{"city": "NYC"}</action_call>
<output type="discord:message">Current weather in NYC: {{calls[0].temperature}}, {{calls[0].condition}}</output>
```
## Creating Your First Output
Here's a simple file output with attributes:
```typescript title="file-output.ts"
import { output } from "@daydreamsai/core";
import * as z from "zod";
import fs from "fs/promises";
export const saveToFile = output({
type: "file:save",
description: "Saves a message to a text file",
// Content schema - what goes inside the output tag
schema: z.string().describe("The message to save"),
// Attributes schema - extra parameters on the tag
attributes: z.object({
filename: z.string().describe("Name of the file to save to"),
}),
handler: async (message, ctx) => {
const { filename } = ctx.outputRef.params;
await fs.writeFile(filename, message + "\n", { flag: "a" });
return { saved: true, filename };
},
});
// LLM uses it like this:
// <output type="file:save" filename="notes.txt">This is my message</output>
```
## Output Features
### Attributes for Extra Parameters
Outputs support attributes for additional configuration:
```typescript title="discord-output.ts"
const discordOutput = output({
type: "discord:message",
description: "Sends a message to Discord",
schema: z.string(), // Message content
// Attributes appear as XML attributes
attributes: z.object({
channelId: z.string(),
threadId: z.string().optional(),
}),
handler: async (message, ctx) => {
const { channelId, threadId } = ctx.outputRef.params;
await discord.send(channelId, message, { threadId });
return { sent: true };
},
});
// LLM uses it like:
// <output type="discord:message" channelId="123456789">
//   Hello Discord!
// </output>
```
### Memory Access in Outputs
Outputs can read and update context memory for tracking:
```typescript title="notification-with-memory.ts"
const notificationOutput = output({
type: "notification:send",
schema: z.string(),
attributes: z.object({ priority: z.enum(["low", "high"]) }),
handler: async (message, ctx) => {
// Track notifications in memory
if (!ctx.memory.notificationsSent) {
ctx.memory.notificationsSent = 0;
}
ctx.memory.notificationsSent++;
const { priority } = ctx.outputRef.params;
await notificationService.send({ message, priority });
return {
sent: true,
totalSent: ctx.memory.notificationsSent,
};
},
});
```
### Multiple Outputs
Agents can send multiple outputs in one response:
```xml title="multiple-outputs.xml"
<reasoning>I'll notify both Discord and email about this alert</reasoning>
<output type="discord:message" channelId="123456789">🚨 Server maintenance starting in 10 minutes!</output>
<output type="email:send">Server maintenance beginning. Discord users notified.</output>
```
## External Service Integration
Outputs are perfect for integrating with external services:
```typescript title="slack-output.ts"
const slackMessage = output({
type: "slack:message",
description: "Sends a message to Slack",
schema: z.string(),
attributes: z.object({
channel: z.string().describe("Slack channel name"),
threadId: z.string().optional().describe("Thread ID for replies"),
}),
handler: async (message, ctx) => {
try {
const { channel, threadId } = ctx.outputRef.params;
const result = await slackClient.chat.postMessage({
channel,
text: message,
thread_ts: threadId,
});
return {
success: true,
messageId: result.ts,
channel: result.channel,
message: `Message sent to #${channel}`,
};
} catch (error) {
console.error("Failed to send Slack message:", error);
return {
success: false,
error: error.message,
message: "Failed to send Slack message",
};
}
},
});
```
## Best Practices
### 1. Use Clear Types and Descriptions
```typescript title="good-naming.ts"
// ✅ Good - clear what it does
const userNotification = output({
type: "user:notification",
description:
"Sends a notification directly to the user via their preferred channel",
// ...
});
// ❌ Bad - unclear purpose
const sendStuff = output({
type: "send",
description: "Sends something",
// ...
});
```
### 2. Validate Content with Schemas
Use proper Zod validation to ensure your outputs receive correct data. See [Schema Validation Best Practices](/docs/core/concepts/building-blocks#schema-validation-best-practices) for complete patterns and examples.
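For instance, constraining both content and attributes keeps malformed output calls from reaching your handler (values here are illustrative):
```typescript title="validated-output.ts"
import { output } from "@daydreamsai/core";
import * as z from "zod";

const validatedMessage = output({
  type: "discord:message",
  description: "Sends a validated message to Discord",
  // Reject empty or oversized content before the handler runs
  schema: z.string().min(1).max(2000).describe("The message body"),
  attributes: z.object({
    channelId: z.string().regex(/^\d+$/).describe("Numeric Discord channel ID"),
  }),
  handler: async (message, ctx) => {
    const { channelId } = ctx.outputRef.params;
    await discord.send(channelId, message);
    return { sent: true };
  },
});
```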
### 3. Handle Errors Gracefully
```typescript title="error-handling.ts"
handler: async (message, ctx) => {
try {
await sendMessage(message);
return { sent: true };
} catch (error) {
// Log for debugging
console.error("Failed to send message:", error);
// Return structured error info
return {
sent: false,
error: error.message,
message: "Failed to send message - will retry later",
};
}
};
```
### 4. Use Async/Await for External Services
```typescript title="async-best-practice.ts"
// ✅ Good - properly handles async
handler: async (message, ctx) => {
const result = await emailService.send(message);
return { emailId: result.id };
};
// ❌ Bad - doesn't wait for async operations
handler: (message, ctx) => {
emailService.send(message); // This returns a Promise that's ignored!
return { status: "sent" }; // Completes before email actually sends
};
```
### 5. Provide Good Examples
```typescript title="good-examples.ts"
examples: [
'<output type="discord:message" channelId="123">Hello everyone!</output>',
'<output type="discord:message" channelId="123">Thanks for the question!</output>',
];
```
## Key Takeaways
* **Outputs enable communication** - Without them, agents can think but not
respond
* **LLM chooses when to use them** - Based on types and descriptions you provide
* **Different from actions** - Outputs communicate results, actions get data
* **Content and attributes validated** - Zod schemas ensure correct format
* **Memory can be updated** - Track what was sent for future reference
* **Error handling is crucial** - External services can fail, handle gracefully
Outputs complete the conversation loop - they're how your intelligent agent
becomes a helpful communicator that users can actually interact with.
file: ./content/docs/core/concepts/prompting.mdx
meta: {
"title": "Prompting",
"description": "How Daydreams structures prompts to guide LLM reasoning and actions."
}
## What is a Prompt?
A prompt is the text you send to an AI model to tell it what to do. Think of it
like giving instructions to a smart assistant.
For details on the low-level template, XML contract, and parsing behavior, see [Context Engineering](/docs/core/concepts/context-engineering).
## Simple Prompts vs Agent Prompts
### Simple Prompt (ChatGPT style)
```text title="simple-prompt.txt"
User: What's the weather in New York?
Assistant: I don't have access to real-time weather data...
```
### Agent Prompt (what Daydreams creates)
```text title="agent-prompt.txt"
You are an AI agent that can:
- Call weather APIs
- Send Discord messages
- Remember conversation history
Current situation:
- User asked: "What's the weather in New York?"
- Available actions: getWeather, sendMessage
- Chat context: user123 in #general channel
Please respond with:
{"city": "New York"}
It's 72°F and sunny in New York!
```
## The Problem: LLMs Need Structure
Without structure, LLMs can't:
* Know what tools they have available
* Remember previous conversations
* Follow consistent output formats
* Handle complex multi-step tasks
**Example of what goes wrong:**
```text title="unstructured-problem.txt"
User: "Check weather and send to Discord"
LLM: "I'll check the weather for you!"
// ❌ Doesn't actually call any APIs
// ❌ Doesn't know how to send to Discord
// ❌ Just generates text
```
## The Solution: Structured Prompts
Daydreams automatically creates structured prompts that include:
1. **Available Tools** - What the agent can do
2. **Current State** - What's happening right now
3. **Response Format** - How to respond properly
4. **Context Memory** - What happened before
```text title="structured-solution.txt"
Available Actions:
- getWeather(city: string) - Gets current weather
- sendDiscord(message: string) - Sends Discord message
Current Context:
- User: user123
- Channel: #general
- Previous messages: [...]
New Input:
- "Check weather in Boston and send to Discord"
Respond with XML:
{"city": "Boston"}
Weather in Boston: 65°F, cloudy
```
## How Daydreams Builds Prompts
Every time your agent thinks, Daydreams automatically builds a prompt like this:
### 1. Instructions
```text title="instructions-section.txt"
You are an AI agent. Your job is to:
- Analyze new information
- Decide what actions to take
- Respond appropriately
```
### 2. Available Tools
```xml title="tools-section.xml"
<action name="getWeather" description="Gets current weather for a city">
  {"type": "object", "properties": {"city": {"type": "string"}}}
</action>
<output type="sendDiscord" description="Sends a message to Discord">
  {"type": "string"}
</output>
```
### 3. Current Context State
```xml title="context-section.xml"
<context type="chat" id="chat:user123">
  Previous messages:
  user123: Hi there!
  agent: Hello! How can I help?
  user123: What's the weather like?
</context>
```
### 4. What Just Happened
```xml title="updates-section.xml"
<input type="discord:message">What's the weather in Boston?</input>
```
### 5. Expected Response Format
```xml title="response-format.xml"
Respond with:

<response>
  <reasoning>Your thought process here</reasoning>
  <action_call name="actionName">{"argument": "value"}</action_call>
  <output type="outputType">Your response here</output>
</response>
```
## What the LLM Sees (Complete Example)
Here's what a complete prompt looks like:
```text title="complete-prompt.txt"
You are an AI agent. Analyze the updates and decide what to do.

<available-actions>
  <action name="getWeather" description="Gets current weather for a city">
    {"type": "object", "properties": {"city": {"type": "string"}}}
  </action>
</available-actions>

<available-outputs>
  <output type="sendDiscord" description="Sends a message to Discord">
    {"type": "string"}
  </output>
</available-outputs>

<context-state>
user123: Hi there!
agent: Hello! How can I help?
</context-state>

<current-task>
What's the weather in Boston?
</current-task>

Respond with:
<response>
  <reasoning>Your thought process</reasoning>
  <action_call name="getWeather">{"arg": "value"}</action_call>
  <output type="sendDiscord">Your message</output>
</response>
```
## LLM Response Example
The LLM responds with structured XML:
```xml title="llm-response.xml"
<response>
  <reasoning>
    The user is asking about weather in Boston. I should:
    1. Call the getWeather action to get current conditions
    2. Send the result to Discord
  </reasoning>
  <action_call name="getWeather">{"city": "Boston"}</action_call>
  <output type="discord:message">Checking the weather in Boston for you!</output>
</response>
```
Daydreams automatically:
* Parses the `<action_call>` and runs the weather API
* Parses the `<output>` and sends the Discord message
* Saves the `<reasoning>` for debugging
## Advanced Features
### Template References
LLMs can reference previous action results within the same response:
```xml title="template-example.xml"
<response>
  <reasoning>I'll get weather, then send a detailed message</reasoning>
  <action_call name="getWeather">{"city": "Boston"}</action_call>
  <output type="discord:message">Weather in Boston: {{calls[0].temperature}}°F, {{calls[0].condition}}</output>
</response>
```
The `{{calls[0].temperature}}` gets replaced with the actual weather data.
### Multi-Context Prompts
When multiple contexts are active:
```xml title="multi-context.xml"
<context-state>
  <context type="chat" id="user123">Chat history with user123...</context>
  <context type="game" id="game456">Current game state: level 5, health 80...</context>
</context-state>
```
## Key Benefits
* **Consistency** - All agents use the same reliable prompt structure
* **Clarity** - LLMs always know what tools they have and how to use them
* **Memory** - Context and conversation history included automatically
* **Debugging** - You can see exactly what the LLM was told
* **Extensibility** - Easy to add new actions and outputs
## Prompt Architecture
Daydreams uses a modular prompt system defined in `packages/core/src/prompts/main.ts`:
```typescript title="prompt-structure.ts"
// Template sections with placeholders
export const templateSections = {
intro: `You are an expert AI assistant...`,
instructions: `Follow these steps to process...`,
content: `## CURRENT SITUATION\n{{currentTask}}...`,
response: `Structure your response: ...`,
footer: `Guiding Principles for Your Response...`
};
// Formatter processes data into prompt sections
export function formatPromptSections({
contexts, outputs, actions, workingMemory
}) {
return {
currentTask: xml("current-task", undefined, unprocessedLogs),
contextState: xml("context-state", undefined, contexts),
actions: xml("available-actions", undefined, actions),
// ... more sections
};
}
// Main prompt combines template + formatter
export const mainPrompt = {
template: promptTemplate,
sections: templateSections,
render: (data) => render(template, data),
formatter: formatPromptSections
};
```
### Context Customization
Contexts can customize their contribution to prompts:
```typescript title="context-customization.ts"
const chatContext = context({
type: "chat",
schema: z.object({ userId: z.string() }),
// Custom instructions for this context type
instructions: `You are a helpful chat assistant. Be friendly and conversational.`,
// Custom rendering of context state in prompts
render: (state) => `
Chat Context: ${state.id}
User: ${state.args.userId}
Messages: ${state.memory.messages?.length || 0}
Last active: ${new Date(state.memory.lastActive || Date.now()).toLocaleString()}
`
});
```
## Key Takeaways
* **Prompts are automatically generated** - You don't write them manually
* **Structure enables capabilities** - Tools, memory, and context included
automatically
* **LLMs respond with XML** - Parsed automatically into actions and outputs
* **Templates enable complex flows** - Reference previous results within
responses
* **Customizable per context** - Add specific instructions and state rendering
The prompting system is what makes your agent intelligent - it provides the LLM
with everything needed to understand the situation and respond appropriately.
file: ./content/docs/core/concepts/services.mdx
meta: {
"title": "Services",
"description": "Infrastructure management with dependency injection."
}
## What Are Services?
Services manage **infrastructure** - database connections, API clients, utilities. They handle the "how" of connecting to external systems so actions can focus on business logic.
## Service Structure
```typescript title="service-example.ts"
const databaseService = service({
name: "database",
// Register: Define HOW to create dependencies
register: (container) => {
container.singleton("db", () => new MongoDB(process.env.DB_URI));
container.singleton("userRepo", (c) => new UserRepository(c.resolve("db")));
},
// Boot: WHEN to initialize (agent startup)
boot: async (container) => {
const db = container.resolve("db");
await db.connect();
console.log("✅ Database connected");
},
});
```
## The Container
Services use a **dependency injection container** for shared resource management:
```typescript title="container-methods.ts"
const container = createContainer();
// singleton() - Create once, reuse everywhere
container.singleton("apiClient", () => new ApiClient());
// register() - Create new instance each time
container.register("requestId", () => crypto.randomUUID());
// instance() - Store pre-created object
container.instance("config", { apiKey: "secret123" });
// Usage in actions
const client = ctx.container.resolve("apiClient");
```
## Without Services: Connection Chaos
```typescript title="without-services.ts"
// ❌ Actions manage their own connections (slow, repetitive)
const sendMessageAction = action({
handler: async ({ channelId, message }) => {
// Create new client every time!
const client = new Discord.Client({ token: process.env.DISCORD_TOKEN });
await client.login(); // Slow connection each time
await client.channels.get(channelId).send(message);
await client.destroy();
},
});
```
## With Services: Shared Infrastructure
```typescript title="with-services.ts"
// ✅ Service manages connection once, actions reuse it
const discordService = service({
name: "discord",
register: (container) => {
container.singleton("discordClient", () => new Discord.Client({ token: process.env.DISCORD_TOKEN }));
},
boot: async (container) => {
await container.resolve("discordClient").login(); // Connect once at startup
},
});
const sendMessageAction = action({
handler: async ({ channelId, message }, ctx) => {
const client = ctx.container.resolve("discordClient"); // Already connected!
await client.channels.get(channelId).send(message);
},
});
```
## Service Lifecycle
```typescript title="service-lifecycle.ts"
const redisService = service({
name: "redis",
// Phase 1: REGISTER - Define factory functions
register: (container) => {
container.singleton("redisConfig", () => ({
host: process.env.REDIS_HOST || "localhost",
port: Number(process.env.REDIS_PORT) || 6379,
}));
container.singleton("redisClient", (c) => new Redis(c.resolve("redisConfig")));
},
// Phase 2: BOOT - Actually connect/initialize
boot: async (container) => {
const client = container.resolve("redisClient");
await client.connect();
console.log("✅ Redis connected");
},
});
// Execution order:
// 1. All services register() (define dependencies)
// 2. All services boot() (initialize connections)
// 3. Ensures dependencies available when needed
```
## Service Examples
### Multi-Component Service
```typescript title="trading-service.ts"
const tradingService = service({
name: "trading",
register: (container) => {
container.singleton("alpacaClient", () => new Alpaca({
key: process.env.ALPACA_KEY,
secret: process.env.ALPACA_SECRET,
}));
container.singleton("portfolio", (c) => new PortfolioTracker(c.resolve("alpacaClient")));
container.singleton("riskManager", () => new RiskManager({ maxPosition: 0.1 }));
},
boot: async (container) => {
await container.resolve("alpacaClient").authenticate();
await container.resolve("portfolio").sync();
console.log("💰 Trading ready");
},
});
// Actions use all components
const buyStock = action({
handler: async ({ symbol, quantity }, ctx) => {
const client = ctx.container.resolve("alpacaClient");
const riskManager = ctx.container.resolve("riskManager");
if (riskManager.canBuy(symbol, quantity)) {
return await client.createOrder({ symbol, qty: quantity, side: "buy" });
}
throw new Error("Risk limits exceeded");
},
});
```
### Environment-Based Configuration
```typescript title="storage-service.ts"
const storageService = service({
name: "storage",
register: (container) => {
if (process.env.NODE_ENV === "production") {
container.singleton("storage", () => new S3Storage({ bucket: process.env.S3_BUCKET }));
} else {
container.singleton("storage", () => new LocalStorage({ path: "./uploads" }));
}
},
boot: async (container) => {
await container.resolve("storage").initialize();
console.log(`📁 ${process.env.NODE_ENV === "production" ? "S3" : "Local"} storage ready`);
},
});
```
## Service Dependencies
```typescript title="service-dependencies.ts"
// Base service
const databaseService = service({
name: "database",
register: (container) => {
container.singleton("db", () => new MongoDB(process.env.DB_URI));
},
boot: async (container) => {
await container.resolve("db").connect();
},
});
// Dependent service
const cacheService = service({
name: "cache",
register: (container) => {
container.singleton("redis", () => new Redis(process.env.REDIS_URL));
container.singleton("cacheManager", (c) => new CacheManager({
fastCache: c.resolve("redis"),
slowCache: c.resolve("db"), // From databaseService
}));
},
boot: async (container) => {
await container.resolve("redis").connect();
await container.resolve("cacheManager").initialize();
},
});
// Extension using both services
const dataExtension = extension({
name: "data",
services: [databaseService, cacheService],
actions: [
action({
name: "get-user",
handler: async ({ userId }, ctx) => {
const cache = ctx.container.resolve("cacheManager");
return await cache.getOrFetch(`user:${userId}`, () =>
ctx.container.resolve("db").collection("users").findOne({ _id: userId })
);
},
}),
],
});
```
## Best Practices
### Single Responsibility
```typescript
// ✅ Good - focused on one domain
const databaseService = service({ name: "database" /* only DB connection */ });
const cacheService = service({ name: "cache" /* only caching */ });
// ❌ Bad - mixed responsibilities
const everythingService = service({ name: "everything" /* DB + cache + API + logging */ });
```
### Graceful Error Handling
```typescript
const apiService = service({
name: "external-api",
boot: async (container) => {
try {
await container.resolve("apiClient").healthCheck();
console.log("✅ External API ready");
} catch (error) {
console.warn("⚠️ API unavailable, features limited");
// Don't crash agent - let actions handle gracefully
}
},
});
```
### Resource Cleanup
```typescript
const databaseService = service({
name: "database",
register: (container) => {
container.singleton("db", () => {
const db = new MongoDB(process.env.DB_URI);
process.on("SIGINT", async () => {
await db.close();
process.exit(0);
});
return db;
});
},
});
```
## Common Issues
### Missing Dependencies
```typescript
// Error: "Token 'databaseClient' not found"
// ❌ Problem
const getUserAction = action({
handler: async (args, ctx) => {
const db = ctx.container.resolve("databaseClient"); // Not registered!
},
});
// ✅ Solution
const databaseService = service({
register: (container) => {
container.singleton("databaseClient", () => new Database());
// ^^^^^^^^^^^^^^ Must match resolve token
},
});
```
### Circular Dependencies
```typescript
// ✅ Solution - break cycles with coordinator pattern
const coordinatorService = service({
register: (container) => {
container.singleton("a", () => new A());
container.singleton("b", () => new B());
},
boot: async (container) => {
// Wire relationships after creation
const coordinator = new Coordinator(container.resolve("a"), container.resolve("b"));
coordinator.wireComponents();
},
});
```
## Key Takeaways
* **Services manage infrastructure** - API clients, databases, utilities
* **Dependency injection container** - Shared resources across all actions
* **Two-phase lifecycle** - Register (define) then boot (initialize)
* **Separation of concerns** - Infrastructure separate from business logic
* **Resource efficiency** - One connection shared across all actions
## See Also
* [Extensions](/docs/core/concepts/extensions) - Feature package layer
* [Extensions vs Services](/docs/core/concepts/extensions-vs-services) - Decision guide
file: ./content/docs/core/providers/ai-sdk.mdx
meta: {
"title": "AI SDK Integration",
"description": "Leveraging the Vercel AI SDK with Daydreams."
}
## What is the Vercel AI SDK?
The [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) provides a unified
way to connect to different AI providers like OpenAI, Anthropic, Google, and
many others. Instead of learning each provider's unique API, you use one
consistent interface.
## Why This Matters for Your Agent
Daydreams is built on top of the Vercel AI SDK, which means you get:
### Easy Provider Switching
```typescript title="easy-switching.ts"
// Switch from OpenAI to Anthropic by changing one line
// model: openai("gpt-4o") // OpenAI
model: anthropic("claude-3-sonnet"); // Anthropic
// Everything else stays the same!
```
### Access to All Major Providers
* **OpenAI** - GPT-4, GPT-4o, GPT-3.5
* **Anthropic** - Claude 3 Opus, Sonnet, Haiku
* **Google** - Gemini Pro, Gemini Flash
* **Groq** - Ultra-fast Llama, Mixtral models
* **OpenRouter** - Access to 100+ models through one API
* **And many more** - See the
[full list](https://sdk.vercel.ai/docs/foundations/providers-and-models)
## The Problem: Each AI Provider is Different
Without a unified SDK, you'd need different code for each provider:
```typescript title="without-ai-sdk.ts"
// ❌ Without AI SDK - different APIs for each provider
if (provider === 'openai') {
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [...],
});
} else if (provider === 'anthropic') {
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const response = await anthropic.messages.create({
model: "claude-3-sonnet-20240229",
messages: [...],
});
}
// Different response formats, error handling, etc.
```
## The Solution: One Interface for All Providers
With the AI SDK, all providers work the same way:
```typescript title="with-ai-sdk.ts"
// ✅ With AI SDK - same code for any provider
const agent = createDreams({
model: openai("gpt-4o"), // Or anthropic("claude-3-sonnet")
// model: groq("llama3-70b"), // Or any other provider
// Everything else stays identical!
});
```
## Setting Up Your First Provider
### 1. Choose Your Provider
For this example, we'll use OpenAI, but the process is similar for all
providers.
### 2. Install the Provider Package
```bash title="install-openai.sh"
npm install @ai-sdk/openai
```
### 3. Get Your API Key
1. Go to [OpenAI's API platform](https://platform.openai.com/api-keys)
2. Create a new API key
3. Add it to your environment:
```bash title=".env"
OPENAI_API_KEY=your_api_key_here
```
### 4. Use in Your Agent
```typescript title="openai-agent.ts"
import { createDreams } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
const agent = createDreams({
model: openai("gpt-4o-mini"), // Fast and cost-effective
// model: openai("gpt-4o"), // More capable but slower/pricier
// Your agent configuration
extensions: [...],
contexts: [...],
});
await agent.start();
```
## All Supported Providers
### DreamsRouter (100+ Models)
```typescript title="daydreamsai-setup.ts"
// Install: npm install @daydreamsai/ai-sdk-provider
import { dreamsrouter } from "@daydreamsai/ai-sdk-provider";
model: dreamsrouter("google/gemini-pro");
```
### OpenAI
```typescript title="openai-setup.ts"
// Install: npm install @ai-sdk/openai
import { openai } from "@ai-sdk/openai";
model: openai("gpt-4o-mini"); // Fast, cheap
model: openai("gpt-4o"); // Most capable
model: openai("gpt-3.5-turbo"); // Legacy but cheap
```
**Get API key:** [platform.openai.com](https://platform.openai.com/api-keys)
### Anthropic (Claude)
```typescript title="anthropic-setup.ts"
// Install: npm install @ai-sdk/anthropic
import { anthropic } from "@ai-sdk/anthropic";
model: anthropic("claude-3-haiku-20240307"); // Fast, cheap
model: anthropic("claude-3-sonnet-20240229"); // Balanced
model: anthropic("claude-3-opus-20240229"); // Most capable
```
**Get API key:** [console.anthropic.com](https://console.anthropic.com/)
### Google (Gemini)
```typescript title="google-setup.ts"
// Install: npm install @ai-sdk/google
import { google } from "@ai-sdk/google";
model: google("gemini-1.5-flash"); // Fast, cheap
model: google("gemini-1.5-pro"); // More capable
```
**Get API key:** [aistudio.google.com](https://aistudio.google.com/app/apikey)
### Groq (Ultra-Fast)
```typescript title="groq-setup.ts"
// Install: npm install @ai-sdk/groq
import { createGroq } from "@ai-sdk/groq";
const groq = createGroq();
model: groq("llama3-70b-8192"); // Fast Llama
model: groq("mixtral-8x7b-32768"); // Fast Mixtral
```
**Get API key:** [console.groq.com](https://console.groq.com/keys)
### OpenRouter (100+ Models)
```typescript title="openrouter-setup.ts"
// Install: npm install @openrouter/ai-sdk-provider
import { openrouter } from "@openrouter/ai-sdk-provider";
model: openrouter("anthropic/claude-3-opus");
model: openrouter("google/gemini-pro");
model: openrouter("meta-llama/llama-3-70b");
// And 100+ more models!
```
**Get API key:** [openrouter.ai](https://openrouter.ai/keys)
## Switching Providers
The beauty of the AI SDK integration is how easy it is to switch:
```typescript title="provider-switching.ts"
// Development: Use fast, cheap models
const devAgent = createDreams({
model: groq("llama3-8b-8192"), // Ultra-fast for testing
// ... rest of config
});
// Production: Use more capable models
const prodAgent = createDreams({
model: openai("gpt-4o"), // High quality for users
// ... exact same config
});
// Experimenting: Try different providers
const experimentAgent = createDreams({
model: anthropic("claude-3-opus"), // Different reasoning style
// ... exact same config
});
```
## Environment Variables
Set up your API keys in your `.env` file:
```bash title=".env"
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Google
GOOGLE_GENERATIVE_AI_API_KEY=AI...
# Groq
GROQ_API_KEY=gsk_...
# OpenRouter
OPENROUTER_API_KEY=sk-or-...
```
The AI SDK automatically picks up the right environment variable for each
provider.
## Model Recommendations
### For Development/Testing
* **Groq Llama3-8B** - Ultra-fast responses for quick iteration
* **OpenAI GPT-4o-mini** - Good balance of speed and capability
### For Production
* **OpenAI GPT-4o** - Best overall capability and reliability
* **Anthropic Claude-3-Sonnet** - Great reasoning, good for complex tasks
### For Cost Optimization
* **OpenAI GPT-3.5-turbo** - Cheapest OpenAI option
* **Anthropic Claude-3-Haiku** - Cheapest Anthropic option
* **Google Gemini Flash** - Very affordable with good performance
### For Special Use Cases
* **OpenRouter** - Access models not available elsewhere
* **Local models** - Use [Ollama](https://ollama.ai/) for privacy
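For the local option, Ollama exposes an OpenAI-compatible endpoint, so a minimal sketch (assuming Ollama is running locally with a model already pulled) is to point the AI SDK's OpenAI provider at it:
```typescript title="ollama-local.ts"
import { createDreams } from "@daydreamsai/core";
import { createOpenAI } from "@ai-sdk/openai";
// Ollama serves an OpenAI-compatible API at localhost:11434 by default
const ollama = createOpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama", // Ollama ignores the key, but the provider expects one
});
const agent = createDreams({
  model: ollama("llama3"), // use whatever model you've pulled locally
  // ... rest of config
});
```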
## Advanced Configuration
You can also configure providers with custom settings:
```typescript title="advanced-config.ts"
import { createOpenAI } from "@ai-sdk/openai";
import { createAnthropic } from "@ai-sdk/anthropic";
// Custom OpenAI configuration
const customOpenAI = createOpenAI({
apiKey: process.env.CUSTOM_OPENAI_KEY,
baseURL: "https://your-proxy.com/v1", // Use a proxy
});
// Custom Anthropic configuration
const customAnthropic = createAnthropic({
apiKey: process.env.CUSTOM_ANTHROPIC_KEY,
baseURL: "https://your-endpoint.com", // Custom endpoint
});
const agent = createDreams({
model: customOpenAI("gpt-4o"),
// ... rest of config
});
```
## Troubleshooting
### Missing API Key
```
Error: Missing API key
```
**Solution:** Make sure your environment variable is set and the process can
access it.
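A quick way to fail fast is to check for the key at startup, before creating the agent (a minimal sketch):
```typescript title="check-api-key.ts"
// Fail early with a clear message instead of a cryptic SDK error
if (!process.env.OPENAI_API_KEY) {
  throw new Error(
    "OPENAI_API_KEY is not set - add it to your .env file or shell environment"
  );
}
```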
### Model Not Found
```
Error: Model 'gpt-5' not found
```
**Solution:** Check the
[AI SDK docs](https://sdk.vercel.ai/docs/foundations/providers-and-models) for
available model names.
### Rate Limits
```
Error: Rate limit exceeded
```
**Solution:** Switch to a provider with higher limits or implement retry logic.
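If you need to stay on the same provider, a simple exponential backoff wrapper can absorb transient rate limits. This is a sketch, not a built-in Daydreams utility:
```typescript title="retry-backoff.ts"
// Retry an async call with exponential backoff (1s, 2s, 4s, ...)
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      const delayMs = 1000 * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```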
## Next Steps
* **[Core Concepts](/docs/core/concepts/core)** - Learn how to build agents
* **[Your First Agent](/docs/core/first-agent)** - Build a working example
* **[Vercel AI SDK Docs](https://sdk.vercel.ai/docs/introduction)** - Complete
provider documentation
* **[Model Comparison](https://artificialanalysis.ai/)** - Compare different
models' performance and cost
## Key Takeaways
* **One interface, many providers** - Same code works with OpenAI, Anthropic,
Google, etc.
* **Easy switching** - Change providers by changing one line of code
* **Automatic key handling** - Environment variables work automatically
* **Cost flexibility** - Use cheap models for development, premium for
production
* **Future-proof** - New providers added to AI SDK work immediately with
Daydreams
The AI SDK integration gives you the freedom to choose the best model for your
use case without vendor lock-in.
file: ./content/docs/tutorials/basic/multi-context-agent.mdx
meta: {
"title": "Multi-Context Agent",
"description": "This guide will walk you through creating an AI agent that can respond to multiple contexts."
}
## Prerequisites
* `OPENAI_API_KEY`: Your OpenAI API key
```typescript title="agent.ts"
import { createDreams, context, input, output } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
import { openai } from "@ai-sdk/openai";
import * as z from "zod";
const fetchContext = context({
type: "fetch",
schema: z.object({}),
instructions:
"You are a helpful assistant that can fetch data from a test API. When asked, fetch and display data from the JSONPlaceholder API.",
actions: {
fetchPosts: {
schema: z.object({
limit: z.number().optional().default(5),
}),
description: "Fetch posts from the test API",
handler: async ({ limit }) => {
const response = await fetch(
"https://jsonplaceholder.typicode.com/posts"
);
const posts = await response.json();
return posts.slice(0, limit);
},
},
fetchUser: {
schema: z.object({
userId: z.number(),
}),
description: "Fetch a specific user by ID",
handler: async ({ userId }) => {
const response = await fetch(
`https://jsonplaceholder.typicode.com/users/${userId}`
);
return response.json();
},
},
},
});
// 1. Define the main context for our agent
const echoContext = context({
type: "echo",
// No specific arguments needed for this simple context
schema: z.object({}),
instructions:
"You are a simple echo bot. Repeat the user's message back to them.",
});
// 2. Create the agent instance
const agent = createDreams({
// Configure the LLM model to use
model: openai("gpt-4o-mini"),
// Include the CLI extension for input/output handling
extensions: [cliExtension],
// Register our custom context
contexts: [echoContext, fetchContext],
});
// 3. Start the agent and run the context
async function main() {
// Initialize the agent (sets up services like readline)
await agent.start();
console.log("Multi-context agent started. Type 'exit' to quit.");
console.log("Available contexts:");
console.log("1. Echo context - repeats your messages");
console.log("2. Fetch context - fetches data from JSONPlaceholder test API");
console.log("");
// You can run different contexts based on user choice
// For this example, we'll run the fetch context
await agent.run({
context: fetchContext,
args: {}, // Empty object since our schema requires no arguments
});
// Agent stops when the input loop breaks
console.log("Agent stopped.");
}
// Start the application
main();
```
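The example runs the fetch context, but the same `agent.run` call targets any registered context. To try the echo context instead, swap the arguments (same API as above):
```typescript title="run-echo-context.ts"
// Target a different registered context with the same run API
await agent.run({
  context: echoContext,
  args: {}, // echoContext's schema also takes no arguments
});
```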
### Run your agent
Ensure your `OPENAI_API_KEY` environment variable is set, then run:
```bash title="run-agent.sh"
npx tsx agent.ts
```
file: ./content/docs/tutorials/basic/single-context.mdx
meta: {
"title": "Single Context Agent",
"description": "This guide will walk you through creating an AI agent that can respond to a single context."
}
## Prerequisites
* `OPENAI_API_KEY`: Your OpenAI API key
```typescript title="agent.ts"
import { createDreams, context, input, output } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
import { openai } from "@ai-sdk/openai";
import * as z from "zod";
// 1. Define the main context for our agent
const echoContext = context({
type: "echo",
// No specific arguments needed for this simple context
schema: z.object({}),
// Instructions that guide the LLM's behavior
instructions:
"You are a simple echo bot. Repeat the user's message back to them.",
});
// 2. Create the agent instance
const agent = createDreams({
// Configure the LLM model to use
model: openai("gpt-4o-mini"),
// Include the CLI extension for input/output handling
extensions: [cliExtension],
// Register our custom context
contexts: [echoContext],
});
// 3. Start the agent and run the context
async function main() {
// Initialize the agent (sets up services like readline)
await agent.start();
console.log("Echo agent started. Type 'exit' to quit.");
// Run our echo context
// The cliExtension automatically handles console input/output
await agent.run({
context: echoContext,
args: {}, // Empty object since our schema requires no arguments
});
// Agent stops when the input loop breaks (e.g., user types "exit")
console.log("Agent stopped.");
}
// Start the application
main();
```
Your agent will start listening for input. Type any message and watch as the
agent echoes it back using the LLM and CLI handlers provided by the
`cliExtension`.
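To start the agent, run the file with a TypeScript-capable runtime (assuming `bun` or `tsx` is available):
```bash title="run-agent.sh"
bun run agent.ts
# or: npx tsx agent.ts
```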
file: ./content/docs/tutorials/basic/starting-agent.mdx
meta: {
"title": "Starting Agent",
"description": "Agent that manages goals and tasks with simple memory and actions."
}
## 1. Environment setup and imports
```typescript title="index.ts"
import { createDreams, context, action, validateEnv } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
import { anthropic } from "@ai-sdk/anthropic";
import * as z from "zod";
validateEnv(
z.object({
ANTHROPIC_API_KEY: z.string().min(1, "ANTHROPIC_API_KEY is required"),
})
);
```
The agent requires an Anthropic API key for the Claude language model. Set this
environment variable before running the agent.
## 2. Define the goal context and memory structure
```typescript title="index.ts"
type GoalMemory = {
goal: string;
tasks: string[];
currentTask: string;
};
const goalContext = context({
type: "goal",
schema: z.object({
id: z.string(),
}),
key: ({ id }) => id,
create: () => ({
goal: "",
tasks: [],
currentTask: "",
}),
render: ({ memory }) => `
Current Goal: ${memory.goal || "No goal set"}
Tasks: ${memory.tasks.length > 0 ? memory.tasks.join(", ") : "No tasks"}
Current Task: ${memory.currentTask || "None"}
`,
});
```
The context maintains the agent's current goal, list of tasks, and which task is
currently active. Memory persists between conversations.
## 3. Define task management actions
```typescript title="index.ts"
const taskActions = [
action({
name: "setGoal",
description: "Set a new goal for the agent",
schema: z.object({
goal: z.string().describe("The goal to work towards"),
}),
handler: ({ goal }, ctx) => {
const memory = ctx.agentMemory as GoalMemory;
memory.goal = goal;
memory.tasks = [];
memory.currentTask = "";
return { success: true, message: `Goal set to: ${goal}` };
},
}),
action({
name: "addTask",
description: "Add a task to accomplish the goal",
schema: z.object({
task: z.string().describe("The task to add"),
}),
handler: ({ task }, ctx) => {
const memory = ctx.agentMemory as GoalMemory;
memory.tasks.push(task);
if (!memory.currentTask && memory.tasks.length === 1) {
memory.currentTask = task;
}
return { success: true, message: `Added task: ${task}` };
},
}),
action({
name: "completeTask",
description: "Mark the current task as complete",
schema: z.object({
task: z.string().describe("The task to complete"),
}),
handler: ({ task }, ctx) => {
const memory = ctx.agentMemory as GoalMemory;
memory.tasks = memory.tasks.filter((t) => t !== task);
if (memory.currentTask === task) {
memory.currentTask = memory.tasks[0] || "";
}
return {
success: true,
message: `Completed task: ${task}`,
remainingTasks: memory.tasks.length,
};
},
}),
];
```
## 4. Create and start the agent
```typescript title="index.ts"
createDreams({
model: anthropic("claude-3-7-sonnet-latest"),
extensions: [cliExtension],
context: goalContext,
actions: taskActions,
}).start({ id: "basic-agent" });
```
file: ./content/docs/tutorials/mcp/multi-server.mdx
meta: {
"title": "Multiple Servers",
"description": "Configure a Daydreams agent to connect to and use tools from multiple Model Context Protocol (MCP) servers simultaneously."
}
This tutorial shows how to configure an agent to connect to two separate MCP
servers: one for web scraping (`firecrawl-mcp`) and another for 3D rendering
(`blender-mcp`).
### Configuration
The agent is configured by passing an array of server configurations to
`createMcpExtension`. Each server has a unique `id` which is used to direct tool
calls to the correct server.
```typescript title="multi-mcp-agent.ts"
import { createDreams, Logger, LogLevel } from "@daydreamsai/core";
import { createMcpExtension } from "@daydreamsai/mcp";
import { cliExtension } from "@daydreamsai/cli";
import { dreamsrouter } from "@daydreamsai/ai-sdk-provider";
createDreams({
model: dreamsrouter("google/gemini-2.5-pro"),
logger: new Logger({
level: LogLevel.INFO,
}),
extensions: [
cliExtension,
createMcpExtension([
{
id: "firecrawl-mcp",
name: "Firecrawl MCP Server",
transport: {
type: "stdio",
command: "npx",
args: ["-y", "firecrawl-mcp"],
},
},
{
id: "blender-mcp",
name: "Blender MCP Server",
transport: {
type: "stdio",
command: "uvx",
args: ["blender-mcp"],
},
},
]),
],
}).start();
```
### Key Concepts
* The `createMcpExtension` function takes an array of server configuration
objects.
* Each server requires a unique `id` which the agent uses to target tool calls
(e.g., `firecrawl-mcp`).
* The `transport` object defines the connection method. For local executables,
`stdio` is used with a `command` and an `args` array.
file: ./content/docs/tutorials/x402/nanoservice.mdx
meta: {
"title": "Building a Nanoservice Agent",
"description": "Create a pay-per-use AI agent using DreamsAI's nanoservice infrastructure with built-in micropayments."
}
This tutorial demonstrates how to build an AI agent that operates as a
nanoservice - a pay-per-use service that handles micropayments automatically
through the DreamsAI router.
## Step 1: Set Up Authentication
First, configure your wallet authentication using a private key. This wallet
will handle micropayments for each AI request.
```typescript title="Setup authentication"
import { createDreamsRouterAuth } from "@daydreamsai/ai-sdk-provider";
import { privateKeyToAccount } from "viem/accounts";
const { dreamsRouter, user } = await createDreamsRouterAuth(
privateKeyToAccount(Bun.env.PRIVATE_KEY as `0x${string}`),
{
payments: {
amount: "100000", // $0.10 USDC per request
network: "base-sepolia",
},
}
);
// Check your balance
console.log("User balance:", user.balance);
```
The `createDreamsRouterAuth` function:
* Takes your wallet's private key (stored securely in environment variables)
* Configures payment settings (amount per request and network)
* Returns a router for model access and user information
## Step 2: Configure the Agent
Set up your DreamsAI agent with the authenticated router and desired model.
```typescript title="Configure agent"
import { createDreams, LogLevel } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
const agent = createDreams({
logLevel: LogLevel.DEBUG,
model: dreamsRouter("google-vertex/gemini-2.5-flash"),
extensions: [cliExtension], // Add CLI interface for testing
});
```
Configuration options:
* `logLevel`: Controls debugging output
* `model`: Specifies the AI model accessed through the router
* `extensions`: Add capabilities like CLI, Discord, or custom integrations
## Step 3: Launch the Nanoservice
Start your agent to begin handling requests with automatic micropayments.
```typescript title="Complete implementation"
import { createDreamsRouterAuth } from "@daydreamsai/ai-sdk-provider";
import { createDreams, LogLevel } from "@daydreamsai/core";
import { cliExtension } from "@daydreamsai/cli";
import { privateKeyToAccount } from "viem/accounts";
// Step 1: Authentication
const { dreamsRouter, user } = await createDreamsRouterAuth(
privateKeyToAccount(Bun.env.PRIVATE_KEY as `0x${string}`),
{
payments: {
amount: "100000", // $0.10 USDC
network: "base-sepolia",
},
}
);
console.log("Balance:", user.balance);
// Step 2 & 3: Configure and start
createDreams({
logLevel: LogLevel.DEBUG,
model: dreamsRouter("google-vertex/gemini-2.5-flash"),
extensions: [cliExtension],
}).start();
```
## How It Works
Each request to your agent:
1. Deducts the configured amount from your wallet balance
2. Routes the request to the specified AI model
3. Returns the response to your agent
4. Handles all blockchain transactions automatically
This creates a true nanoservice where users pay only for what they use, with no
subscription fees or upfront costs.
file: ./content/docs/tutorials/x402/server.mdx
meta: {
"title": "AI Nanoservice with x402 Payments",
"description": "Build a paid AI assistant API using Daydreams agents and x402 micropayments"
}
This tutorial shows you how to create an AI nanoservice - a pay-per-use API
endpoint where users pay micropayments for each AI request. We'll use Daydreams
for the AI agent and x402 for handling payments.
## What You'll Build
A production-ready AI service that:
* Charges $0.01 per API request automatically
* Maintains conversation history per user session
* Handles payments through x402 middleware
* Provides a clean REST API interface
## Prerequisites
* Bun installed (`curl -fsSL https://bun.sh/install | bash`)
* OpenAI API key
* Ethereum wallet with some test funds (for Base Sepolia)
## Step 1: Create the Project
First, set up your project structure:
```bash
mkdir ai-nanoservice
cd ai-nanoservice
bun init -y
```
Install the required dependencies:
```bash
bun add @daydreamsai/core @ai-sdk/openai hono @hono/node-server x402-hono dotenv zod
```
## Step 2: Build the AI Service
Create `server.ts` with the following code:
```typescript
import { config } from "dotenv";
import { Hono } from "hono";
import { serve } from "@hono/node-server";
import { paymentMiddleware, type Network, type Resource } from "x402-hono";
import { createDreams, context, LogLevel } from "@daydreamsai/core";
import { openai } from "@ai-sdk/openai";
import * as z from "zod";
config();
// Payment configuration
const facilitatorUrl = "https://facilitator.x402.rs";
const payTo = (process.env.ADDRESS as `0x${string}`) || "0xYourWalletAddress";
const network = (process.env.NETWORK as Network) || "base-sepolia";
const openaiKey = process.env.OPENAI_API_KEY;
if (!openaiKey) {
console.error("Missing OPENAI_API_KEY");
process.exit(1);
}
```
## Step 3: Define the AI Context
Contexts in Daydreams manage conversation state and memory. Add this to your
`server.ts`:
```typescript
// Memory structure for each session
interface AssistantMemory {
requestCount: number;
lastQuery?: string;
history: Array<{ query: string; response: string; timestamp: Date }>;
}
// Create a stateful context
const assistantContext = context({
type: "ai-assistant",
schema: z.object({
sessionId: z.string().describe("Session identifier"),
}),
create: () => ({
requestCount: 0,
history: [],
}),
render: (state) => {
return `
AI Assistant Session: ${state.args.sessionId}
Requests: ${state.memory.requestCount}
${state.memory.lastQuery ? `Last Query: ${state.memory.lastQuery}` : ""}
Recent History: ${
state.memory.history
.slice(-3)
.map((h) => `- ${h.query}`)
.join("\n") || "None"
}
`.trim();
},
instructions: `You are a helpful AI assistant providing a paid nano service.
You should provide concise, valuable responses to user queries.
Remember the conversation history and context.`,
});
```
## Step 4: Create the Agent
Initialize the Daydreams agent with your context:
```typescript
const agent = createDreams({
logLevel: LogLevel.INFO,
model: openai("gpt-4o-mini"), // Using mini for cost efficiency
contexts: [assistantContext],
inputs: {
text: {
description: "User query",
schema: z.string(),
},
},
outputs: {
text: {
description: "Assistant response",
schema: z.string(),
},
},
});
// Start the agent
await agent.start();
```
## Step 5: Set Up the API Server
Create the Hono server with payment middleware:
```typescript
const app = new Hono();
console.log("AI Assistant nano service is running on port 4021");
console.log(`Payment required: $0.01 per request to ${payTo}`);
// Apply payment middleware to the assistant endpoint
app.use(
paymentMiddleware(
payTo,
{
"/assistant": {
price: "$0.01", // $0.01 per request
network,
},
},
{
url: facilitatorUrl,
}
)
);
```
## Step 6: Implement the Assistant Endpoint
Add the main API endpoint that processes AI requests:
```typescript
app.post("/assistant", async (c) => {
try {
const body = await c.req.json();
const { query, sessionId = "default" } = body;
if (!query) {
return c.json({ error: "Query is required" }, 400);
}
// Get the context state
const contextState = await agent.getContext({
context: assistantContext,
args: { sessionId },
});
// Update request count
contextState.memory.requestCount++;
contextState.memory.lastQuery = query;
// Send query to agent
const result = await agent.send({
context: assistantContext,
args: { sessionId },
input: { type: "text", data: query },
});
// Extract response
const output = result.find((r) => r.ref === "output");
const response =
output && "data" in output
? output.data
: "I couldn't process your request.";
return c.json({
response,
sessionId,
requestCount: contextState.memory.requestCount,
});
} catch (error) {
console.error("Error:", error);
return c.json({ error: "Internal server error" }, 500);
}
});
// Start server
serve({
fetch: app.fetch,
port: 4021,
});
```
## Step 7: Environment Configuration
Create a `.env` file:
```bash
# x402 Payment Configuration
ADDRESS=0xYourWalletAddressHere
NETWORK=base-sepolia
# OpenAI API Key
OPENAI_API_KEY=sk-...
```
## Step 8: Testing Your Service
### Start the Server
```bash
bun run server.ts
```
You should see:
```
AI Assistant nano service is running on port 4021
Payment required: $0.01 per request to 0xYourWallet...
```
### Test with the x402 Client
Create a test client using `x402-fetch`:
```typescript
import { privateKeyToAccount } from "viem/accounts";
import { wrapFetchWithPayment } from "x402-fetch";
const account = privateKeyToAccount("0xYourPrivateKey");
const fetchWithPayment = wrapFetchWithPayment(fetch, account);
// Make a paid request
const response = await fetchWithPayment("http://localhost:4021/assistant", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
query: "What is the capital of France?",
sessionId: "user-123",
}),
});
const result = await response.json();
console.log("AI Response:", result.response);
```
## Understanding the Payment Flow
1. **Client Request**: User sends a POST request to `/assistant`
2. **Payment Middleware**: x402 intercepts and requests payment
3. **Blockchain Transaction**: Client wallet signs and sends micropayment
4. **Request Processing**: After payment confirmation, request reaches your
handler
5. **AI Response**: Agent processes query and returns response
The payment happens automatically when using `x402-fetch` on the client side.
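You can verify the middleware is enforcing payment by calling the endpoint with a plain `fetch` and no payment headers; the expected status is 402 (a sketch - the exact response body depends on x402-hono):
```typescript title="test-402.ts"
// Plain fetch with no x402 payment attached - the middleware should reject it
const res = await fetch("http://localhost:4021/assistant", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "hello", sessionId: "user-123" }),
});
console.log(res.status); // expected: 402 Payment Required
```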
## Advanced Features
The example includes an advanced server (`advanced-server.ts`) with:
* Multiple service tiers (assistant, analyzer, generator)
* Different pricing for each service
* Custom actions for text analysis
* User preferences and credits system
## Production Considerations
1. **Security**: Always use environment variables for sensitive data
2. **Persistence**: Consider using a database for session storage
3. **Scaling**: Use Docker for easy deployment
4. **Monitoring**: Add logging and analytics
5. **Error Handling**: Implement proper error responses
## Complete Example
The full example is available at:
[examples/x402/nanoservice](https://github.com/daydreamsai/daydreams/tree/main/examples/x402/nanoservice)
This includes:
* Basic and advanced server implementations
* Client examples with payment handling
* Docker configuration
* Interactive CLI client
## Next Steps
* Deploy to a cloud provider
* Add custom actions for your use case
* Implement different pricing tiers
* Create a web interface
* Add authentication for user management
file: ./content/docs/router/v1/healthz/get.mdx
meta: {
"title": "Health check",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/healthz",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Basic health check endpoint that returns the operational status of the API server.\n\n**Use Cases:**\n- Load balancer health checks\n- Monitoring system verification\n- Service discovery confirmation\n- Uptime monitoring\n\n**Response:** Simple \"ok\" text response with 200 status code indicates the service is operational.\n\n**Public Endpoint:** No authentication required - designed for automated monitoring systems."
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Basic health check endpoint that returns the operational status of the API server.
**Use Cases:**
* Load balancer health checks
* Monitoring system verification
* Service discovery confirmation
* Uptime monitoring
**Response:** Simple "ok" text response with 200 status code indicates the service is operational.
**Public Endpoint:** No authentication required - designed for automated monitoring systems.
file: ./content/docs/router/v1/models/get.mdx
meta: {
"title": "List all models",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get all available AI models in OpenAI-compatible format"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get all available AI models in OpenAI-compatible format
file: ./content/docs/router/v1/chat/completions/post.mdx
meta: {
"title": "Create chat completion",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/chat/completions",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "# Chat Completions API\n\nGenerate AI chat completions using various models with support for multiple payment methods.\n\n## Payment Methods\n\n### 1. Credit-Based Payments (Traditional)\nPre-fund your account and pay per request. Costs are deducted from your balance automatically.\n\n- **Simple Setup**: Add funds to your account\n- **Instant Processing**: No additional payment verification needed\n- **Predictable Billing**: Pre-pay for usage\n\n### 2. x402 Cryptocurrency Payments\nPay for requests in real-time using cryptocurrency without pre-funding accounts.\n\n- **Supported Assets**: USDC\n- **Networks**: Base\n- **Protocol**: x402 standard for AI micropayments\n- **Benefits**: No account funding, transparent pricing, crypto-native experience\n\n## Cost Calculation\nCosts are calculated based on:\n- **Input Tokens**: Text you send to the model\n- **Output Tokens**: Text generated by the model\n- **Model Pricing**: Different models have different rates\n\n## Error Handling\nThe API handles various error scenarios:\n- **402 Payment Required**: Insufficient balance or invalid x402 payment\n- **429 Rate Limited**: Too many requests\n- **400 Bad Request**: Invalid request parameters\n- **500 Server Error**: Internal processing errors"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
# Chat Completions API
Generate AI chat completions using various models with support for multiple payment methods.
## Payment Methods
### 1. Credit-Based Payments (Traditional)
Pre-fund your account and pay per request. Costs are deducted from your balance automatically.
* **Simple Setup**: Add funds to your account
* **Instant Processing**: No additional payment verification needed
* **Predictable Billing**: Pre-pay for usage
### 2. x402 Cryptocurrency Payments
Pay for requests in real-time using cryptocurrency without pre-funding accounts.
* **Supported Assets**: USDC
* **Networks**: Base
* **Protocol**: x402 standard for AI micropayments
* **Benefits**: No account funding, transparent pricing, crypto-native experience
## Cost Calculation
Costs are calculated based on:
* **Input Tokens**: Text you send to the model
* **Output Tokens**: Text generated by the model
* **Model Pricing**: Different models have different rates
## Error Handling
The API handles various error scenarios:
* **402 Payment Required**: Insufficient balance or invalid x402 payment
* **429 Rate Limited**: Too many requests
* **400 Bad Request**: Invalid request parameters
* **500 Server Error**: Internal processing errors
file: ./content/docs/router/v1/images/edits/post.mdx
meta: {
"title": "Edit images",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/images/edits",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "# Image Editing API\n\nEdit existing images using AI models with text prompts and optional masks. Supports single or multiple input images for models that accept multi-image context.\n\n## Supported Models\n- **gemini-25-flash-image-preview**: Google Gemini model for image editing\n- **fal-ai/bytedance/seedream/v4/edit**: Seedream v4 edit on fal.ai (supports URL and base64 images)\n\n## Request Parameters\n- **model**: The model to use for editing\n- **image / images**: Base64-encoded source image, or an array of images. For multipart, pass one or more image files or use an images array field.\n- **prompt**: Text description of the edits to make\n- **mask**: Optional base64-encoded mask image (areas to edit)\n- **n**: Number of edited variations to generate (1-8)\n- **size**: Target dimensions for output\n- **quality**: Model-specific tier (e.g., \"low/medium/high\" for GPT-Image 1, \"standard/hd\" for Imagen)\n- **response_format**: \"url\" or \"b64_json\"\n\n## Pricing\nSame as image generation - charged per output image"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
# Image Editing API
Edit existing images using AI models with text prompts and optional masks. Supports single or multiple input images for models that accept multi-image context.
## Supported Models
* **gemini-25-flash-image-preview**: Google Gemini model for image editing
* **fal-ai/bytedance/seedream/v4/edit**: Seedream v4 edit on fal.ai (supports URL and base64 images)
## Request Parameters
* **model**: The model to use for editing
* **image / images**: Base64-encoded source image, or an array of images. For multipart, pass one or more image files or use an images array field.
* **prompt**: Text description of the edits to make
* **mask**: Optional base64-encoded mask image (areas to edit)
* **n**: Number of edited variations to generate (1-8)
* **size**: Target dimensions for output
* **quality**: Model-specific tier (e.g., "low/medium/high" for GPT-Image 1, "standard/hd" for Imagen)
* **response\_format**: "url" or "b64\_json"
## Pricing
Same as image generation - charged per output image
file: ./content/docs/router/v1/images/generations/post.mdx
meta: {
"title": "Generate images",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/images/generations",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "# Image Generation API\n\nGenerate images from text prompts using AI models like OpenAI GPT-Image 1 and Google Imagen.\n\n## Supported Models\n- **openai/gpt-image-1**: OpenAI's latest image model with editing, inpainting, and outpainting support via the Images API\n- **imagen-4.0-generate-001**: Google Imagen 4.0 for high-quality image generation\n\n## Request Parameters\n- **prompt**: Text description of the image to generate\n- **n**: Number of images to generate (1-8)\n- **size**: Image dimensions (e.g., \"1024x1024\", \"1920x1080\")\n- **quality**: Model-specific quality tier (e.g., \"low/medium/high\" for GPT-Image 1, \"standard/hd\" for Imagen)\n- **response_format**: \"url\" or \"b64_json\"\n\n## Pricing\nImages are charged per generated output image. GPT-Image 1 supports granular pricing per quality tier and resolution (e.g., low 1024x1024 at $0.011), while Google Imagen continues to offer standard vs. HD (2x) pricing.\n- Platform fee: 20% added to the computed base cost"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
# Image Generation API
Generate images from text prompts using AI models like OpenAI GPT-Image 1 and Google Imagen.
## Supported Models
* **openai/gpt-image-1**: OpenAI's latest image model with editing, inpainting, and outpainting support via the Images API
* **imagen-4.0-generate-001**: Google Imagen 4.0 for high-quality image generation
## Request Parameters
* **prompt**: Text description of the image to generate
* **n**: Number of images to generate (1-8)
* **size**: Image dimensions (e.g., "1024x1024", "1920x1080")
* **quality**: Model-specific quality tier (e.g., "low/medium/high" for GPT-Image 1, "standard/hd" for Imagen)
* **response\_format**: "url" or "b64\_json"
## Pricing
Images are charged per generated output image. GPT-Image 1 supports granular pricing per quality tier and resolution (e.g., low 1024x1024 at $0.011), while Google Imagen continues to offer standard vs. HD (2x) pricing.
* Platform fee: 20% added to the computed base cost
file: ./content/docs/router/v1/images/jobs/get.mdx
meta: {
"title": "List image jobs",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/images/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/images/jobs/post.mdx
meta: {
"title": "Submit image generation job",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/images/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/images/models/get.mdx
meta: {
"title": "List available image generation models",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/images/models",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get a list of all available image generation models with their capabilities and pricing"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get a list of all available image generation models with their capabilities and pricing
file: ./content/docs/router/v1/models/categories/get.mdx
meta: {
"title": "Get model categories",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/categories",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get models grouped by type/category"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get models grouped by type/category
file: ./content/docs/router/v1/models/detailed/get.mdx
meta: {
"title": "Get filtered detailed models",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/detailed",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get detailed model information with filtering capabilities"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get detailed model information with filtering capabilities
file: ./content/docs/router/v1/models/edit-capable/get.mdx
meta: {
"title": "Get image edit-capable models",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/edit-capable",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get models suitable for image editing (filtered by tag=image-editing)."
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get models suitable for image editing (filtered by tag=image-editing).
file: ./content/docs/router/v1/models/id/get.mdx
meta: {
"title": "Get model details",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/{id}",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get detailed information about a specific AI model including capabilities, pricing, limits, and current status"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get detailed information about a specific AI model including capabilities, pricing, limits, and current status
file: ./content/docs/router/v1/models/providers/get.mdx
meta: {
"title": "Get available providers",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/providers",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get list of available AI providers with their model counts and status"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get list of available AI providers with their model counts and status
file: ./content/docs/router/v1/models/recommendations/post.mdx
meta: {
"title": "Get model recommendations",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/models/recommendations",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get model recommendations based on specified requirements"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get model recommendations based on specified requirements
file: ./content/docs/router/v1/models/search/get.mdx
meta: {
"title": "Search models",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/search",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Search models by name, description, or tags"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Search models by name, description, or tags
file: ./content/docs/router/v1/models/stats/get.mdx
meta: {
"title": "Get model statistics",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/models/stats",
"toc": [],
"structuredData": {
"headings": [],
"contents": [
{
"content": "Get comprehensive statistics about available models and recommendations"
}
]
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
Get comprehensive statistics about available models and recommendations
file: ./content/docs/router/v1/videos/jobs/get.mdx
meta: {
"title": "List video jobs",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/videos/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/images/jobs/id/get.mdx
meta: {
"title": "Get image job by id",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/images/jobs/{id}",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/jobs/id/get.mdx
meta: {
"title": "Get video job by id",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/videos/jobs/{id}",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/veo-3/jobs/post.mdx
meta: {
"title": "Submit video generation job (Google Veo 3)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/veo-3/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/veo-3-fast/jobs/post.mdx
meta: {
"title": "Submit fast video generation job (Google Veo 3 Fast)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/veo-3-fast/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/jobs/id/events/get.mdx
meta: {
"title": "Stream job status updates (SSE)",
"full": true,
"_openapi": {
"method": "GET",
"route": "/v1/videos/jobs/{id}/events",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/kling/image-to-video/jobs/post.mdx
meta: {
"title": "Submit image-to-video job (Kling v2.1 Pro)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/kling/image-to-video/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/kling/image-to-video-v2-5-turbo/jobs/post.mdx
meta: {
"title": "Submit image-to-video job (Kling v2.5 Turbo)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/kling/image-to-video-v2.5-turbo/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/wan-25/image-to-video/jobs/post.mdx
meta: {
"title": "Submit image-to-video job (Wan 2.5 Preview)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/wan-25/image-to-video/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/wan-25/text-to-video/jobs/post.mdx
meta: {
"title": "Submit text-to-video job (Wan 2.5 Preview)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/wan-25/text-to-video/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/wan/animate/move/jobs/post.mdx
meta: {
"title": "Submit video animation job (Wan Animate Move)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/wan/animate/move/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}
file: ./content/docs/router/v1/videos/wan/animate/replace/jobs/post.mdx
meta: {
"title": "Submit video animation job (Wan Animate Replace)",
"full": true,
"_openapi": {
"method": "POST",
"route": "/v1/videos/wan/animate/replace/jobs",
"toc": [],
"structuredData": {
"headings": [],
"contents": []
}
}
}
{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}