Outputs
How Daydreams agents send information and responses.
Outputs are how Daydreams agents communicate results or send information to external systems or users. If Inputs are how agents "listen," Outputs are how they "speak" or "act" based on the LLM's reasoning.
Examples of outputs include:
- Sending a message to a Discord channel or Telegram chat.
- Posting a tweet.
- Returning a response in a CLI session.
- Calling an external API based on the agent's decision.
Defining an Output
Outputs are defined using the `output` helper function exported from `@daydreamsai/core`. Each definition specifies how the agent should structure information for a particular output channel and how to execute the sending logic.
Key Parameters:
- `type` (string): Unique identifier used in `<output type="...">`.
- `description` / `instructions` (string, optional): Help the LLM understand what the output does and when to use it.
- `schema` (Zod schema, optional): Defines the structure and validates the content placed inside the `<output>` tag by the LLM. Defaults to `z.string()`.
- `attributes` (Zod schema, optional): Defines and validates attributes placed on the `<output>` tag itself (e.g., `<output type="discord:message" channelId="...">`). These provide necessary parameters for the `handler`.
- `handler` (function): Executes the logic to send the information externally. It receives:
  - `data`: The validated content from the `schema`.
  - `ctx`: The `AgentContext`, including `ctx.outputRef`, which contains the parsed `params` (attributes) and original `content`.
  - `agent`: The agent instance.

  It can optionally return an `OutputRefResponse` (or an array of them) to update the log entry or mark it as processed.
- `format` (function, optional): Customizes the log representation of the `OutputRef`.
- `examples` (string[], optional): Provides concrete examples to the LLM of how to structure the `<output>` tag.
- `install` / `enabled` / `context` (optional): Similar to the corresponding options on Actions and Inputs; used for setup, conditional availability, and context scoping.
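Below is a minimal sketch of a Discord message output assembled from the parameters above. The `discordClient` import and its `sendMessage` method are placeholders for whatever client your integration provides, and the exact option and type names should be checked against the current `@daydreamsai/core` definitions:

```ts
import { output } from "@daydreamsai/core";
import { z } from "zod";

// Placeholder client -- substitute whatever Discord (or other) client you use.
import { discordClient } from "./discord-client";

const discordMessage = output({
  type: "discord:message",
  description: "Sends a message to a Discord channel.",
  instructions: "Use this to reply to users on Discord.",

  // Content the LLM places inside the <output> tag.
  schema: z.string().describe("The message text to send."),

  // Attributes the LLM places on the <output> tag itself.
  attributes: z.object({
    channelId: z.string().describe("The Discord channel to send to."),
  }),

  // Sends the validated content to the external system.
  handler: async (data, ctx, agent) => {
    const { channelId } = ctx.outputRef.params; // parsed tag attributes
    await discordClient.sendMessage(channelId, data);
    // Optionally return an OutputRefResponse (or array) to update the OutputRef log.
  },

  // Concrete examples help the LLM structure the <output> tag correctly.
  examples: [
    `<output type="discord:message" channelId="123456789">Hello from the agent!</output>`,
  ],
});
```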
LLM Interaction
- Availability: Enabled outputs are presented to the LLM within the `<available-outputs>` tag in the prompt, including their type, description, instructions, content schema (`content_schema`), attribute schema (`attributes_schema`), and examples.
- Invocation: The LLM generates an output by including an `<output>` tag in its response stream that matches one of the available types. It must provide any required attributes defined in the `attributes` schema, and the content inside the tag must match the `schema`.
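Continuing the Discord example, the LLM's response stream might then contain something like the following (the `channelId` attribute is required by that output's `attributes` schema; the value here is illustrative):

```xml
<output type="discord:message" channelId="123456789">
  Hello! I've finished processing your request.
</output>
```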
Execution Flow
- Parsing: When the framework parses an
<output>
tag from the LLM stream (handleStream
instreaming.ts
), it extracts thetype
,attributes
, andcontent
. - Log Creation: An initial
OutputRef
log is created (getOrCreateRef
instreaming.ts
). - Processing: Once the tag is fully parsed (
el.done
),handlePushLog
callshandleOutputStream
(local) which in turn callshandleOutput
(handlers.ts
). - Validation:
handleOutput
finds the corresponding output definition bytype
. It validates the extractedcontent
against theoutput.schema
and the extractedattributes
against theoutput.attributes
schema. - Handler Execution: If validation passes,
handleOutput
executes theoutput.handler
function, passing the validated content (data
) and theAgentContext
(which includes theoutputRef
containing parsed attributes inoutputRef.params
). - External Action: The
handler
performs the necessary external operation (e.g., sending the Discord message). - Logging: The
handler
can optionally return data to update theOutputRef
log. TheOutputRef
is added to theWorkingMemory
.
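The validation and dispatch steps can be summarized with a simplified sketch. This is only an illustration of what `handleOutput` does conceptually; the function name, parameter shapes, and error handling below are assumptions rather than the actual `handlers.ts` code:

```ts
// Illustrative only -- the real logic lives in handleOutput (handlers.ts).
async function dispatchOutput(
  outputRef: { type: string; content: unknown; params: Record<string, string> },
  outputs: Record<string, any>,
  ctx: any,
  agent: any
) {
  // Validation: look up the definition by type, then validate content and attributes.
  const definition = outputs[outputRef.type];
  if (!definition) throw new Error(`Unknown output type: ${outputRef.type}`);

  const data = definition.schema.parse(outputRef.content);
  if (definition.attributes) definition.attributes.parse(outputRef.params);

  // Handler execution: perform the external action (e.g. send the Discord message).
  const result = await definition.handler(data, { ...ctx, outputRef }, agent);

  // Logging: the optional result can update the OutputRef before it is
  // appended to WorkingMemory.
  return result;
}
```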
Outputs allow the agent to respond and communicate, completing the interaction loop initiated by Inputs and guided by Actions.