Agents can communicate with AI models hosted on any provider, including Workers AI, the AI SDK, OpenAI-compatible APIs, and Google's Gemini.
You can also use the model routing features in AI Gateway to route across providers, evaluate responses, and manage AI provider rate limits.
Because Agents are built on top of Durable Objects, each Agent or chat session is associated with a stateful compute instance. Traditional serverless architectures often present challenges for the persistent connections that real-time applications like chat require.
A user might disconnect during a long-running response from a modern reasoning model (such as o3-mini or DeepSeek R1), or lose conversational context when refreshing the browser. Instead of relying on request-response patterns and managing an external database to track and store conversation state, Agents can store state directly within themselves. If a client disconnects, the Agent can write to its own distributed storage and catch the client up the moment it reconnects, even hours or days later.
You can call models from any method within an Agent, including when handling an HTTP request in the onRequest handler, when running a scheduled task, when handling a WebSocket message in the onMessage handler, or from any of your own methods.
Importantly, Agents can call AI models autonomously, and can handle long-running responses that may take minutes (or longer) to complete in full.
Modern reasoning models ↗ or "thinking" models can take some time both to generate a response and to stream that response back to the client.
Instead of buffering the entire response, or risking the client disconnecting, you can stream the response back to the client using the WebSockets API.
import { Agent } from "agents";import { OpenAI } from "openai";
export class MyAgent extends Agent { async onConnect(connection, ctx) { // }
async onMessage(connection, message) { let msg = JSON.parse(message); // 这可以运行任意长的时间,并返回任意数量的消息! await queryReasoningModel(connection, msg.prompt); }
async queryReasoningModel(connection, userPrompt) { const client = new OpenAI({ apiKey: this.env.OPENAI_API_KEY, });
try { const stream = await client.chat.completions.create({ model: this.env.MODEL || "o3-mini", messages: [{ role: "user", content: userPrompt }], stream: true, });
// 将响应作为 WebSocket 消息流式传输回去 for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content || ""; if (content) { connection.send(JSON.stringify({ type: "chunk", content })); } }
// 发送完成消息 connection.send(JSON.stringify({ type: "done" })); } catch (error) { connection.send(JSON.stringify({ type: "error", error: error })); } }}
import { Agent } from "agents";import { OpenAI } from "openai";
export class MyAgent extends Agent<Env> { async onConnect(connection: Connection, ctx: ConnectionContext) { // }
async onMessage(connection: Connection, message: WSMessage) { let msg = JSON.parse(message); // 这可以运行任意长的时间,并返回任意数量的消息! await queryReasoningModel(connection, msg.prompt); }
async queryReasoningModel(connection: Connection, userPrompt: string) { const client = new OpenAI({ apiKey: this.env.OPENAI_API_KEY, });
try { const stream = await client.chat.completions.create({ model: this.env.MODEL || "o3-mini", messages: [{ role: "user", content: userPrompt }], stream: true, });
// 将响应作为 WebSocket 消息流式传输回去 for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content || ""; if (content) { connection.send(JSON.stringify({ type: "chunk", content })); } }
// 发送完成消息 connection.send(JSON.stringify({ type: "done" })); } catch (error) { connection.send(JSON.stringify({ type: "error", error: error })); } }}
You can also persist AI model responses to the Agent's internal state using the this.setState method. For example, if you run a scheduled task, you can store the task's output and read it back later. Or, if a user disconnects, you can read the message history back and send it to the user when they reconnect.
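As a minimal sketch of that pattern (the State shape, the storeMessage helper, and the "history" message type below are illustrative, not part of the SDK), an Agent can append each model response to its state and replay the stored history whenever a client connects:

import { Agent, type Connection } from "agents";

// Hypothetical state shape for illustration
interface State {
  messages: { role: string; content: string }[];
}

export class MyAgent extends Agent<Env, State> {
  initialState: State = { messages: [] };

  // Append a model response (or task output) to the Agent's persisted state
  storeMessage(role: string, content: string) {
    this.setState({
      messages: [...this.state.messages, { role, content }],
    });
  }

  // When a client connects (or reconnects), replay the stored history
  async onConnect(connection: Connection) {
    connection.send(
      JSON.stringify({ type: "history", messages: this.state.messages }),
    );
  }
}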
You can use any model available in Workers AI within your Agent by configuring a binding.
Workers AI supports streaming responses out of the box by setting stream: true, and we strongly recommend using them to avoid buffering and delaying responses, especially for larger models or reasoning models that require more time to generate a response.
import { Agent } from "agents";
export class MyAgent extends Agent { async onRequest(request) { const response = await env.AI.run( "@cf/deepseek-ai/deepseek-r1-distill-qwen-32b", { prompt: "为我构建一个返回 JSON 的 Cloudflare Worker。", stream: true, // 流式传输响应,不阻塞客户端! }, );
// 返回流 return new Response(answer, { headers: { "content-type": "text/event-stream" }, }); }}
import { Agent } from "agents";
interface Env { AI: Ai;}
export class MyAgent extends Agent<Env> { async onRequest(request: Request) { const response = await env.AI.run( "@cf/deepseek-ai/deepseek-r1-distill-qwen-32b", { prompt: "为我构建一个返回 JSON 的 Cloudflare Worker。", stream: true, // 流式传输响应,不阻塞客户端! }, );
// 返回流 return new Response(answer, { headers: { "content-type": "text/event-stream" }, }); }}
Your Wrangler configuration needs an ai binding added:
{ "ai": { "binding": "AI" }}
[ai]binding = "AI"
You can also use the model routing features in AI Gateway directly from an Agent by specifying a gateway configuration when calling the AI binding.
import { Agent } from "agents";
export class MyAgent extends Agent { async onRequest(request) { const response = await env.AI.run( "@cf/deepseek-ai/deepseek-r1-distill-qwen-32b", { prompt: "为我构建一个返回 JSON 的 Cloudflare Worker。", }, { gateway: { id: "{gateway_id}", // 在此处指定您的 AI Gateway ID skipCache: false, cacheTtl: 3360, }, }, );
return Response.json(response); }}
import { Agent } from "agents";
interface Env { AI: Ai;}
export class MyAgent extends Agent<Env> { async onRequest(request: Request) { const response = await env.AI.run( "@cf/deepseek-ai/deepseek-r1-distill-qwen-32b", { prompt: "为我构建一个返回 JSON 的 Cloudflare Worker。", }, { gateway: { id: "{gateway_id}", // 在此处指定您的 AI Gateway ID skipCache: false, cacheTtl: 3360, }, }, );
return Response.json(response); }}
Your Wrangler configuration needs an ai binding added. This binding is shared across both Workers AI and AI Gateway.
{ "ai": { "binding": "AI" }}
[ai]binding = "AI"
Visit the AI Gateway documentation to learn how to configure a gateway and retrieve a gateway ID.
The AI SDK ↗ provides a unified API for using AI models, including for text generation, tool calling, structured responses, image generation, and more.
To use the AI SDK, install the ai package and use it within your Agent. The example below shows how to use it to generate text on request, but you can use it from any method within your Agent, including WebSocket handlers, as part of a scheduled task, or even when the Agent is initialized.
npm i ai @ai-sdk/openai
yarn add ai @ai-sdk/openai
pnpm add ai @ai-sdk/openai
import { Agent } from "agents";import { generateText } from "ai";import { openai } from "@ai-sdk/openai";
export class MyAgent extends Agent { async onRequest(request) { const { text } = await generateText({ model: openai("o3-mini"), prompt: "Build me an AI agent on Cloudflare Workers", });
return Response.json({ modelResponse: text }); }}
import { Agent } from "agents";import { generateText } from "ai";import { openai } from "@ai-sdk/openai";
export class MyAgent extends Agent<Env> { async onRequest(request: Request): Promise<Response> { const { text } = await generateText({ model: openai("o3-mini"), prompt: "Build me an AI agent on Cloudflare Workers", });
return Response.json({ modelResponse: text }); }}
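The AI SDK's structured-output helpers work the same way from an Agent. As a minimal sketch of the structured responses mentioned above (assuming the zod package is installed alongside ai; the MyStructuredAgent name and schema are illustrative), generateObject returns a typed object instead of free-form text:

import { Agent } from "agents";
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

export class MyStructuredAgent extends Agent<Env> {
  async onRequest(request: Request): Promise<Response> {
    // Ask the model for output that matches a schema instead of free-form text
    const { object } = await generateObject({
      model: openai("o3-mini"),
      schema: z.object({
        title: z.string(),
        steps: z.array(z.string()),
      }),
      prompt: "Outline the steps to deploy a Cloudflare Worker",
    });

    return Response.json(object);
  }
}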
Agents can call models across any service, including those that support the OpenAI API. For example, you can use the OpenAI SDK to use one of Google's Gemini models ↗ directly from your Agent.
Agents can stream responses back over HTTP using Server Sent Events (SSE) from within an onRequest handler, or by using the native WebSockets API in your Agent to stream responses back to a client, which is especially useful for larger models that can take 30+ seconds to reply.
import { Agent } from "agents";import { OpenAI } from "openai";
export class MyAgent extends Agent { async onRequest(request) { const openai = new OpenAI({ apiKey: this.env.GEMINI_API_KEY, baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/", });
// Create a TransformStream to handle streaming data let { readable, writable } = new TransformStream(); let writer = writable.getWriter(); const textEncoder = new TextEncoder();
// Use ctx.waitUntil to run the async function in the background // so that it doesn't block the streaming response ctx.waitUntil( (async () => { const stream = await openai.chat.completions.create({ model: "4o", messages: [ { role: "user", content: "Write me a Cloudflare Worker." }, ], stream: true, });
// loop over the data as it is streamed and write to the writeable for await (const part of stream) { writer.write( textEncoder.encode(part.choices[0]?.delta?.content || ""), ); } writer.close(); })(), );
// Return the readable stream back to the client return new Response(readable); }}
import { Agent } from "agents";import { OpenAI } from "openai";
export class MyAgent extends Agent<Env> { async onRequest(request: Request): Promise<Response> { const openai = new OpenAI({ apiKey: this.env.GEMINI_API_KEY, baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/", });
// Create a TransformStream to handle streaming data let { readable, writable } = new TransformStream(); let writer = writable.getWriter(); const textEncoder = new TextEncoder();
// Use ctx.waitUntil to run the async function in the background // so that it doesn't block the streaming response ctx.waitUntil( (async () => { const stream = await openai.chat.completions.create({ model: "4o", messages: [ { role: "user", content: "Write me a Cloudflare Worker." }, ], stream: true, });
// loop over the data as it is streamed and write to the writeable for await (const part of stream) { writer.write( textEncoder.encode(part.choices[0]?.delta?.content || ""), ); } writer.close(); })(), );
// Return the readable stream back to the client return new Response(readable); }}
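The example above returns a raw text stream. If you want browsers to consume the response with the standard EventSource API, each chunk needs the SSE wire format and the text/event-stream content type. A minimal sketch (the toServerSentEvents helper and its chunks parameter are illustrative, not part of the SDK; chunks stands in for any async iterable of text pieces, such as the delta content from the model stream above):

// Wrap streamed text chunks in the Server-Sent Events wire format.
function toServerSentEvents(chunks: AsyncIterable<string>): Response {
  const { readable, writable } = new TransformStream();
  const writer = writable.getWriter();
  const encoder = new TextEncoder();

  (async () => {
    for await (const chunk of chunks) {
      // Each event is "data: <payload>" followed by a blank line
      writer.write(encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`));
    }
    // Signal the end of the stream to the client
    writer.write(encoder.encode("data: [DONE]\n\n"));
    await writer.close();
  })();

  return new Response(readable, {
    headers: { "content-type": "text/event-stream" },
  });
}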