Initial commit: claude-api skill

Commit 268400a2dc (2026-03-22 11:41:50 +08:00): 26 changed files, 5454 additions, 0 deletions

# Agent SDK — TypeScript
The Claude Agent SDK provides a higher-level interface for building AI agents with built-in tools, safety features, and agentic capabilities.
## Installation
```bash
npm install @anthropic-ai/claude-agent-sdk
```
---
## Quick Start
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
for await (const message of query({
prompt: "Explain this codebase",
options: { allowedTools: ["Read", "Glob", "Grep"] },
})) {
if ("result" in message) {
console.log(message.result);
}
}
```
---
## Built-in Tools
| Tool            | Description                          |
| --------------- | ------------------------------------ |
| Read            | Read files in the workspace          |
| Write           | Create new files                     |
| Edit            | Make precise edits to existing files |
| Bash            | Execute shell commands               |
| Glob            | Find files by pattern                |
| Grep            | Search files by content              |
| WebSearch       | Search the web for information       |
| WebFetch        | Fetch and analyze web pages          |
| AskUserQuestion | Ask user clarifying questions        |
| Agent           | Spawn subagents                      |
---
## Permission System
```typescript
for await (const message of query({
prompt: "Refactor the authentication module",
options: {
allowedTools: ["Read", "Edit", "Write"],
permissionMode: "acceptEdits",
},
})) {
if ("result" in message) console.log(message.result);
}
```
Permission modes:
- `"default"`: Prompt for dangerous operations
- `"plan"`: Planning only, no execution
- `"acceptEdits"`: Auto-accept file edits
- `"dontAsk"`: Don't prompt (useful for CI/CD)
- `"bypassPermissions"`: Skip all prompts (requires `allowDangerouslySkipPermissions: true` in options)
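In CI, the last mode pairs with its required opt-in flag; a sketch (the prompt and tool list are illustrative, and this should only run in sandboxed, disposable environments):
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";

// CI/CD sketch: skip all permission prompts. Both options are required together.
for await (const message of query({
  prompt: "Run the test suite and report failures",
  options: {
    allowedTools: ["Read", "Bash"],
    permissionMode: "bypassPermissions",
    allowDangerouslySkipPermissions: true,
  },
})) {
  if ("result" in message) console.log(message.result);
}
```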
---
## MCP (Model Context Protocol) Support
```typescript
for await (const message of query({
prompt: "Open example.com and describe what you see",
options: {
mcpServers: {
playwright: { command: "npx", args: ["@playwright/mcp@latest"] },
},
},
})) {
if ("result" in message) console.log(message.result);
}
```
### In-Process MCP Tools
You can define custom tools that run in-process using `tool()` and `createSdkMcpServer`:
```typescript
import { query, tool, createSdkMcpServer } from "@anthropic-ai/claude-agent-sdk";
import { z } from "zod";
const myTool = tool("my-tool", "Description", { input: z.string() }, async (args) => {
return { content: [{ type: "text", text: "result" }] };
});
const server = createSdkMcpServer({ name: "my-server", tools: [myTool] });
// Pass to query
for await (const message of query({
prompt: "Use my-tool to do something",
options: { mcpServers: { myServer: server } },
})) {
if ("result" in message) console.log(message.result);
}
```
---
## Hooks
```typescript
import { query, HookCallback } from "@anthropic-ai/claude-agent-sdk";
import { appendFileSync } from "fs";
const logFileChange: HookCallback = async (input) => {
const filePath = (input as any).tool_input?.file_path ?? "unknown";
appendFileSync(
"./audit.log",
`${new Date().toISOString()}: modified ${filePath}\n`,
);
return {};
};
for await (const message of query({
prompt: "Refactor utils.py to improve readability",
options: {
allowedTools: ["Read", "Edit", "Write"],
permissionMode: "acceptEdits",
hooks: {
PostToolUse: [{ matcher: "Edit|Write", hooks: [logFileChange] }],
},
},
})) {
if ("result" in message) console.log(message.result);
}
```
Available hook events: `PreToolUse`, `PostToolUse`, `PostToolUseFailure`, `Notification`, `UserPromptSubmit`, `SessionStart`, `SessionEnd`, `Stop`, `SubagentStart`, `SubagentStop`, `PreCompact`, `PermissionRequest`, `Setup`, `TeammateIdle`, `TaskCompleted`, `ConfigChange`
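For instance, a `PreToolUse` hook can log each shell command before it runs; returning an empty object leaves the normal permission flow untouched (the log path is illustrative):
```typescript
import { query, HookCallback } from "@anthropic-ai/claude-agent-sdk";
import { appendFileSync } from "fs";

// Record every Bash invocation before it executes.
const logCommand: HookCallback = async (input) => {
  const command = (input as any).tool_input?.command ?? "unknown";
  appendFileSync("./commands.log", `${new Date().toISOString()}: ${command}\n`);
  return {}; // empty result: no change to permissions or tool input
};

for await (const message of query({
  prompt: "Run the unit tests",
  options: {
    allowedTools: ["Read", "Bash"],
    hooks: {
      PreToolUse: [{ matcher: "Bash", hooks: [logCommand] }],
    },
  },
})) {
  if ("result" in message) console.log(message.result);
}
```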
---
## Common Options
`query()` takes a top-level `prompt` (string) and an `options` object:
```typescript
query({ prompt: "...", options: { ... } })
```
| Option | Type | Description |
| ----------------------------------- | ------ | -------------------------------------------------------------------------- |
| `cwd` | string | Working directory for file operations |
| `allowedTools` | array | Tools the agent can use (e.g., `["Read", "Edit", "Bash"]`) |
| `tools` | array | Built-in tools to make available (restricts the default set) |
| `disallowedTools` | array | Tools to explicitly disallow |
| `permissionMode` | string | How to handle permission prompts |
| `allowDangerouslySkipPermissions` | bool | Must be `true` to use `permissionMode: "bypassPermissions"` |
| `mcpServers` | object | MCP servers to connect to |
| `hooks` | object | Hooks for customizing behavior |
| `systemPrompt` | string | Custom system prompt |
| `maxTurns` | number | Maximum agent turns before stopping |
| `maxBudgetUsd` | number | Maximum budget in USD for the query |
| `model` | string | Model ID (default: determined by CLI) |
| `agents` | object | Subagent definitions (`Record<string, AgentDefinition>`) |
| `outputFormat` | object | Structured output schema |
| `thinking` | object | Thinking/reasoning control |
| `betas` | array | Beta features to enable (e.g., `["context-1m-2025-08-07"]`) |
| `settingSources` | array | Settings to load (e.g., `["project"]`). Default: none (no CLAUDE.md files) |
| `env` | object | Environment variables to set for the session |
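Several of these options typically appear together; a sketch combining the common ones (the path, limits, and prompt are illustrative):
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";

for await (const message of query({
  prompt: "Summarize recent changes to the data layer",
  options: {
    cwd: "/path/to/project", // scope all file operations
    allowedTools: ["Read", "Glob", "Grep"],
    maxTurns: 10, // cap agent iterations
    maxBudgetUsd: 1.0, // hard cost ceiling
    systemPrompt: "Be concise and cite file paths for every claim.",
  },
})) {
  if ("result" in message) console.log(message.result);
}
```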
---
## Subagents
```typescript
for await (const message of query({
prompt: "Use the code-reviewer agent to review this codebase",
options: {
allowedTools: ["Read", "Glob", "Grep", "Agent"],
agents: {
"code-reviewer": {
description: "Expert code reviewer for quality and security reviews.",
prompt: "Analyze code quality and suggest improvements.",
tools: ["Read", "Glob", "Grep"],
},
},
},
})) {
if ("result" in message) console.log(message.result);
}
```
---
## Message Types
```typescript
for await (const message of query({
prompt: "Find TODO comments",
options: { allowedTools: ["Read", "Glob", "Grep"] },
})) {
if ("result" in message) {
console.log(message.result);
} else if (message.type === "system" && message.subtype === "init") {
const sessionId = message.session_id; // Capture for resuming later
}
}
```
---
## Best Practices
1. **Always specify allowedTools** — Explicitly list which tools the agent can use
2. **Set working directory** — Always specify `cwd` for file operations
3. **Use appropriate permission modes** — Start with `"default"` and only escalate when needed
4. **Handle all message types** — Check for `result` property to get agent output
5. **Limit maxTurns** — Prevent runaway agents with reasonable limits

# Agent SDK Patterns — TypeScript
## Basic Agent
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
async function main() {
for await (const message of query({
prompt: "Explain what this repository does",
options: {
cwd: "/path/to/project",
allowedTools: ["Read", "Glob", "Grep"],
},
})) {
if ("result" in message) {
console.log(message.result);
}
}
}
main();
```
---
## Hooks
### After Tool Use Hook
```typescript
import { query, HookCallback } from "@anthropic-ai/claude-agent-sdk";
import { appendFileSync } from "fs";
const logFileChange: HookCallback = async (input) => {
const filePath = (input as any).tool_input?.file_path ?? "unknown";
appendFileSync(
"./audit.log",
`${new Date().toISOString()}: modified ${filePath}\n`,
);
return {};
};
for await (const message of query({
prompt: "Refactor utils.py to improve readability",
options: {
allowedTools: ["Read", "Edit", "Write"],
permissionMode: "acceptEdits",
hooks: {
PostToolUse: [{ matcher: "Edit|Write", hooks: [logFileChange] }],
},
},
})) {
if ("result" in message) console.log(message.result);
}
```
---
## Subagents
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
for await (const message of query({
prompt: "Use the code-reviewer agent to review this codebase",
options: {
allowedTools: ["Read", "Glob", "Grep", "Agent"],
agents: {
"code-reviewer": {
description: "Expert code reviewer for quality and security reviews.",
prompt: "Analyze code quality and suggest improvements.",
tools: ["Read", "Glob", "Grep"],
},
},
},
})) {
if ("result" in message) console.log(message.result);
}
```
---
## MCP Server Integration
### Browser Automation (Playwright)
```typescript
for await (const message of query({
prompt: "Open example.com and describe what you see",
options: {
mcpServers: {
playwright: { command: "npx", args: ["@playwright/mcp@latest"] },
},
},
})) {
if ("result" in message) console.log(message.result);
}
```
---
## Session Resumption
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
let sessionId: string | undefined;
// First query: capture the session ID
for await (const message of query({
prompt: "Read the authentication module",
options: { allowedTools: ["Read", "Glob"] },
})) {
if (message.type === "system" && message.subtype === "init") {
sessionId = message.session_id;
}
}
// Resume with full context from the first query
for await (const message of query({
prompt: "Now find all places that call it",
options: { resume: sessionId },
})) {
if ("result" in message) console.log(message.result);
}
```
---
## Custom System Prompt
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
for await (const message of query({
prompt: "Review this code",
options: {
allowedTools: ["Read", "Glob", "Grep"],
systemPrompt: `You are a senior code reviewer focused on:
1. Security vulnerabilities
2. Performance issues
3. Code maintainability
Always provide specific line numbers and suggestions for improvement.`,
},
})) {
if ("result" in message) console.log(message.result);
}
```

# Claude API — TypeScript
## Installation
```bash
npm install @anthropic-ai/sdk
```
## Client Initialization
```typescript
import Anthropic from "@anthropic-ai/sdk";
// Default (uses ANTHROPIC_API_KEY env var)
const client = new Anthropic();
// Or pass an explicit API key:
// const client = new Anthropic({ apiKey: "your-api-key" });
```
---
## Basic Message Request
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "What is the capital of France?" }],
});
console.log(response.content[0].text);
```
---
## System Prompts
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
system:
"You are a helpful coding assistant. Always provide examples in Python.",
messages: [{ role: "user", content: "How do I read a JSON file?" }],
});
```
---
## Vision (Images)
### URL
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: [
{
type: "image",
source: { type: "url", url: "https://example.com/image.png" },
},
{ type: "text", text: "Describe this image" },
],
},
],
});
```
### Base64
```typescript
import fs from "fs";
const imageData = fs.readFileSync("image.png").toString("base64");
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: [
{
type: "image",
source: { type: "base64", media_type: "image/png", data: imageData },
},
{ type: "text", text: "What's in this image?" },
],
},
],
});
```
---
## Prompt Caching
### Automatic Caching (Recommended)
Use top-level `cache_control` to automatically cache the last cacheable block in the request:
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
cache_control: { type: "ephemeral" }, // auto-caches the last cacheable block
system: "You are an expert on this large document...",
messages: [{ role: "user", content: "Summarize the key points" }],
});
```
### Manual Cache Control
For fine-grained control, add `cache_control` to specific content blocks:
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
system: [
{
type: "text",
text: "You are an expert on this large document...",
cache_control: { type: "ephemeral" }, // default TTL is 5 minutes
},
],
messages: [{ role: "user", content: "Summarize the key points" }],
});
// With explicit TTL (time-to-live)
const response2 = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
system: [
{
type: "text",
text: "You are an expert on this large document...",
cache_control: { type: "ephemeral", ttl: "1h" }, // 1 hour TTL
},
],
messages: [{ role: "user", content: "Summarize the key points" }],
});
```
---
## Extended Thinking
> **Opus 4.6 and Sonnet 4.6:** Use adaptive thinking. `budget_tokens` is deprecated on both Opus 4.6 and Sonnet 4.6.
> **Older models:** Use `thinking: {type: "enabled", budget_tokens: N}` (must be < `max_tokens`, min 1024).
```typescript
// Opus 4.6: adaptive thinking (recommended)
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 16000,
thinking: { type: "adaptive" },
output_config: { effort: "high" }, // low | medium | high | max
messages: [
{ role: "user", content: "Solve this math problem step by step..." },
],
});
for (const block of response.content) {
if (block.type === "thinking") {
console.log("Thinking:", block.thinking);
} else if (block.type === "text") {
console.log("Response:", block.text);
}
}
```
---
## Error Handling
Use the SDK's typed exception classes — never check error messages with string matching:
```typescript
import Anthropic from "@anthropic-ai/sdk";
try {
const response = await client.messages.create({...});
} catch (error) {
if (error instanceof Anthropic.BadRequestError) {
console.error("Bad request:", error.message);
} else if (error instanceof Anthropic.AuthenticationError) {
console.error("Invalid API key");
} else if (error instanceof Anthropic.RateLimitError) {
console.error("Rate limited - retry later");
} else if (error instanceof Anthropic.APIError) {
console.error(`API error ${error.status}:`, error.message);
}
}
```
All classes extend `Anthropic.APIError` with a typed `status` field. Check from most specific to least specific. See [shared/error-codes.md](../../shared/error-codes.md) for the full error code reference.
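The client also retries transient failures (rate limits, 5xx) automatically with exponential backoff; a sketch of tuning that behavior via the standard client options:
```typescript
import Anthropic from "@anthropic-ai/sdk";

// Tune built-in retry/timeout behavior instead of hand-rolling catch loops.
const client = new Anthropic({
  maxRetries: 4, // default is 2
  timeout: 60_000, // per-request timeout in milliseconds
});

// Both can also be overridden per request:
const response = await client.messages.create(
  {
    model: "claude-opus-4-6",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello" }],
  },
  { maxRetries: 5 },
);
```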
---
## Multi-Turn Conversations
The API is stateless — send the full conversation history each time. Use `Anthropic.MessageParam[]` to type the messages array:
```typescript
const messages: Anthropic.MessageParam[] = [
{ role: "user", content: "My name is Alice." },
{ role: "assistant", content: "Hello Alice! Nice to meet you." },
{ role: "user", content: "What's my name?" },
];
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: messages,
});
```
**Rules:**
- Messages must alternate between `user` and `assistant`
- First message must be `user`
- Use SDK types (`Anthropic.MessageParam`, `Anthropic.Message`, `Anthropic.Tool`, etc.) for all API data structures — don't redefine equivalent interfaces
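Continuing the example above: append the assistant's reply (the full `response.content`, not a re-typed string) plus the next user turn, then resend the whole history:
```typescript
// Extend the history with the assistant's actual content blocks.
messages.push({ role: "assistant", content: response.content });
messages.push({ role: "user", content: "And what's the capital of France?" });

const followup = await client.messages.create({
  model: "claude-opus-4-6",
  max_tokens: 1024,
  messages,
});
```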
---
### Compaction (long conversations)
> **Beta, Opus 4.6 only.** When conversations approach the 200K context window, compaction automatically summarizes earlier context server-side. The API returns a `compaction` block; you must pass it back on subsequent requests — append `response.content`, not just the text.
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const messages: Anthropic.Beta.BetaMessageParam[] = [];
async function chat(userMessage: string): Promise<string> {
messages.push({ role: "user", content: userMessage });
const response = await client.beta.messages.create({
betas: ["compact-2026-01-12"],
model: "claude-opus-4-6",
max_tokens: 4096,
messages,
context_management: {
edits: [{ type: "compact_20260112" }],
},
});
// Append full content — compaction blocks must be preserved
messages.push({ role: "assistant", content: response.content });
const textBlock = response.content.find((block) => block.type === "text");
return textBlock?.text ?? "";
}
// Compaction triggers automatically when context grows large
console.log(await chat("Help me build a Python web scraper"));
console.log(await chat("Add support for JavaScript-rendered pages"));
console.log(await chat("Now add rate limiting and error handling"));
```
---
## Stop Reasons
The `stop_reason` field in the response indicates why the model stopped generating:
| Value | Meaning |
| --------------- | --------------------------------------------------------------- |
| `end_turn` | Claude finished its response naturally |
| `max_tokens` | Hit the `max_tokens` limit — increase it or use streaming |
| `stop_sequence` | Hit a custom stop sequence |
| `tool_use` | Claude wants to call a tool — execute it and continue |
| `pause_turn` | Model paused and can be resumed (agentic flows) |
| `refusal` | Claude refused for safety reasons — output may not match schema |
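A small dispatch helper makes the follow-up action explicit (this mapping is a sketch; full `tool_use` and `pause_turn` handling is covered in the tool-use docs):
```typescript
// Map a stop_reason to what the caller should do next.
function nextAction(
  stopReason: string | null,
): "done" | "retry_larger" | "run_tools" | "resend" {
  switch (stopReason) {
    case "max_tokens":
      return "retry_larger"; // raise max_tokens or switch to streaming
    case "tool_use":
      return "run_tools"; // execute tools, reply with tool_result blocks
    case "pause_turn":
      return "resend"; // resend the history so the model can continue
    case "end_turn":
    case "stop_sequence":
    case "refusal":
    default:
      return "done";
  }
}
```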
---
## Cost Optimization Strategies
### 1. Use Prompt Caching for Repeated Context
```typescript
// Automatic caching (simplest — caches the last cacheable block)
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
cache_control: { type: "ephemeral" },
system: largeDocumentText, // e.g., 50KB of context
messages: [{ role: "user", content: "Summarize the key points" }],
});
// First request: full cost
// Subsequent requests: ~90% cheaper for cached portion
```
### 2. Use Token Counting Before Requests
```typescript
const countResponse = await client.messages.countTokens({
model: "claude-opus-4-6",
messages: messages,
system: system,
});
const estimatedInputCost = countResponse.input_tokens * 0.000005; // $5/1M tokens
console.log(`Estimated input cost: $${estimatedInputCost.toFixed(4)}`);
```

# Message Batches API — TypeScript
The Batches API (`POST /v1/messages/batches`) processes Messages API requests asynchronously at 50% of standard prices.
## Key Facts
- Up to 100,000 requests or 256 MB per batch
- Most batches complete within 1 hour; maximum 24 hours
- Results available for 29 days after creation
- 50% cost reduction on all token usage
- All Messages API features supported (vision, tools, caching, etc.)
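Request arrays are usually generated from data rather than written by hand; a sketch (the topic list and `custom_id` scheme are illustrative):
```typescript
// One batch request per topic; custom_id ties each result back to its input.
const topics = ["climate change impacts", "quantum computing basics"];
const requests = topics.map((topic, i) => ({
  custom_id: `topic-${i}`,
  params: {
    model: "claude-opus-4-6",
    max_tokens: 1024,
    messages: [{ role: "user" as const, content: `Summarize ${topic}` }],
  },
}));
// Then: await client.messages.batches.create({ requests });
```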
---
## Create a Batch
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const messageBatch = await client.messages.batches.create({
requests: [
{
custom_id: "request-1",
params: {
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{ role: "user", content: "Summarize climate change impacts" },
],
},
},
{
custom_id: "request-2",
params: {
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{ role: "user", content: "Explain quantum computing basics" },
],
},
},
],
});
console.log(`Batch ID: ${messageBatch.id}`);
console.log(`Status: ${messageBatch.processing_status}`);
```
---
## Poll for Completion
```typescript
let batch;
while (true) {
batch = await client.messages.batches.retrieve(messageBatch.id);
if (batch.processing_status === "ended") break;
console.log(
`Status: ${batch.processing_status}, processing: ${batch.request_counts.processing}`,
);
await new Promise((resolve) => setTimeout(resolve, 60_000));
}
console.log("Batch complete!");
console.log(`Succeeded: ${batch.request_counts.succeeded}`);
console.log(`Errored: ${batch.request_counts.errored}`);
```
---
## Retrieve Results
```typescript
for await (const result of await client.messages.batches.results(
messageBatch.id,
)) {
switch (result.result.type) {
case "succeeded":
console.log(
`[${result.custom_id}] ${result.result.message.content[0].text.slice(0, 100)}`,
);
break;
case "errored":
if (result.result.error.type === "invalid_request") {
console.log(`[${result.custom_id}] Validation error - fix and retry`);
} else {
console.log(`[${result.custom_id}] Server error - safe to retry`);
}
break;
case "expired":
console.log(`[${result.custom_id}] Expired - resubmit`);
break;
}
}
```
---
## Cancel a Batch
```typescript
const cancelled = await client.messages.batches.cancel(messageBatch.id);
console.log(`Status: ${cancelled.processing_status}`); // "canceling"
```

# Files API — TypeScript
The Files API lets you upload files once and reference them via `file_id` in Messages API content blocks, avoiding re-uploads across multiple API calls.
**Beta:** Pass `betas: ["files-api-2025-04-14"]` in your API calls (the SDK sets the required header automatically).
## Key Facts
- Maximum file size: 500 MB
- Total storage: 100 GB per organization
- Files persist until deleted
- File operations (upload, list, delete) are free; content used in messages is billed as input tokens
- Not available on Amazon Bedrock or Google Vertex AI
---
## Upload a File
```typescript
import Anthropic, { toFile } from "@anthropic-ai/sdk";
import fs from "fs";
const client = new Anthropic();
const uploaded = await client.beta.files.upload({
file: await toFile(fs.createReadStream("report.pdf"), undefined, {
type: "application/pdf",
}),
betas: ["files-api-2025-04-14"],
});
console.log(`File ID: ${uploaded.id}`);
console.log(`Size: ${uploaded.size_bytes} bytes`);
```
---
## Use a File in Messages
### PDF / Text Document
```typescript
const response = await client.beta.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: [
{ type: "text", text: "Summarize the key findings in this report." },
{
type: "document",
source: { type: "file", file_id: uploaded.id },
title: "Q4 Report",
citations: { enabled: true },
},
],
},
],
betas: ["files-api-2025-04-14"],
});
console.log(response.content[0].text);
```
---
## Manage Files
### List Files
```typescript
const files = await client.beta.files.list({
betas: ["files-api-2025-04-14"],
});
for (const f of files.data) {
console.log(`${f.id}: ${f.filename} (${f.size_bytes} bytes)`);
}
```
### Delete a File
```typescript
await client.beta.files.delete("file_011CNha8iCJcU1wXNR6q4V8w", {
betas: ["files-api-2025-04-14"],
});
```
### Download a File
```typescript
const response = await client.beta.files.download(
"file_011CNha8iCJcU1wXNR6q4V8w",
{ betas: ["files-api-2025-04-14"] },
);
const content = Buffer.from(await response.arrayBuffer());
await fs.promises.writeFile("output.txt", content);
```

# Streaming — TypeScript
## Quick Start
```typescript
const stream = client.messages.stream({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a story" }],
});
for await (const event of stream) {
if (
event.type === "content_block_delta" &&
event.delta.type === "text_delta"
) {
process.stdout.write(event.delta.text);
}
}
```
---
## Handling Different Content Types
> **Opus 4.6:** Use `thinking: {type: "adaptive"}`. On older models, use `thinking: {type: "enabled", budget_tokens: N}` instead.
```typescript
const stream = client.messages.stream({
model: "claude-opus-4-6",
max_tokens: 16000,
thinking: { type: "adaptive" },
messages: [{ role: "user", content: "Analyze this problem" }],
});
for await (const event of stream) {
switch (event.type) {
case "content_block_start":
switch (event.content_block.type) {
case "thinking":
console.log("\n[Thinking...]");
break;
case "text":
console.log("\n[Response:]");
break;
}
break;
case "content_block_delta":
switch (event.delta.type) {
case "thinking_delta":
process.stdout.write(event.delta.thinking);
break;
case "text_delta":
process.stdout.write(event.delta.text);
break;
}
break;
}
}
```
---
## Streaming with Tool Use (Tool Runner)
Use the tool runner with `stream: true`. The outer loop iterates over tool runner iterations (messages), the inner loop processes stream events:
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { betaZodTool } from "@anthropic-ai/sdk/helpers/beta/zod";
import { z } from "zod";
const client = new Anthropic();
const getWeather = betaZodTool({
name: "get_weather",
description: "Get current weather for a location",
inputSchema: z.object({
location: z.string().describe("City and state, e.g., San Francisco, CA"),
}),
run: async ({ location }) => `72°F and sunny in ${location}`,
});
const runner = client.beta.messages.toolRunner({
model: "claude-opus-4-6",
max_tokens: 4096,
tools: [getWeather],
messages: [
{ role: "user", content: "What's the weather in Paris and London?" },
],
stream: true,
});
// Outer loop: each tool runner iteration
for await (const messageStream of runner) {
// Inner loop: stream events for this iteration
for await (const event of messageStream) {
switch (event.type) {
case "content_block_delta":
switch (event.delta.type) {
case "text_delta":
process.stdout.write(event.delta.text);
break;
case "input_json_delta":
// Tool input being streamed
break;
}
break;
}
}
}
```
---
## Getting the Final Message
```typescript
const stream = client.messages.stream({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello" }],
});
for await (const event of stream) {
// Process events...
}
const finalMessage = await stream.finalMessage();
console.log(`Tokens used: ${finalMessage.usage.output_tokens}`);
```
---
## Stream Event Types
| Event Type | Description | When it fires |
| --------------------- | --------------------------- | --------------------------------- |
| `message_start` | Contains message metadata | Once at the beginning |
| `content_block_start` | New content block beginning | When a text/tool_use block starts |
| `content_block_delta` | Incremental content update | For each token/chunk |
| `content_block_stop` | Content block complete | When a block finishes |
| `message_delta` | Message-level updates | Contains `stop_reason`, usage |
| `message_stop` | Message complete | Once at the end |
## Best Practices
1. **Always flush output** — Use `process.stdout.write()` for immediate display
2. **Handle partial responses** — If the stream is interrupted, you may have incomplete content
3. **Track token usage** — The `message_delta` event contains usage information
4. **Use `finalMessage()`** — Get the complete `Anthropic.Message` object even when streaming. Don't wrap `.on()` events in `new Promise()`; `finalMessage()` handles all completion/error/abort states internally
5. **Buffer for web UIs** — Consider buffering a few tokens before rendering to avoid excessive DOM updates
6. **Use `stream.on("text", ...)` for deltas** — The `text` event provides just the delta string, simpler than manually filtering `content_block_delta` events
7. **For agentic loops with streaming** — See the [Streaming Manual Loop](./tool-use.md#streaming-manual-loop) section in tool-use.md for combining `stream()` + `finalMessage()` with a tool-use loop
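A minimal sketch combining the `text` event with `finalMessage()`:
```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();
const stream = client.messages.stream({
  model: "claude-opus-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a haiku about the sea" }],
});

// "text" fires with just the delta string; no manual event filtering needed.
stream.on("text", (delta) => process.stdout.write(delta));

const final = await stream.finalMessage();
console.log(`\nStop reason: ${final.stop_reason}, output tokens: ${final.usage.output_tokens}`);
```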
## Raw SSE Format
If using raw HTTP (not SDKs), the stream returns Server-Sent Events:
```
event: message_start
data: {"type":"message_start","message":{"id":"msg_...","type":"message",...}}
event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello"}}
event: content_block_stop
data: {"type":"content_block_stop","index":0}
event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"output_tokens":12}}
event: message_stop
data: {"type":"message_stop"}
```

# Tool Use — TypeScript
For conceptual overview (tool definitions, tool choice, tips), see [shared/tool-use-concepts.md](../../shared/tool-use-concepts.md).
## Tool Runner (Recommended)
**Beta:** The tool runner is in beta in the TypeScript SDK.
Use `betaZodTool` with Zod schemas to define tools with a `run` function, then pass them to `client.beta.messages.toolRunner()`:
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { betaZodTool } from "@anthropic-ai/sdk/helpers/beta/zod";
import { z } from "zod";
const client = new Anthropic();
const getWeather = betaZodTool({
name: "get_weather",
description: "Get current weather for a location",
inputSchema: z.object({
location: z.string().describe("City and state, e.g., San Francisco, CA"),
unit: z.enum(["celsius", "fahrenheit"]).optional(),
}),
run: async (input) => {
// Your implementation here
return `72°F and sunny in ${input.location}`;
},
});
// The tool runner handles the agentic loop and returns the final message
const finalMessage = await client.beta.messages.toolRunner({
model: "claude-opus-4-6",
max_tokens: 4096,
tools: [getWeather],
messages: [{ role: "user", content: "What's the weather in Paris?" }],
});
console.log(finalMessage.content);
```
**Key benefits of the tool runner:**
- No manual loop — the SDK handles calling tools and feeding results back
- Type-safe tool inputs via Zod schemas
- Tool schemas are generated automatically from Zod definitions
- Iteration stops automatically when Claude has no more tool calls
---
## Manual Agentic Loop
Use this when you need fine-grained control (custom logging, conditional tool execution, streaming individual iterations, human-in-the-loop approval):
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const tools: Anthropic.Tool[] = [...]; // Your tool definitions
let messages: Anthropic.MessageParam[] = [{ role: "user", content: userInput }];
while (true) {
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 4096,
tools: tools,
messages: messages,
});
if (response.stop_reason === "end_turn") break;
// Server-side tool hit iteration limit; re-send to continue
if (response.stop_reason === "pause_turn") {
messages = [
{ role: "user", content: userInput },
{ role: "assistant", content: response.content },
];
continue;
}
const toolUseBlocks = response.content.filter(
(b): b is Anthropic.ToolUseBlock => b.type === "tool_use",
);
messages.push({ role: "assistant", content: response.content });
const toolResults: Anthropic.ToolResultBlockParam[] = [];
for (const tool of toolUseBlocks) {
const result = await executeTool(tool.name, tool.input);
toolResults.push({
type: "tool_result",
tool_use_id: tool.id,
content: result,
});
}
messages.push({ role: "user", content: toolResults });
}
```
### Streaming Manual Loop
Use `client.messages.stream()` + `finalMessage()` instead of `.create()` when you need streaming within a manual loop. Text deltas are streamed on each iteration; `finalMessage()` collects the complete `Message` so you can inspect `stop_reason` and extract tool-use blocks:
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const tools: Anthropic.Tool[] = [...];
let messages: Anthropic.MessageParam[] = [{ role: "user", content: userInput }];
while (true) {
const stream = client.messages.stream({
model: "claude-opus-4-6",
max_tokens: 4096,
tools,
messages,
});
// Stream text deltas on each iteration
stream.on("text", (delta) => {
process.stdout.write(delta);
});
// finalMessage() resolves with the complete Message — no need to
// manually wire up .on("message") / .on("error") / .on("abort")
const message = await stream.finalMessage();
if (message.stop_reason === "end_turn") break;
// Server-side tool hit iteration limit; re-send to continue
if (message.stop_reason === "pause_turn") {
messages = [
{ role: "user", content: userInput },
{ role: "assistant", content: message.content },
];
continue;
}
const toolUseBlocks = message.content.filter(
(b): b is Anthropic.ToolUseBlock => b.type === "tool_use",
);
messages.push({ role: "assistant", content: message.content });
const toolResults: Anthropic.ToolResultBlockParam[] = [];
for (const tool of toolUseBlocks) {
const result = await executeTool(tool.name, tool.input);
toolResults.push({
type: "tool_result",
tool_use_id: tool.id,
content: result,
});
}
messages.push({ role: "user", content: toolResults });
}
```
> **Important:** Don't wrap `.on()` events in `new Promise()` to collect the final message — use `stream.finalMessage()` instead. The SDK handles all error/abort/completion states internally.

> **Error handling in the loop:** Use the SDK's typed exceptions (e.g., `Anthropic.RateLimitError`, `Anthropic.APIError`) — see [Error Handling](./README.md#error-handling) for examples. Don't check error messages with string matching.

> **SDK types:** Use `Anthropic.MessageParam`, `Anthropic.Tool`, `Anthropic.ToolUseBlock`, `Anthropic.ToolResultBlockParam`, `Anthropic.Message`, etc. for all API-related data structures. Don't redefine equivalent interfaces.
---
## Handling Tool Results
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
tools: tools,
messages: [{ role: "user", content: "What's the weather in Paris?" }],
});
for (const block of response.content) {
if (block.type === "tool_use") {
const result = await executeTool(block.name, block.input);
const followup = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
tools: tools,
messages: [
{ role: "user", content: "What's the weather in Paris?" },
{ role: "assistant", content: response.content },
{
role: "user",
content: [
{ type: "tool_result", tool_use_id: block.id, content: result },
],
},
],
});
}
}
```
---
## Tool Choice
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
tools: tools,
tool_choice: { type: "tool", name: "get_weather" },
messages: [{ role: "user", content: "What's the weather in Paris?" }],
});
```
---
## Code Execution
### Basic Usage
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 4096,
messages: [
{
role: "user",
content:
"Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]",
},
],
tools: [{ type: "code_execution_20260120", name: "code_execution" }],
});
```
### Upload Files for Analysis
```typescript
import Anthropic, { toFile } from "@anthropic-ai/sdk";
import { createReadStream } from "fs";
const client = new Anthropic();
// 1. Upload a file
const uploaded = await client.beta.files.upload({
file: await toFile(createReadStream("sales_data.csv"), undefined, {
type: "text/csv",
}),
betas: ["files-api-2025-04-14"],
});
// 2. Pass to code execution
// Code execution is GA; Files API is still beta (pass via RequestOptions)
const response = await client.messages.create(
{
model: "claude-opus-4-6",
max_tokens: 4096,
messages: [
{
role: "user",
content: [
{
type: "text",
text: "Analyze this sales data. Show trends and create a visualization.",
},
{ type: "container_upload", file_id: uploaded.id },
],
},
],
tools: [{ type: "code_execution_20260120", name: "code_execution" }],
},
{ headers: { "anthropic-beta": "files-api-2025-04-14" } },
);
```
### Retrieve Generated Files
```typescript
import path from "path";
import fs from "fs";
const OUTPUT_DIR = "./claude_outputs";
await fs.promises.mkdir(OUTPUT_DIR, { recursive: true });
for (const block of response.content) {
if (block.type === "bash_code_execution_tool_result") {
const result = block.content;
if (result.type === "bash_code_execution_result" && result.content) {
for (const fileRef of result.content) {
if (fileRef.type === "bash_code_execution_output") {
const metadata = await client.beta.files.retrieveMetadata(
fileRef.file_id,
);
          const download = await client.beta.files.download(fileRef.file_id);
          const fileBytes = Buffer.from(await download.arrayBuffer());
const safeName = path.basename(metadata.filename);
if (!safeName || safeName === "." || safeName === "..") {
console.warn(`Skipping invalid filename: ${metadata.filename}`);
continue;
}
const outputPath = path.join(OUTPUT_DIR, safeName);
await fs.promises.writeFile(outputPath, fileBytes);
console.log(`Saved: ${outputPath}`);
}
}
}
}
}
```
### Container Reuse
```typescript
// First request: set up environment
const response1 = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 4096,
messages: [
{
role: "user",
content: "Install tabulate and create data.json with sample user data",
},
],
tools: [{ type: "code_execution_20260120", name: "code_execution" }],
});
// Reuse container
const containerId = response1.container.id;
const response2 = await client.messages.create({
container: containerId,
model: "claude-opus-4-6",
max_tokens: 4096,
messages: [
{
role: "user",
content: "Read data.json and display as a formatted table",
},
],
tools: [{ type: "code_execution_20260120", name: "code_execution" }],
});
```
---
## Memory Tool
### Basic Usage
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 2048,
messages: [
{
role: "user",
content: "Remember that my preferred language is TypeScript.",
},
],
tools: [{ type: "memory_20250818", name: "memory" }],
});
```
### SDK Memory Helper
Use `betaMemoryTool` with a `MemoryToolHandlers` implementation:
```typescript
import {
betaMemoryTool,
type MemoryToolHandlers,
} from "@anthropic-ai/sdk/helpers/beta/memory";
const handlers: MemoryToolHandlers = {
  async view(command) { /* ... */ },
  async create(command) { /* ... */ },
  async str_replace(command) { /* ... */ },
  async insert(command) { /* ... */ },
  async delete(command) { /* ... */ },
  async rename(command) { /* ... */ },
};
const memory = betaMemoryTool(handlers);
const runner = client.beta.messages.toolRunner({
model: "claude-opus-4-6",
max_tokens: 2048,
tools: [memory],
messages: [{ role: "user", content: "Remember my preferences" }],
});
for await (const message of runner) {
console.log(message);
}
```
For full implementation examples, use WebFetch:
- `https://github.com/anthropics/anthropic-sdk-typescript/blob/main/examples/tools-helpers-memory.ts`
---
## Structured Outputs
### JSON Outputs (Zod — Recommended)
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { z } from "zod";
import { zodOutputFormat } from "@anthropic-ai/sdk/helpers/zod";
const ContactInfoSchema = z.object({
name: z.string(),
email: z.string(),
plan: z.string(),
interests: z.array(z.string()),
demo_requested: z.boolean(),
});
const client = new Anthropic();
const response = await client.messages.parse({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content:
"Extract: Jane Doe (jane@co.com) wants Enterprise, interested in API and SDKs, wants a demo.",
},
],
output_config: {
format: zodOutputFormat(ContactInfoSchema),
},
});
console.log(response.parsed_output.name); // "Jane Doe"
```
### Strict Tool Use
```typescript
const response = await client.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: "Book a flight to Tokyo for 2 passengers on March 15",
},
],
tools: [
{
name: "book_flight",
description: "Book a flight to a destination",
strict: true,
input_schema: {
type: "object",
properties: {
destination: { type: "string" },
date: { type: "string", format: "date" },
passengers: {
type: "integer",
enum: [1, 2, 3, 4, 5, 6, 7, 8],
},
},
required: ["destination", "date", "passengers"],
additionalProperties: false,
},
},
],
});
```