
Conversation

@PedramNavid
Collaborator

⚠️ TEST PR - DO NOT MERGE

This is a test PR to verify the notebook diff comment workflow works correctly.

What's being tested

  • The notebook-diff-comment.yml workflow should trigger
  • It should detect the changed notebook
  • It should post a comment with nbdime diffs
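The detection step above can be sketched as a filter over the PR's changed files. This is an illustrative assumption about how the workflow behaves, not code taken from notebook-diff-comment.yml:

```python
# Hypothetical sketch of the notebook-detection step: keep only .ipynb
# paths from the list of files changed in the PR.
from pathlib import PurePosixPath

def changed_notebooks(changed_files):
    """Keep only Jupyter notebooks from a list of changed file paths."""
    return [f for f in changed_files if PurePosixPath(f).suffix == ".ipynb"]

files = [
    "claude_agent_sdk/00_The_one_liner_research_agent.ipynb",
    ".github/workflows/notebook-diff-comment.yml",
]
print(changed_notebooks(files))  # → ['claude_agent_sdk/00_The_one_liner_research_agent.ipynb']
```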

Expected behavior

A comment should appear below with collapsible sections showing the diffs for:

  • claude_agent_sdk/00_The_one_liner_research_agent.ipynb

Changes in this test

  • Modified title to include (TEST VERSION)
  • Added test markdown cell at the end

Once verified, this PR will be closed without merging.
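The diffs the workflow posts come from nbdime's cell-level comparison. A minimal sketch of the idea behind its "modified /cells/N/source" lines (real nbdime also diffs outputs, metadata, and inserted or deleted cells; the cell dicts below are hand-written stand-ins for parsed .ipynb JSON):

```python
# Minimal sketch of a cell-source comparison between two notebook dicts.
def modified_cells(before, after):
    """Indices of cells whose source differs between two notebook dicts."""
    pairs = zip(before["cells"], after["cells"])
    return [i for i, (a, b) in enumerate(pairs) if a.get("source") != b.get("source")]

before = {"cells": [{"source": "# Building a One-Liner Research Agent"}]}
after = {"cells": [{"source": "# Building a One-Liner Research Agent (TEST VERSION)"}]}
print(modified_cells(before, after))  # → [0]
```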

🤖 Generated with Claude Code

@github-actions

github-actions bot commented Nov 11, 2025

Summary

Status Count
🔍 Total 7
✅ Successful 0
⏳ Timeouts 0
🔀 Redirected 0
👻 Excluded 0
❓ Unknown 0
🚫 Errors 7
⛔ Unsupported 0

Errors per input

Errors in temp_md/00_The_one_liner_research_agent.md

@PedramNavid force-pushed the test-notebook-diff-comment branch from a6c3815 to 9f98feb on November 11, 2025 at 19:33
@github-actions

Notebook Changes

This PR modifies the following notebooks:

📓 claude_agent_sdk/00_The_one_liner_research_agent.ipynb

View diff
nbdiff claude_agent_sdk/00_The_one_liner_research_agent.ipynb (2c30b735259b795b18d93dc8e3b6ab24c8073720) claude_agent_sdk/00_The_one_liner_research_agent.ipynb (9f98feb6009427f37382a013559e5e0fa54c4f59)
--- claude_agent_sdk/00_The_one_liner_research_agent.ipynb (2c30b735259b795b18d93dc8e3b6ab24c8073720)  (no timestamp)
+++ claude_agent_sdk/00_The_one_liner_research_agent.ipynb (9f98feb6009427f37382a013559e5e0fa54c4f59)  (no timestamp)
## modified /cells/0/source:
@@ -1,4 +1,4 @@
-# Building a One-Liner Research Agent
+# Building a One-Liner Research Agent (TEST VERSION)
 
 Research tasks consume hours of expert time: market analysts manually gathering competitive intelligence, legal teams tracking regulatory changes, engineers investigating bug reports across documentation. The core challenge isn't finding information but knowing what to search for next based on what you just discovered.
 

## modified /cells/2/source:
@@ -1,3 +1,2 @@
-
 %%capture
-%pip install -U claude-agent-sdk python-dotenv
+%pip install -U claude-agent-sdk python-dotenv

## modified /cells/6/source:
@@ -7,4 +7,4 @@ async for msg in query(
     options=ClaudeAgentOptions(model="claude-sonnet-4-5", allowed_tools=["WebSearch"]),
 ):
     print_activity(msg)
-    messages.append(msg)
+    messages.append(msg)

## inserted before /cells/22:
+  markdown cell:
+    id: df21d185
+    source:
+      ## Test Addition
+      
+      This is a test change to demonstrate the notebook diff workflow.

Generated by nbdime

@PedramNavid force-pushed the test-notebook-diff-comment branch from 9f98feb to 1c0cb02 on November 11, 2025 at 19:35
@github-actions

Notebook Review: 00_The_one_liner_research_agent.ipynb

What Looks Good

Strong Educational Structure

  • Clear learning objectives upfront
  • Progressive complexity from one-liner to stateful to production-ready
  • Excellent motivation section explaining why research agents matter
  • Prerequisites section sets proper expectations

Technical Quality

  • Working code examples with proper async patterns
  • Good visualization using helper functions
  • Includes discussion of stateless vs stateful agents
  • Production considerations addressed

Documentation

  • Inline explanations after major code blocks
  • Clear tool permission breakdown
  • Proper linking to next notebook

Suggestions for Improvement

  1. Output Verification - Consider noting when outputs were last run
  2. Missing Error Handling - Add discussion of failure modes and error handling
  3. Environment Setup - Add code cell to verify ANTHROPIC_API_KEY is set
  4. Cost Context - Brief note about cost optimization factors
  5. File Path Assumptions - Verify research_agent/ paths exist or link to code
  6. Incomplete Execution - Last 3 code cells show no output
  7. Test Addition Section - Remove leftover test content at end
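Suggestion 3 above could be implemented as a setup cell that fails fast when the key is missing. This is a sketch, not the notebook's code; the notebook loads .env via python-dotenv, so load_dotenv() would normally run before this check:

```python
# Hypothetical environment check: raise early, with an actionable
# message, if the required API key is not set.
import os

def check_api_key(var="ANTHROPIC_API_KEY"):
    """Raise early if the required API key is not in the environment."""
    if not os.environ.get(var):
        raise RuntimeError(f"{var} is not set; add it to your .env file")
    return True
```

Dropping this into the environment-setup cell turns a confusing downstream auth error into an immediate, explicit one.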

Critical Issues (MUST FIX)

1. Remove Test Content

Location: Final markdown cell (id: df21d185)
The Test Addition section must be deleted; it is clearly test content not intended for production.

2. Title Inconsistency

Location: First cell (id: 0d4a77a4)
Remove (TEST VERSION) from the title; it should not appear in the published notebook.

Overall Assessment

Quality Score: 8/10 (after removing test content)

Well-structured educational notebook with excellent progression and code quality. Needs cleanup to remove test artifacts before publication.

Recommendation: Request changes - Remove test content and title suffix

Quick Fixes Checklist

  • Remove (TEST VERSION) from title
  • Delete Test Addition section at end
  • Verify all code cells execute without errors
  • Confirm research_agent/ files exist
  • Consider adding error handling discussion
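The first two checklist items lend themselves to an automated check: scan the parsed notebook (json.load of the .ipynb file) for leftover test strings. The marker strings come from this PR's review; the helper itself is a hypothetical sketch:

```python
# Hypothetical scan for leftover test artifacts in markdown cells.
TEST_MARKERS = ("(TEST VERSION)", "## Test Addition")

def leftover_test_content(nb):
    """Return the markers still present in any markdown cell of nb."""
    found = set()
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "markdown":
            continue
        source = "".join(cell.get("source", []))
        found.update(m for m in TEST_MARKERS if m in source)
    return sorted(found)
```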

@github-actions

Notebook Changes

This PR modifies the following notebooks:

📓 claude_agent_sdk/00_The_one_liner_research_agent.ipynb

View diff
nbdiff claude_agent_sdk/00_The_one_liner_research_agent.ipynb (2c30b735259b795b18d93dc8e3b6ab24c8073720) claude_agent_sdk/00_The_one_liner_research_agent.ipynb (1c0cb025b6c4da52986d520bec83b5058c2b056b)
--- claude_agent_sdk/00_The_one_liner_research_agent.ipynb (2c30b735259b795b18d93dc8e3b6ab24c8073720)  (no timestamp)
+++ claude_agent_sdk/00_The_one_liner_research_agent.ipynb (1c0cb025b6c4da52986d520bec83b5058c2b056b)  (no timestamp)
## modified /cells/2/source:
@@ -1,3 +1,2 @@
-
 %%capture
-%pip install -U claude-agent-sdk python-dotenv
+%pip install -U claude-agent-sdk python-dotenv

## modified /cells/6/source:
@@ -7,4 +7,4 @@ async for msg in query(
     options=ClaudeAgentOptions(model="claude-sonnet-4-5", allowed_tools=["WebSearch"]),
 ):
     print_activity(msg)
-    messages.append(msg)
+    messages.append(msg)

Generated by nbdime

@PedramNavid force-pushed the test-notebook-diff-comment branch from 1c0cb02 to d290da3 on November 11, 2025 at 19:37
@github-actions

Notebook Changes

This PR modifies the following notebooks:

📓 claude_agent_sdk/00_The_one_liner_research_agent.ipynb

View diff
nbdiff claude_agent_sdk/00_The_one_liner_research_agent.ipynb (2c30b735259b795b18d93dc8e3b6ab24c8073720) claude_agent_sdk/00_The_one_liner_research_agent.ipynb (d290da31869dcffdedecfe6e083da4114d15cbe1)
--- claude_agent_sdk/00_The_one_liner_research_agent.ipynb (2c30b735259b795b18d93dc8e3b6ab24c8073720)  (no timestamp)
+++ claude_agent_sdk/00_The_one_liner_research_agent.ipynb (d290da31869dcffdedecfe6e083da4114d15cbe1)  (no timestamp)
## inserted before /cells/0:
+  code cell:
+    id: 94449849
+    execution_count: 1
+    source:
+      from dotenv import load_dotenv
+      from utils.agent_visualizer import print_activity
+      
+      from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient, query
+      
+      load_dotenv()
+    outputs:
+      output 0:
+        output_type: execute_result
+        execution_count: 1
+        data:
+          text/plain: True

## modified /cells/0/source:
@@ -1,22 +1,13 @@
-# Building a One-Liner Research Agent
+# 00 - The One-Liner Research Agent
 
-Research tasks consume hours of expert time: market analysts manually gathering competitive intelligence, legal teams tracking regulatory changes, engineers investigating bug reports across documentation. The core challenge isn't finding information but knowing what to search for next based on what you just discovered.
+PREFACE: We highly recommend reading [Building effective agents](https://www.anthropic.com/engineering/building-effective-agents) or [How we built our multi-agent research system](https://www.anthropic.com/engineering/built-multi-agent-research-system) in case you haven't. They are great reads and we will assume some basic understanding of agents! 
 
-The Claude Agent SDK makes it possible to build agents that autonomously explore external systems without a predefined workflow. Unlike traditional workflow automations that follow fixed steps, research agents adapt their strategy based on what they find--following promising leads, synthesizing conflicting sources, and knowing when they have enough information to answer the question.
+In this notebook we build our own (re)search agent, which is inherently a great use-case because of a few reasons:
+- The input to our system is not sufficient to produce an output, meaning there needs to be interaction with external systems (e.g., the internet)
+- There is no predefined workflow we can use since it is unclear what the agent will discover during its research
 
-## By the end of this cookbook, you'll be able to:
+Instead, a research agent requires the flexibility to explore unexpected leads and change direction based on what it finds. In its simplest form, a research agent can be an agent that simply searches the internet and summarizes it for you. 
 
-- Build a research agent that autonomously searches and synthesizes information with a few lines of code
+Below, we'll implement a basic research agent with just a few lines of code. We provide Claude with exactly one tool which the Claude Code SDK contains straight out of the box: [web search tool](https://docs.claude.com/en/docs/agents-and-tools/tool-use/web-search-tool). 
 
-This foundation applies to any task where the information needed isn't available upfront: competitive analysis, technical troubleshooting, investment research, or literature reviews.
-
-# Why Research Agents?
-
-Research is an ideal agentic use case for two reasons:
-
-1. **Information isn't self-contained**. The input question alone doesn't contain the answer. The agent must interact with external systems (search engines, databases, APIs) to gather what it needs.
-2. **The path emerges during exploration**. You can't predetermine the workflow. Whether an agent should search for company financials or regulatory filings depends on what it discovers about the business model. The optimal strategy reveals itself through investigation.
-
-In its simplest form, a research agent searches the web and synthesizes findings. Below, we'll build exactly that with the Claude Agent SDK's built-in web search tool in just a few lines of code.
-
-Note: You can also view the full list of [Claude Code's built-in tools](https://docs.claude.com/en/docs/claude-code/settings#tools-available-to-claude)
+> Check [here](https://docs.claude.com/en/docs/claude-code/settings#tools-available-to-claude) for a list of Claude Code's readily available tools

## deleted /cells/1-5:
-  markdown cell:
-    id: 301fb086
-    source:
-      # Prerequisites
-      
-      Before following this guide, ensure you have:
-      
-      **Required Knowledge**
-      
-      * Python fundamentals - comfortable with async/await, functions, and basic data structures
-      * Basic understanding of agentic patterns - we recommend reading [Building effective agents](https://www.anthropic.com/engineering/building-effective-agents) first if you're new to agents
-      
-      **Required Tools**
-      
-      * Python 3.11 or higher
-      * Anthropic API key [(get one here)](https://console.anthropic.com)
-      
-      **Recommended:**
-      * Familiarity with the Claude Agent SDK concepts
-      * Understanding of tool use patterns in LLMs
-      
-      
-      ## Setup
-      
-      First, install the required dependencies:
-  code cell:
-    id: ab9830f9
-    execution_count: 1
-    source:
-      
-      %%capture
-      %pip install -U claude-agent-sdk python-dotenv
-  markdown cell:
-    id: d88272cf
-    source:
-      Note: Ensure your .env file contains:
-      
-      ```bash
-      ANTHROPIC_API_KEY=your_key_here
-      ```
-      
-      Load your environment variables and configure the client:
-  code cell:
-    id: c41abcdf
-    source:
-      from dotenv import load_dotenv
-      
-      load_dotenv()
-      
-      MODEL = "claude-sonnet-4-5"
-  markdown cell:
-    id: 041415b8
-    source:
-      ## Building Your First Research Agent
-      
-      Let's start with the simplest possible implementation: a research agent that can search the web and synthesize findings. With the Claude Agent SDK, this takes just a few lines of code.
-      
-      The key is the query() function, which creates a stateless agent interaction. We'll provide Claude with a single tool, WebSearch, and let it autonomously decide when and how to use it based on our research question.

## replaced /cells/6/execution_count:
-  5
+  2

## modified /cells/6/outputs/0/text:
@@ -1,6 +1,4 @@
 🤖 Thinking...
 🤖 Using: WebSearch()
-🤖 Using: WebSearch()
-✓ Tool completed
 ✓ Tool completed
 🤖 Thinking...

## modified /cells/6/source:
@@ -1,10 +1,7 @@
-from claude_agent_sdk import ClaudeAgentOptions, query
-from utils.agent_visualizer import print_activity, print_final_result
-
 messages = []
 async for msg in query(
     prompt="Research the latest trends in AI agents and give me a brief summary",
     options=ClaudeAgentOptions(model="claude-sonnet-4-5", allowed_tools=["WebSearch"]),
 ):
     print_activity(msg)
-    messages.append(msg)
+    messages.append(msg)

## inserted before /cells/7:
+  code cell:
+    id: 293437f4
+    source:
+      print(
+          f"\nResult:\n{messages[-1].result if hasattr(messages[-1], 'result') and messages[-1].result else messages[-2].content[0].text}"
+      )

## deleted /cells/7-8:
-  code cell:
-    id: 8f57e1ec
-    execution_count: 6
-    source:
-      print_final_result(messages)
-    outputs:
-      output 0:
-        output_type: stream
-        name: stdout
-        text:
-          
-          📝 Final Result:
-          ## Brief Summary: Latest Trends in AI Agents (2025)
-          
-          Based on current research, here are the key trends shaping AI agents in 2025:
-          
-          ### 🚀 **Explosive Growth & Adoption**
-          - Market projected to reach **$7.38 billion by end of 2025** (doubling from $3.7B in 2023)
-          - **85% of organizations** have integrated AI agents into at least one workflow
-          - **99% of enterprise developers** are exploring or building AI agents
-          
-          ### 🎯 **Key Technical Trends**
-          
-          1. **Agentic RAG** - Goal-driven systems that combine retrieval, reasoning, and autonomy for smarter assistants
-          
-          2. **Multi-Agent Systems** - The "orchestra approach" where specialized agents collaborate on complex tasks
-          
-          3. **Industry Specialization** - Moving beyond general assistants to domain experts (AI lawyers, radiologists, etc.)
-          
-          4. **Enhanced Autonomy** - Agents with memory, planning, reasoning, and self-correction capabilities
-          
-          5. **Interoperability Standards** - New protocols like MCP (Model Context Protocol) and A2A (Agent2Agent) enabling cross-platform communication
-          
-          ### 💼 **Real-World Impact**
-          - **30-40% productivity gains** in early enterprise deployments
-          - Autonomous task execution freeing humans for higher-value work
-          - Voice-controlled conversational agents handling complex workflows
-          - Proactive problem-solving before issues arise
-          
-          ### ⚠️ **Important Considerations**
-          - Human oversight remains critical
-          - Challenges with reliability, error handling, and security
-          - Most organizations aren't fully "agent-ready" yet
-          - Technology expected to reach maturity in 5-10 years
-          
-          **Bottom line:** 2025 is being called the "decade of AI agents," with rapid evolution from simple chatbots to autonomous, specialized problem-solvers transforming enterprise workflows.
-          
-          📊 Cost: $0.13
-          ⏱️  Duration: 42.04s
-  markdown cell:
-    id: b965c2ee
-    source:
-      
-      ## What's happening here:
-      
-      - `query()` creates a single-turn agent interaction (no conversation memory)
-      - `allowed_tools=["WebSearch"]` gives Claude permission to search the web without asking for approval
-      - The agent autonomously decides when to search, what queries to run, and how to synthesize results
-      - `print_activity()` and `print_final_result` are helper functions that show the agent's actions in real-time and print the agent's final response along with cost and duration information.
-      
-      That's it! A functional research agent in 10 lines of code. The agent will search for relevant information, follow up on promising leads, and provide a synthesized summary.

## modified /cells/9/source:
@@ -1,24 +1,7 @@
-The query() function creates a stateless agent interaction. Each call is independent—no conversation memory, no context from previous queries. This makes it perfect for one-off research tasks where you need a quick answer without maintaining state.
+And that's all it takes! Just like that we have a research agent that can go and browse the web to answer (to the best of its ability, at least) any question you throw at it.
 
-**How tool permissions work:**
+Note that in our query we provided the argument `options`. Here we define the configuration, the capabilities and limitations of our agent. For example, we provide our agent with the ability to search the web by passing ```allowed_tools=["WebSearch"]```.
 
-The `allowed_tools=["WebSearch"]` parameter gives Claude permission to search without asking for approval. This is critical for autonomous operation:
+More specifically, `allowed_tools` is a list of tools that Claude will be able to use without any approvals. The rest of the tools are still available, but Claude will ask for approval to use them. That said, certain tools like `Read` and other base read-only tools are always allowed. If you want any tool to be removed from Claude's context, add it to `disallowed_tools` instead.
 
-- `Allowed tools` - Claude can use these freely (in this case, WebSearch)
-- `Other tools` - Available but require approval before use
-- `Read-only tools` - Tools like Read are always allowed by default
-- `Disallowed tools` - Add tools to disallowed_tools to remove them entirely from Claude's context
-
-**When to use stateless queries:**
-
-- One-off research questions where context doesn't matter
-- Parallel processing of independent research tasks
-- Scenarios where you want fresh context for each query
-
-**When not to use stateless queries:**
-
-- Multi-turn investigations that build on previous findings
-- Iterative refinement of research based on initial results
-- Complex analysis requiring sustained context
-
-Let's inspect what the agent actually did using the visualize_conversation helper:
+Now, to more closely inspect the actions our agent took, we have provided the ```visualize_conversation``` function.

## replaced (type changed from int to NoneType) /cells/10/execution_count:
-  7
+  None

## deleted /cells/10/outputs/0:
-  output:
-    output_type: stream
-    name: stdout
-    text:
-      
-      ============================================================
-      🤖 AGENT CONVERSATION TIMELINE
-      ============================================================
-      
-      ⚙️  System Initialized
-         Session: 6b742cec...
-      
-      🤖 Assistant:
-         💬 I'll research the latest trends in AI agents for you.
-      
-      🤖 Assistant:
-         🔧 Using tool: WebSearch
-            Query: "latest trends AI agents 2025"
-      
-      🤖 Assistant:
-         🔧 Using tool: WebSearch
-            Query: "AI agent developments autonomous systems 2025"
-      
-      
-      
-      🤖 Assistant:
-         💬 ## Brief Summary: Latest Trends in AI Agents (2025)
-      
-      Based on current research, here are the key trends shaping AI agents in 2025:
-      
-      ### 🚀 **Explosive Growth & Adoption**
-      - Market projected to reach **$7.38 billion by end of 2025** (doubling from $3.7B in 2023)
-      - **85% of organizations** have integrated AI agents into at least one workflow
-      - **99% of enterprise developers** are exploring or building AI agents
-      
-      ### 🎯 **Key Technical Trends**
-      
-      1. **Agentic RAG** - Goal-driven systems that combine r...
-      
-      ✅ Conversation Complete
-         Turns: 3
-         Cost: $0.13
-         Duration: 42.04s
-         Tokens: 1,833
-      
-      ============================================================
-      

## modified /cells/11/source:
@@ -1,12 +1,9 @@
-## From Prototype to Production: Three Key Improvements
+### Supercharging our agent
 
-Our one-line research agent works, but it's limited. Single queries without memory can't handle iterative research ("find X, then analyze Y based on what you found"). Let's explore three ways we can further improve our implementation.
+So far, we have laid out a very simple (maybe naive) implementation to illustrate how you can start leveraging the SDK to build a research agent. However, there are various ways we can improve our agent to turn it production ready. Let's cover a few of them:
 
-**1. Conversation Memory with ClaudeSDKClient**: Stateless queries can't build on previous findings. If you ask "What are the top AI startups?" then "How are they funded?", the second query has no context about which startups you mean. We can use `ClaudeSDKClient` to maintain conversation history across multiple queries.
+1. Notice how before we only sent one query? In many systems, a human will look at the output of the system, potentially assigning a follow up task. Just like text completions, if we want to send multiple queries to the agent (e.g., 1. analyze abc, 2. make xyz based on your analysis) we would have to copy over the entire analysis context in our second query. Instead, we can **[use the ClaudeSDKClient](https://docs.claude.com/en/docs/claude-code/sdk/sdk-python#1-the-claudesdkclient-class-recommended)** to maintain the conversation context for us.
 
+2. Another great way of steering the system is **providing a system prompt**, akin to a system prompt used for text completions. To learn how to write a good system prompt for a research agent, we recommend looking [here](https://github.com/anthropics/anthropic-cookbook/tree/main/patterns/agents/prompts).
 
-**2. System Prompts for Specialized Behavior**: Research domains often have specific requirements. Financial analysis needs different rigor than tech news summaries. Use the system prompt to encode your research standards, preferred sources, or output format. See our [agent prompting guide](https://github.com/anthropics/anthropic-cookbook/tree/main/patterns/agents/prompts) for research-specific examples.
-
-**3. Multimodal Research with the Read Tool**: Real research isn't just text. Market reports have charts, technical docs have diagrams, competitive analysis requires screenshot comparison. Enable the `Read` tool so Claude can analyze images, PDFs, and other visual content.
-
-Let's implement these three changes for our research agent.
+3. **Leveraging the `Read` tool** to enable multimodal input. This tool allows Claude to analyze charts, infographics, and complex system diagrams.

## replaced /cells/12/execution_count:
-  9
+  5

## modified /cells/12/outputs/0/text:
@@ -2,11 +2,13 @@
 🤖 Using: Read()
 ✓ Tool completed
 🤖 Thinking...
-🤖 Using: Glob()
+🤖 Using: Bash()
 ✓ Tool completed
+🤖 Thinking...
 🤖 Using: Read()
 ✓ Tool completed
 🤖 Thinking...
+🤖 Thinking...
 🤖 Using: WebSearch()
 ✓ Tool completed
 🤖 Thinking...

## modified /cells/12/source:
@@ -1,5 +1,3 @@
-from claude_agent_sdk import ClaudeSDKClient
-
 messages = []
 async with ClaudeSDKClient(
     options=ClaudeAgentOptions(

## deleted /cells/13:
-  markdown cell:
-    id: 6eb4ed21
-    source:
-      This example combines all three improvements: conversation memory via ClaudeSDKClient, a system prompt for AI research specialization, and the Read tool for analyzing visual content.
-      
-      In the first query call, the agent reads and analyzes a chart image using the Read tool. Next, the Agent searches the web for context about the chart's findings—and critically, it remembers what it saw in the chart from the first query
-      
-      The system prompt instruction helps the agent focus on relevant industry context.
-      
-      One key difference from the first example: The `async with ClaudeSDKClient()` context manager maintains conversation state. The second query inherits context from the first—the agent knows which chart and which insights to investigate.

## replaced (type changed from int to NoneType) /cells/14/execution_count:
-  10
+  None

## deleted /cells/14/outputs/0:
-  output:
-    output_type: stream
-    name: stdout
-    text:
-      
-      ============================================================
-      🤖 AGENT CONVERSATION TIMELINE
-      ============================================================
-      
-      ⚙️  System Initialized
-         Session: fa819270...
-      
-      🤖 Assistant:
-         💬 I'll read and analyze the chart image for you.
-      
-      🤖 Assistant:
-         🔧 Using tool: Read
-      
-      
-      🤖 Assistant:
-         💬 Let me search for the file in the research_agent directory to find the correct path.
-      
-      🤖 Assistant:
-         🔧 Using tool: Glob
-      
-      
-      🤖 Assistant:
-         🔧 Using tool: Read
-      
-      
-      🤖 Assistant:
-         💬 ## Analysis of the Chart: Types of Projects in Claude.ai and Claude Code
-      
-      This chart compares how different user groups utilize **Claude.ai** (blue dots) versus **Claude Code** (orange dots) across seven project categories. Here are the key insights:
-      
-      ### **Major Findings:**
-      
-      1. **Personal Projects** (30.2% vs 36.0%)
-         - Highest usage category for both platforms
-         - Claude Code slightly leads, suggesting users prefer the coding interface for personal work
-         - Small gap indicates similar appea...
-      
-      ✅ Conversation Complete
-         Turns: 4
-         Cost: $0.04
-         Duration: 35.30s
-         Tokens: 760
-      
-      ⚙️  System Initialized
-         Session: fa819270...
-      
-      🤖 Assistant:
-         🔧 Using tool: WebSearch
-            Query: "Claude.ai vs Claude Code usage startup work enterprise personal projects developer tools 2024 2025"
-      
-      
-      🤖 Assistant:
-         💬 ## Investigation Results: Validation of Chart Insights
-      
-      The web search confirms and expands on the patterns shown in the chart:
-      
-      ### **Why Startups Prefer Claude Code (32.9% vs 13.1%)**
-      
-      The massive gap for startup work is validated by these findings:
-      - **Automation-first approach**: Claude Code connects to command line, sees project files, modifies codebases, runs tests, and commits to GitHub autonomously
-      - **Rapid prototyping strength**: Can generate full MERN stack apps from high-level descri...
-      
-      ✅ Conversation Complete
-         Turns: 2
-         Cost: $0.15
-         Duration: 46.35s
-         Tokens: 662
-      
-      ============================================================
-      

## modified /cells/15/source:
@@ -1,11 +1,8 @@
-## Building for Production
-
-Jupyter notebooks are great for learning, but production systems need reusable modules. We've packaged the research agent into research_agent/agent.py with a clean interface:
-
-### Core functions:
+### The Research Agent leaves Jupyter
 
+Finally, to be able to use the agent outside our notebook, we must put it in a Python script. A lightweight implementation of our research agent can be found in `research_agent/agent.py`. We define three functions:
 - `print_activity()` - Shows what the agent is doing in real-time
-- `get_activity_text()` - Extract activity text for custom handlers, such as logging or monitoring
-- `send_query()` - Main entry point for research queries with built-in activity display
+- `get_activity_text()` - Extracts activity text for custom handlers
+    `send_query()` - Main function for sending and handling queries with built-in activity display
 
 This agent can now be used in any Python script!

## modified /cells/16/source:
@@ -1 +1 @@
-For independent questions where conversation context doesn't matter:
+First an example to test a one-off query to the agent:

## replaced (type changed from NoneType to int) /cells/19/execution_count:
-  None
+  8

## inserted before /cells/19/outputs/0:
+  output:
+    output_type: stream
+    name: stdout
+    text:
+      🤖 Thinking...
+      🤖 Using: WebSearch()
+      ✓ Tool completed
+      🤖 Thinking...
+      
+      -----
+      
+      Initial research: Anthropic is an AI safety and research company founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company develops Claude, a family of large language models (LLMs) designed to be helpful, harmless, and honest.
+      
+      **Key points:**
+      - **Mission**: Build reliable, interpretable, and steerable AI systems with a focus on AI safety
+      - **Main product**: Claude AI assistant (which you're currently using!)
+      - **Structure**: Public benefit corporation balancing profit with humanity's long-term benefit
+      - **Funding**: Backed by major investments from Amazon ($8B total) and Google ($2B)
+      - **Focus areas**: AI safety, natural language processing, human feedback, and responsible AI development
+      
+      Anthropic positions itself as a "safety-first" AI lab, emphasizing the responsible development of AI systems to serve humanity's long-term well-being.
+      

## modified /cells/21/source:
@@ -1,35 +1,9 @@
 ## Conclusion
 
-### What You Built
+We've demonstrated how the Claude Code SDK enables you to build a functional research agent in just a few lines of code. By leveraging the built-in WebSearch tool, we created an agent capable of autonomous information gathering and synthesis. We also explored how the
+ClaudeSDKClient maintains conversation context across multiple queries and how to incorporate multimodal capabilities through the Read tool.
 
-In this cookbook, you built three progressively sophisticated research agents:
+This foundation in basic agentic workflows prepares you for more sophisticated implementations. In the next notebook, we'll advance to building a Chief of Staff agent that coordinates multiple specialized subagents, implements custom output styles for different
+stakeholders, and uses hooks for governance and compliance tracking.
 
-- Stateless research agent - One-line queries for independent research tasks
-- Stateful agent with memory - Multi-turn investigations that build on previous findings
-- Production module - Reusable research functions for integration into applications
-
-### Key Takeaways
-
-**When to use stateless queries (query()):**
-
-- Independent research questions
-- Parallel processing of unrelated tasks
-- Scenarios requiring fresh context each time
-
-**When to use stateful agents (ClaudeSDKClient):**
-
-- Multi-turn investigations building on previous findings
-- Iterative refinement of research
-- Complex analysis requiring sustained context
-
-Research agents excel when information isn't self-contained and the optimal workflow emerges during exploration—competitive analysis, technical troubleshooting, literature reviews, and investigative journalism all fit this pattern.
-
-### Next Steps
-
-This foundation in autonomous research prepares you for enterprise-grade multi-agent systems. In the next notebook, you'll learn to:
-
-Orchestrate specialized subagents under a coordinating agent
-Implement governance through hooks and custom commands
-Adapt output styles for different stakeholders (executives vs. technical teams)
-
-Next: [01_The_chief_of_staff_agent.ipynb](01_The_chief_of_staff_agent.ipynb) - From single agents to multi-agent orchestration.
+Next: [01_The_chief_of_staff_agent.ipynb](01_The_chief_of_staff_agent.ipynb) - Learn how to orchestrate complex multi-agent systems with enterprise-grade features.

## modified /metadata/kernelspec/display_name:
-  anthropic-cookbook (3.12.12)
+  Python (cc-sdk-tutorial)

## modified /metadata/kernelspec/name:
-  python3
+  cc-sdk-tutorial

## modified /metadata/language_info/version:
-  3.12.12
+  3.11.13

Generated by nbdime
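The conclusion rewritten in the diff above leans on `ClaudeSDKClient` keeping conversation context across queries. A minimal sketch of that multi-turn shape is below; since running the real client needs the `claude-agent-sdk` package and an API key, a hypothetical stub stands in for `ClaudeSDKClient` so only the control flow is shown, not the actual SDK behavior.

```python
import asyncio


class StubClient:
    """Hypothetical stand-in for ClaudeSDKClient: it remembers every prompt
    so a follow-up query can 'see' the earlier turn, mirroring how the real
    client maintains conversation context across queries."""

    def __init__(self):
        self.history = []

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def query(self, prompt):
        self.history.append(prompt)
        return f"answer using {len(self.history)} turn(s) of context"


async def main():
    # The real pattern is: async with ClaudeSDKClient(options=...) as client
    async with StubClient() as client:
        await client.query("Analyze the chart in the image")
        # The second query can refer back without restating the chart:
        return await client.query("Search the web for context on those insights")


print(asyncio.run(main()))  # → answer using 2 turn(s) of context
```

The point is only that the second `query()` call inherits state from the first, which is what lets the notebook's follow-up web search build on the earlier chart analysis.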

@PedramNavid PedramNavid force-pushed the test-notebook-diff-comment branch from d290da3 to 19353b5 Compare November 11, 2025 20:02
@github-actions

Notebook Changes

This PR modifies the following notebooks:

📓 claude_agent_sdk/00_The_one_liner_research_agent.ipynb

View diff
New file

Generated by nbdime

Changes:
- Updated title to include (TEST VERSION)
- Added test markdown cell at the end

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@PedramNavid PedramNavid force-pushed the test-notebook-diff-comment branch from 19353b5 to e1e33bb Compare November 11, 2025 20:08
@github-actions

Notebook Changes

This PR modifies the following notebooks:

📓 claude_agent_sdk/00_The_one_liner_research_agent.ipynb

View diff
nbdiff claude_agent_sdk/00_The_one_liner_research_agent.ipynb (72d1ebe2c721b4843ce5a3513a44d81c2b5b267f) claude_agent_sdk/00_The_one_liner_research_agent.ipynb (e1e33bb4f14e191d574079d25ebfdc1f5b15ccb1)
--- claude_agent_sdk/00_The_one_liner_research_agent.ipynb (72d1ebe2c721b4843ce5a3513a44d81c2b5b267f)  (no timestamp)
+++ claude_agent_sdk/00_The_one_liner_research_agent.ipynb (e1e33bb4f14e191d574079d25ebfdc1f5b15ccb1)  (no timestamp)
## inserted before /cells/0:
+  code cell:
+    source:
+      from dotenv import load_dotenv
+      from utils.agent_visualizer import print_activity
+      
+      from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient, query
+      
+      load_dotenv()
+    outputs:
+      output 0:
+        output_type: execute_result
+        execution_count: 1
+        data:
+          text/plain: True

## modified /cells/0/source:
@@ -1,22 +1,13 @@
-# Building a One-Liner Research Agent
+# 00 - The One-Liner Research Agent
 
-Research tasks consume hours of expert time: market analysts manually gathering competitive intelligence, legal teams tracking regulatory changes, engineers investigating bug reports across documentation. The core challenge isn't finding information but knowing what to search for next based on what you just discovered.
+PREFACE: We highly recommend reading [Building effective agents](https://www.anthropic.com/engineering/building-effective-agents) or [How we built our multi-agent research system](https://www.anthropic.com/engineering/built-multi-agent-research-system) in case you haven't. They are great reads and we will assume some basic understanding of agents! 
 
-The Claude Agent SDK makes it possible to build agents that autonomously explore external systems without a predefined workflow. Unlike traditional workflow automations that follow fixed steps, research agents adapt their strategy based on what they find--following promising leads, synthesizing conflicting sources, and knowing when they have enough information to answer the question.
+In this notebook we build our own (re)search agent, which is inherently a great use-case because of a few reasons:
+- The input to our system is not sufficient to produce an output, meaning there needs to be interaction with external systems (e.g., the internet)
+- There is no predefined workflow we can use since it is unclear what the agent will discover during its research
 
-## By the end of this cookbook, you'll be able to:
+Instead, a research agent requires the flexibility to explore unexpected leads and change direction based on what it finds. In its simplest form, a research agent can be an agent that simply searches the internet and summarizes it for you. 
 
-- Build a research agent that autonomously searches and synthesizes information with a few lines of code
+Below, we'll implement a basic research agent with just a few lines of code. We provide Claude with exactly one tool which the Claude Code SDK contains straight out of the box: [web search tool](https://docs.claude.com/en/docs/agents-and-tools/tool-use/web-search-tool). 
 
-This foundation applies to any task where the information needed isn't available upfront: competitive analysis, technical troubleshooting, investment research, or literature reviews.
-
-# Why Research Agents?
-
-Research is an ideal agentic use case for two reasons:
-
-1. **Information isn't self-contained**. The input question alone doesn't contain the answer. The agent must interact with external systems (search engines, databases, APIs) to gather what it needs.
-2. **The path emerges during exploration**. You can't predetermine the workflow. Whether an agent should search for company financials or regulatory filings depends on what it discovers about the business model. The optimal strategy reveals itself through investigation.
-
-In its simplest form, a research agent searches the web and synthesizes findings. Below, we'll build exactly that with the Claude Agent SDK's built-in web search tool in just a few lines of code.
-
-Note: You can also view the full list of [Claude Code's built-in tools](https://docs.claude.com/en/docs/claude-code/settings#tools-available-to-claude)
+> Check [here](https://docs.claude.com/en/docs/claude-code/settings#tools-available-to-claude) for a list of Claude Code's readily available tools

## deleted /cells/1-5:
-  markdown cell:
-    source:
-      # Prerequisites
-      
-      Before following this guide, ensure you have:
-      
-      **Required Knowledge**
-      
-      * Python fundamentals - comfortable with async/await, functions, and basic data structures
-      * Basic understanding of agentic patterns - we recommend reading [Building effective agents](https://www.anthropic.com/engineering/building-effective-agents) first if you're new to agents
-      
-      **Required Tools**
-      
-      * Python 3.11 or higher
-      * Anthropic API key [(get one here)](https://console.anthropic.com)
-      
-      **Recommended:**
-      * Familiarity with the Claude Agent SDK concepts
-      * Understanding of tool use patterns in LLMs
-      
-      
-      ## Setup
-      
-      First, install the required dependencies:
-  code cell:
-    source:
-      
-      %%capture
-      %pip install -U claude-agent-sdk python-dotenv
-  markdown cell:
-    source:
-      Note: Ensure your .env file contains:
-      
-      ```bash
-      ANTHROPIC_API_KEY=your_key_here
-      ```
-      
-      Load your environment variables and configure the client:
-  code cell:
-    source:
-      from dotenv import load_dotenv
-      
-      load_dotenv()
-      
-      MODEL = "claude-sonnet-4-5"
-  markdown cell:
-    source:
-      ## Building Your First Research Agent
-      
-      Let's start with the simplest possible implementation: a research agent that can search the web and synthesize findings. With the Claude Agent SDK, this takes just a few lines of code.
-      
-      The key is the query() function, which creates a stateless agent interaction. We'll provide Claude with a single tool, WebSearch, and let it autonomously decide when and how to use it based on our research question.

## modified /cells/6/outputs/0/text:
@@ -1,6 +1,4 @@
 🤖 Thinking...
 🤖 Using: WebSearch()
-🤖 Using: WebSearch()
-✓ Tool completed
 ✓ Tool completed
 🤖 Thinking...

## modified /cells/6/source:
@@ -1,10 +1,7 @@
-from claude_agent_sdk import ClaudeAgentOptions, query
-from utils.agent_visualizer import print_activity, print_final_result
-
 messages = []
 async for msg in query(
     prompt="Research the latest trends in AI agents and give me a brief summary",
     options=ClaudeAgentOptions(model="claude-sonnet-4-5", allowed_tools=["WebSearch"]),
 ):
     print_activity(msg)
-    messages.append(msg)
+    messages.append(msg)

## inserted before /cells/7:
+  code cell:
+    source:
+      print(
+          f"\nResult:\n{messages[-1].result if hasattr(messages[-1], 'result') and messages[-1].result else messages[-2].content[0].text}"
+      )

## deleted /cells/7-8:
-  code cell:
-    source:
-      print_final_result(messages)
-    outputs:
-      output 0:
-        output_type: stream
-        name: stdout
-        text:
-          
-          📝 Final Result:
-          ## Brief Summary: Latest Trends in AI Agents (2025)
-          
-          Based on current research, here are the key trends shaping AI agents in 2025:
-          
-          ### 🚀 **Explosive Growth & Adoption**
-          - Market projected to reach **$7.38 billion by end of 2025** (doubling from $3.7B in 2023)
-          - **85% of organizations** have integrated AI agents into at least one workflow
-          - **99% of enterprise developers** are exploring or building AI agents
-          
-          ### 🎯 **Key Technical Trends**
-          
-          1. **Agentic RAG** - Goal-driven systems that combine retrieval, reasoning, and autonomy for smarter assistants
-          
-          2. **Multi-Agent Systems** - The "orchestra approach" where specialized agents collaborate on complex tasks
-          
-          3. **Industry Specialization** - Moving beyond general assistants to domain experts (AI lawyers, radiologists, etc.)
-          
-          4. **Enhanced Autonomy** - Agents with memory, planning, reasoning, and self-correction capabilities
-          
-          5. **Interoperability Standards** - New protocols like MCP (Model Context Protocol) and A2A (Agent2Agent) enabling cross-platform communication
-          
-          ### 💼 **Real-World Impact**
-          - **30-40% productivity gains** in early enterprise deployments
-          - Autonomous task execution freeing humans for higher-value work
-          - Voice-controlled conversational agents handling complex workflows
-          - Proactive problem-solving before issues arise
-          
-          ### ⚠️ **Important Considerations**
-          - Human oversight remains critical
-          - Challenges with reliability, error handling, and security
-          - Most organizations aren't fully "agent-ready" yet
-          - Technology expected to reach maturity in 5-10 years
-          
-          **Bottom line:** 2025 is being called the "decade of AI agents," with rapid evolution from simple chatbots to autonomous, specialized problem-solvers transforming enterprise workflows.
-          
-          📊 Cost: $0.13
-          ⏱️  Duration: 42.04s
-  markdown cell:
-    source:
-      
-      ## What's happening here:
-      
-      - `query()` creates a single-turn agent interaction (no conversation memory)
-      - `allowed_tools=["WebSearch"]` gives Claude permission to search the web without asking for approval
-      - The agent autonomously decides when to search, what queries to run, and how to synthesize results
-      - `print_activity()` and `print_final_result` are helper functions that show the agent's actions in real-time and print the agent's final response along with cost and duration information.
-      
-      That's it! A functional research agent in 10 lines of code. The agent will search for relevant information, follow up on promising leads, and provide a synthesized summary.

## modified /cells/9/source:
@@ -1,24 +1,7 @@
-The query() function creates a stateless agent interaction. Each call is independent—no conversation memory, no context from previous queries. This makes it perfect for one-off research tasks where you need a quick answer without maintaining state.
+And that's all it takes! Just like that we have a research agent that can go and browse the web to answer (to the best of its ability, at least) any question you throw at it.
 
-**How tool permissions work:**
+Note that in our query we provided the argument `options`. Here we define the configuration, the capabilities and limitations of our agent. For example, we provide our agent with the ability to search the web by passing ```allowed_tool=["WebSearch"]```.
 
-The `allowed_tools=["WebSearch"]` parameter gives Claude permission to search without asking for approval. This is critical for autonomous operation:
+More specifically, `allowed_tools` is a list of tools that Claude will be able to use without any approvals. The rest of the tools are still available, but Claude will ask for approval to use them. That said, certain tools like `Read` and other base read-only tools are always allowed. If you want any tool to be removed from Claude's context, add it to `disallowed_tools` instead.
 
-- `Allowed tools` - Claude can use these freely (in this case, WebSearch)
-- `Other tools` - Available but require approval before use
-- `Read-only tools` - Tools like Read are always allowed by default
-- `Disallowed tools` - Add tools to disallowed_tools to remove them entirely from Claude's context
-
-**When to use stateless queries:**
-
-- One-off research questions where context doesn't matter
-- Parallel processing of independent research tasks
-- Scenarios where you want fresh context for each query
-
-**When not to use stateless queries:**
-
-- Multi-turn investigations that build on previous findings
-- Iterative refinement of research based on initial results
-- Complex analysis requiring sustained context
-
-Let's inspect what the agent actually did using the visualize_conversation helper:
+Now, to more closely inspect the actions our agent took, we have provided the ```visualize_conversation``` function.

## deleted /cells/10/outputs/0:
-  output:
-    output_type: stream
-    name: stdout
-    text:
-      
-      ============================================================
-      🤖 AGENT CONVERSATION TIMELINE
-      ============================================================
-      
-      ⚙️  System Initialized
-         Session: 6b742cec...
-      
-      🤖 Assistant:
-         💬 I'll research the latest trends in AI agents for you.
-      
-      🤖 Assistant:
-         🔧 Using tool: WebSearch
-            Query: "latest trends AI agents 2025"
-      
-      🤖 Assistant:
-         🔧 Using tool: WebSearch
-            Query: "AI agent developments autonomous systems 2025"
-      
-      
-      
-      🤖 Assistant:
-         💬 ## Brief Summary: Latest Trends in AI Agents (2025)
-      
-      Based on current research, here are the key trends shaping AI agents in 2025:
-      
-      ### 🚀 **Explosive Growth & Adoption**
-      - Market projected to reach **$7.38 billion by end of 2025** (doubling from $3.7B in 2023)
-      - **85% of organizations** have integrated AI agents into at least one workflow
-      - **99% of enterprise developers** are exploring or building AI agents
-      
-      ### 🎯 **Key Technical Trends**
-      
-      1. **Agentic RAG** - Goal-driven systems that combine r...
-      
-      ✅ Conversation Complete
-         Turns: 3
-         Cost: $0.13
-         Duration: 42.04s
-         Tokens: 1,833
-      
-      ============================================================
-      

## modified /cells/11/source:
@@ -1,12 +1,9 @@
-## From Prototype to Production: Three Key Improvements
+### Supercharging our agent
 
-Our one-line research agent works, but it's limited. Single queries without memory can't handle iterative research ("find X, then analyze Y based on what you found"). Let's explore three ways we can further improve our implementation.
+So far, we have laid out a very simple (maybe naive) implementation to illustrate how you can start leveraging the SDK to build a research agent. However, there are various ways we can improve our agent to turn it production ready. Let's cover a few of them:
 
-**1. Conversation Memory with ClaudeSDKClient**: Stateless queries can't build on previous findings. If you ask "What are the top AI startups?" then "How are they funded?", the second query has no context about which startups you mean. We can use `ClaudeSDKClient` to maintain conversation history across multiple queries.
+1. Notice how before we only sent one query? In many systems, a human will look at the output of the system, potentially assigning a follow up task. Just like text completions, if we want to send multiple queries to the agent (e.g., 1. analyze abc, 2. make xyz based on your analysis) we would have to copy over the entire analysis context in our second query. Instead, we can **[use the ClaudeSDKClient](https://docs.claude.com/en/docs/claude-code/sdk/sdk-python#1-the-claudesdkclient-class-recommended)** to maintain the conversation context for us.
 
+2. Another great way of steering the system is **providing a system prompt**, akin to a system prompt used for text completions. To learn how to write a good system prompt for a research agent, we recommend looking [here](https://github.com/anthropics/anthropic-cookbook/tree/main/patterns/agents/prompts).
 
-**2. System Prompts for Specialized Behavior**: Research domains often have specific requirements. Financial analysis needs different rigor than tech news summaries. Use the system prompt to encode your research standards, preferred sources, or output format. See our [agent prompting guide](https://github.com/anthropics/anthropic-cookbook/tree/main/patterns/agents/prompts) for research-specific examples.
-
-**3. Multimodal Research with the Read Tool**: Real research isn't just text. Market reports have charts, technical docs have diagrams, competitive analysis requires screenshot comparison. Enable the `Read` tool so Claude can analyze images, PDFs, and other visual content.
-
-Let's implement these three changes for our research agent.
+3. **Leveraging the `Read` tool** to enable multimodal input. This tool allows Claude to analyze charts, infographics, and complex system diagrams.

## modified /cells/12/outputs/0/text:
@@ -2,11 +2,13 @@
 🤖 Using: Read()
 ✓ Tool completed
 🤖 Thinking...
-🤖 Using: Glob()
+🤖 Using: Bash()
 ✓ Tool completed
+🤖 Thinking...
 🤖 Using: Read()
 ✓ Tool completed
 🤖 Thinking...
+🤖 Thinking...
 🤖 Using: WebSearch()
 ✓ Tool completed
 🤖 Thinking...

## modified /cells/12/source:
@@ -1,5 +1,3 @@
-from claude_agent_sdk import ClaudeSDKClient
-
 messages = []
 async with ClaudeSDKClient(
     options=ClaudeAgentOptions(

## deleted /cells/13:
-  markdown cell:
-    source:
-      This example combines all three improvements: conversation memory via ClaudeSDKClient, a system prompt for AI research specialization, and the Read tool for analyzing visual content.
-      
-      In the first query call, the agent reads and analyzes a chart image using the Read tool. Next, the Agent searches the web for context about the chart's findings—and critically, it remembers what it saw in the chart from the first query
-      
-      The system prompt instruction helps the agent focus on relevant industry context.
-      
-      One key difference from the first example: The `async with ClaudeSDKClient()` context manager maintains conversation state. The second query inherits context from the first—the agent knows which chart and which insights to investigate.

## deleted /cells/14/outputs/0:
-  output:
-    output_type: stream
-    name: stdout
-    text:
-      
-      ============================================================
-      🤖 AGENT CONVERSATION TIMELINE
-      ============================================================
-      
-      ⚙️  System Initialized
-         Session: fa819270...
-      
-      🤖 Assistant:
-         💬 I'll read and analyze the chart image for you.
-      
-      🤖 Assistant:
-         🔧 Using tool: Read
-      
-      
-      🤖 Assistant:
-         💬 Let me search for the file in the research_agent directory to find the correct path.
-      
-      🤖 Assistant:
-         🔧 Using tool: Glob
-      
-      
-      🤖 Assistant:
-         🔧 Using tool: Read
-      
-      
-      🤖 Assistant:
-         💬 ## Analysis of the Chart: Types of Projects in Claude.ai and Claude Code
-      
-      This chart compares how different user groups utilize **Claude.ai** (blue dots) versus **Claude Code** (orange dots) across seven project categories. Here are the key insights:
-      
-      ### **Major Findings:**
-      
-      1. **Personal Projects** (30.2% vs 36.0%)
-         - Highest usage category for both platforms
-         - Claude Code slightly leads, suggesting users prefer the coding interface for personal work
-         - Small gap indicates similar appea...
-      
-      ✅ Conversation Complete
-         Turns: 4
-         Cost: $0.04
-         Duration: 35.30s
-         Tokens: 760
-      
-      ⚙️  System Initialized
-         Session: fa819270...
-      
-      🤖 Assistant:
-         🔧 Using tool: WebSearch
-            Query: "Claude.ai vs Claude Code usage startup work enterprise personal projects developer tools 2024 2025"
-      
-      
-      🤖 Assistant:
-         💬 ## Investigation Results: Validation of Chart Insights
-      
-      The web search confirms and expands on the patterns shown in the chart:
-      
-      ### **Why Startups Prefer Claude Code (32.9% vs 13.1%)**
-      
-      The massive gap for startup work is validated by these findings:
-      - **Automation-first approach**: Claude Code connects to command line, sees project files, modifies codebases, runs tests, and commits to GitHub autonomously
-      - **Rapid prototyping strength**: Can generate full MERN stack apps from high-level descri...
-      
-      ✅ Conversation Complete
-         Turns: 2
-         Cost: $0.15
-         Duration: 46.35s
-         Tokens: 662
-      
-      ============================================================
-      

## modified /cells/15/source:
@@ -1,11 +1,8 @@
-## Building for Production
-
-Jupyter notebooks are great for learning, but production systems need reusable modules. We've packaged the research agent into research_agent/agent.py with a clean interface:
-
-### Core functions:
+### The Research Agent leaves Jupyter
 
+Finally, to be able to use the agent outside our notebook, we must put it in a Python script. A lightweight implementation of our research agent can be found in `research_agent/agent.py`. We define three functions:
 - `print_activity()` - Shows what the agent is doing in real-time
-- `get_activity_text()` - Extract activity text for custom handlers, such as logging or monitoring
-- `send_query()` - Main entry point for research queries with built-in activity display
+- `get_activity_text()` - Extracts activity text for custom handlers
+- `send_query()` - Main function for sending and handlingqueries with built-in activity display
 
 This agent can now be used in any Python script!

## modified /cells/16/source:
@@ -1 +1 @@
-For independent questions where conversation context doesn't matter:
+First an example to test a one-off query to the agent:

## inserted before /cells/19/outputs/0:
+  output:
+    output_type: stream
+    name: stdout
+    text:
+      🤖 Thinking...
+      🤖 Using: WebSearch()
+      ✓ Tool completed
+      🤖 Thinking...
+      
+      -----
+      
+      Initial research: Anthropic is an AI safety and research company founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company develops Claude, a family of large language models (LLMs) designed to be helpful, harmless, and honest.
+      
+      **Key points:**
+      - **Mission**: Build reliable, interpretable, and steerable AI systems with a focus on AI safety
+      - **Main product**: Claude AI assistant (which you're currently using!)
+      - **Structure**: Public benefit corporation balancing profit with humanity's long-term benefit
+      - **Funding**: Backed by major investments from Amazon ($8B total) and Google ($2B)
+      - **Focus areas**: AI safety, natural language processing, human feedback, and responsible AI development
+      
+      Anthropic positions itself as a "safety-first" AI lab, emphasizing the responsible development of AI systems to serve humanity's long-term well-being.
+      

## modified /cells/21/source:
@@ -1,35 +1,9 @@
 ## Conclusion
 
-### What You Built
+We've demonstrated how the Claude Code SDK enables you to build a functional research agent in just a few lines of code. By leveraging the built-in WebSearch tool, we created an agent capable of autonomous information gathering and synthesis. We also explored how the
+ClaudeSDKClient maintains conversation context across multiple queries and how to incorporate multimodal capabilities through the Read tool.
 
-In this cookbook, you built three progressively sophisticated research agents:
+This foundation in basic agentic workflows prepares you for more sophisticated implementations. In the next notebook, we'll advance to building a Chief of Staff agent that coordinates multiple specialized subagents, implements custom output styles for different
+stakeholders, and uses hooks for governance and compliance tracking.
 
-- Stateless research agent - One-line queries for independent research tasks
-- Stateful agent with memory - Multi-turn investigations that build on previous findings
-- Production module - Reusable research functions for integration into applications
-
-### Key Takeaways
-
-**When to use stateless queries (query()):**
-
-- Independent research questions
-- Parallel processing of unrelated tasks
-- Scenarios requiring fresh context each time
-
-**When to use stateful agents (ClaudeSDKClient):**
-
-- Multi-turn investigations building on previous findings
-- Iterative refinement of research
-- Complex analysis requiring sustained context
-
-Research agents excel when information isn't self-contained and the optimal workflow emerges during exploration—competitive analysis, technical troubleshooting, literature reviews, and investigative journalism all fit this pattern.
-
-### Next Steps
-
-This foundation in autonomous research prepares you for enterprise-grade multi-agent systems. In the next notebook, you'll learn to:
-
-Orchestrate specialized subagents under a coordinating agent
-Implement governance through hooks and custom commands
-Adapt output styles for different stakeholders (executives vs. technical teams)
-
-Next: [01_The_chief_of_staff_agent.ipynb](01_The_chief_of_staff_agent.ipynb) - From single agents to multi-agent orchestration.
+Next: [01_The_chief_of_staff_agent.ipynb](01_The_chief_of_staff_agent.ipynb) - Learn how to orchestrate complex multi-agent systems with enterprise-grade features.

Generated by nbdime

@PedramNavid PedramNavid added the wontfix This will not be worked on label Nov 19, 2025