
How to Build an AI Research Agent with Claude Code in 2026

Step-by-step tutorial for building a multi-step AI research agent using only Claude Code, a project-level CLAUDE.md operating brief, and a tight permission allowlist. The example agent fetches web pages, extracts claims, cross-checks against a second source, and writes a structured Markdown report. Tested on Claude Sonnet as of April 2026.

Overview

Claude Code is Anthropic's official command-line interface for Claude. Although marketed primarily as a coding assistant, the CLI accepts arbitrary instructions and can read files, run shell commands, and call out to other tools, which makes it a practical scaffold for building lightweight research agents. This tutorial walks through building a multi-step AI research agent using only Claude Code, a project-level CLAUDE.md, and a small handful of permitted Bash commands.

The example agent fetches recent web pages on a topic, extracts key claims, cross-checks them against a second source, and writes a structured Markdown report. The same pattern extends to competitive intelligence, internal knowledge base updates, and audit-style reviews.

Prerequisites

  • Claude Code installed and authenticated (npm install -g @anthropic-ai/claude-code, then run claude and complete the login prompt)
  • An Anthropic account with API access or a Claude Pro/Max subscription that includes Claude Code
  • curl and pandoc available on the PATH (used to fetch and clean web pages; a quick check is shown after this list)
  • A working directory dedicated to the agent's outputs
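
Before the first run, it is worth confirming that both external tools are actually on the PATH; a minimal check:

for tool in curl pandoc; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool (install it before continuing)"
done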

Step 1: Create the Project Skeleton

mkdir -p ~/agents/research-agent/{sources,reports}
cd ~/agents/research-agent
touch CLAUDE.md
mkdir .claude && touch .claude/settings.json

Step 2: Write CLAUDE.md

The CLAUDE.md file is the agent's standing brief. Claude reads it at the start of every session and treats its instructions as the project rulebook.

# Research Agent — Operating Brief

## Mission
Produce a balanced, citation-backed Markdown report on the topic the user names. 
Reports go in `reports/<slug>.md`. Source material goes in `sources/<slug>/`.

## Method
1. List 3-5 candidate URLs that cover the topic from different angles.
2. Use `curl` + `pandoc -f html -t plain` to fetch each URL into `sources/<slug>/<n>.txt`.
3. Read each file. Extract claims as a bullet list.
4. For every claim, cite the source URL inline.
5. Cross-check load-bearing claims against a second source. Flag any disagreement.
6. Write `reports/<slug>.md` with: Summary, Key Claims, Disagreements, Open Questions, Sources.

## Rules
- Do not invent URLs. If a fetch fails, report the failure and pick another candidate.
- Always include the date the page was fetched ("retrieved YYYY-MM-DD").
- Keep tone encyclopedic; avoid marketing language.
- Stop and ask the user before making more than 8 outbound HTTP requests.
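
That completes CLAUDE.md. For reference, step 2 of its method expands to one fetch pipeline per URL, roughly like the sketch below; <url>, <slug>, and <n> are placeholders the agent fills in at run time:

mkdir -p "sources/<slug>"
curl -sL "<url>" | pandoc -f html -t plain -o "sources/<slug>/<n>.txt"

pandoc reads the HTML from stdin when no input file is given, so the two allowlisted commands cover the whole fetch-and-clean step.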

Step 3: Lock Down Permissions

Put a tight allowlist in .claude/settings.json so the agent does not stop to ask before each safe command:

{
  "permissions": {
    "allow": [
      "Read",
      "Write",
      "Edit",
      "Bash(curl:*)",
      "Bash(pandoc:*)",
      "Bash(ls:*)",
      "Bash(mkdir:*)"
    ]
  }
}

Anything not on this list still triggers a permission prompt, which keeps the agent from running, for example, rm -rf without explicit consent.
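
If some commands should never run at all, even with explicit consent, the same permissions object also accepts a deny list. A minimal sketch, added alongside the allow block shown above (the rm and sudo patterns are illustrative choices, not requirements):

{
  "permissions": {
    "deny": [
      "Bash(rm:*)",
      "Bash(sudo:*)"
    ]
  }
}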

Step 4: Run the Agent

From the project directory:

claude "Research the current state of open-source workflow engines. Slug: oss-workflow-engines."

Claude Code reads CLAUDE.md, drafts the candidate URL list, fetches each page through the allowlisted curl, and writes the cleaned text to sources/oss-workflow-engines/. It then reads the files back, extracts claims, cross-checks the load-bearing ones, and writes reports/oss-workflow-engines.md.
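
Once the brief is stable, the same run also works non-interactively, which makes it easy to script or schedule. A sketch using Claude Code's print mode (-p), which runs the task and exits instead of opening an interactive session; the allowlist in .claude/settings.json still applies, and last-run.log is just an arbitrary file to capture the final summary:

claude -p "Research the current state of open-source workflow engines. Slug: oss-workflow-engines." > last-run.log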

Step 5: Iterate

Refine CLAUDE.md after each run. Common upgrades:

  • Add a "Stop conditions" section that lists when to abort (paywalled site, geo-blocked content, contradictory data).
  • Add an "Output schema" section with the exact Markdown headings expected, so reports stay comparable.
  • Record the model and date at the top of each report so older outputs do not get mistaken for current ones.
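
For the second bullet, an Output schema block in CLAUDE.md could look like the following sketch; the headings mirror step 6 of the method and can be adjusted to taste:

## Output schema
Every report in `reports/<slug>.md` uses exactly these headings, in this order:
1. Summary (150 words or fewer)
2. Key Claims (bulleted, one citation per claim)
3. Disagreements (keep the heading even if the section is empty)
4. Open Questions
5. Sources (URL plus "retrieved YYYY-MM-DD")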

Cost Notes

A single research run on Claude Sonnet using the agent above typically reads 5-8 pages, runs ~15 tool calls, and produces a 600-900 word report. As of April 2026 that costs roughly $0.10-$0.30 in API spend per run on the Anthropic API. Pro and Max subscribers running the same prompt against the included quota pay nothing per run, subject to the rate limit on their plan.
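
As a rough sanity check on those figures: assuming an illustrative 60,000 input tokens and 5,000 output tokens per run, and Sonnet list pricing of $3 per million input tokens and $15 per million output tokens (confirm current rates on Anthropic's pricing page), one run works out to about 0.06 × $3 + 0.005 × $15 ≈ $0.26, which sits inside the range quoted above.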

See the Claude Code tool page for the current entitlement matrix and the How to Set Up Claude Code with VS Code tutorial for the editor-side companion. For agent platforms with a hosted runtime, see the Best AI Agent Tools ranking.

Editor's Note: We use a slightly more elaborate version of this agent at ShadowGen for weekly client intelligence briefs. The biggest practical lesson was forcing the agent to record retrieval dates inline; without them, a report could not be revised three months later because no one could tell which claims had gone stale. The cheapest model that still produced reliable cross-checks in our testing was Claude Sonnet, not Opus, mainly because cross-checking does not need long-form reasoning, just careful comparison.

By Rafal Fila
