# How to Build an AI Research Agent with Claude Code in 2026
Step-by-step tutorial for building a multi-step AI research agent using only Claude Code, a project-level CLAUDE.md operating brief, and a tight permission allowlist. The example agent fetches web pages, extracts claims, cross-checks against a second source, and writes a structured Markdown report. Tested on Claude Sonnet as of April 2026.
## Overview
Claude Code is Anthropic's official command-line interface for Claude. Although marketed primarily as a coding assistant, the CLI accepts arbitrary instructions and can read files, run shell commands, and call out to other tools, which makes it a practical scaffold for building lightweight research agents. This tutorial walks through building a multi-step AI research agent using only Claude Code, a project-level CLAUDE.md, and a small handful of permitted Bash commands.
The example agent fetches recent web pages on a topic, extracts key claims, cross-checks them against a second source, and writes a structured Markdown report. The same pattern extends to competitive intelligence, internal knowledge base updates, and audit-style reviews.
## Prerequisites

- Claude Code installed and authenticated (`npm install -g @anthropic-ai/claude-code`, then `claude auth`)
- An Anthropic account with API access or a Claude Pro/Max subscription that includes Claude Code
- `curl` and `pandoc` available on the PATH (used to fetch and clean web pages)
- A working directory dedicated to the agent's outputs
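Before the first run, it is worth confirming that the external tools are actually reachable. A quick check, assuming a POSIX shell:

```shell
# Check that each external dependency the agent shells out to is on the PATH
for tool in claude curl pandoc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```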
## Step 1: Create the Project Skeleton

```shell
mkdir -p ~/agents/research-agent/{sources,reports}
cd ~/agents/research-agent
touch CLAUDE.md
mkdir .claude && touch .claude/settings.json
```
## Step 2: Write CLAUDE.md

The `CLAUDE.md` file is the agent's standing brief. Claude reads it at the start of every session and treats its instructions as the project rulebook.
```markdown
# Research Agent — Operating Brief

## Mission

Produce a balanced, citation-backed Markdown report on the topic the user names.
Reports go in `reports/<slug>.md`. Source material goes in `sources/<slug>/`.

## Method

1. List 3-5 candidate URLs that cover the topic from different angles.
2. Use `curl` + `pandoc -f html -t plain` to fetch each URL into `sources/<slug>/<n>.txt`.
3. Read each file. Extract claims as a bullet list.
4. For every claim, cite the source URL inline.
5. Cross-check load-bearing claims against a second source. Flag any disagreement.
6. Write `reports/<slug>.md` with: Summary, Key Claims, Disagreements, Open Questions, Sources.

## Rules

- Do not invent URLs. If a fetch fails, report the failure and pick another candidate.
- Always include the date the page was fetched ("retrieved YYYY-MM-DD").
- Keep tone encyclopedic; avoid marketing language.
- Stop and ask the user before making more than 8 outbound HTTP requests.
```
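Step 2 of the method can be exercised by hand before handing it to the agent. A minimal sketch, where the slug and URL are placeholders:

```shell
# Fetch one candidate page and reduce it to plain text, as the brief describes
SLUG="example-topic"                      # placeholder slug
URL="https://example.com/some-article"    # placeholder URL; the agent must never invent one
mkdir -p "sources/$SLUG"
curl -sL --max-time 30 "$URL" | pandoc -f html -t plain > "sources/$SLUG/1.txt"
# Record the retrieval date, per the Rules section
printf 'retrieved %s\n' "$(date +%Y-%m-%d)" >> "sources/$SLUG/1.txt"
```

If the pipeline produces an empty file, the site is likely blocking plain `curl`, which is exactly the kind of failure the brief tells the agent to report rather than paper over.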
## Step 3: Lock Down Permissions

Put a tight allowlist in `.claude/settings.json` so the agent does not stop to ask before each safe command:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Write",
      "Edit",
      "Bash(curl:*)",
      "Bash(pandoc:*)",
      "Bash(ls:*)",
      "Bash(mkdir:*)"
    ]
  }
}
```
Anything not on this list still triggers a permission prompt. That keeps the agent from running, say, `rm -rf` without explicit consent.
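If you want certain commands refused outright rather than merely prompted for, the settings file also accepts a deny list. A sketch, assuming the `deny` key supported by current Claude Code releases:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Write",
      "Edit",
      "Bash(curl:*)",
      "Bash(pandoc:*)",
      "Bash(ls:*)",
      "Bash(mkdir:*)"
    ],
    "deny": [
      "Bash(rm:*)",
      "Bash(sudo:*)"
    ]
  }
}
```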
## Step 4: Run the Agent

From the project directory:

```shell
claude "Research the current state of open-source workflow engines. Slug: oss-workflow-engines."
```
Claude Code reads `CLAUDE.md`, drafts the candidate URL list, fetches each page through the allowlisted `curl`, and writes the cleaned text to `sources/oss-workflow-engines/`. It then reads the files back, extracts claims, cross-checks the load-bearing ones, and writes `reports/oss-workflow-engines.md`.
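For unattended runs the same prompt works non-interactively. A sketch, assuming the `-p`/`--print` flag in current Claude Code releases:

```shell
# Print mode: run the brief once, then confirm the artifacts landed
claude -p "Research the current state of open-source workflow engines. Slug: oss-workflow-engines."
ls reports/oss-workflow-engines.md sources/oss-workflow-engines/
```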
## Step 5: Iterate

Refine `CLAUDE.md` after each run. Common upgrades:
- Add a "Stop conditions" section that lists when to abort (paywalled site, geo-blocked content, contradictory data).
- Add an "Output schema" section with the exact Markdown headings expected, so reports stay comparable.
- Record the model and date at the top of each report so older outputs do not get mistaken for current ones.
## Cost Notes
A single research run on Claude Sonnet using the agent above typically reads 5-8 pages, runs ~15 tool calls, and produces a 600-900 word report. As of April 2026 that costs roughly $0.10-$0.30 in API spend per run on the Anthropic API. Pro and Max subscribers running the same prompt against the included quota pay nothing per run, subject to the rate limit on their plan.
See the Claude Code tool page for the current entitlement matrix and the How to Set Up Claude Code with VS Code tutorial for the editor-side companion. For agent platforms with a hosted runtime, see the Best AI Agent Tools ranking.
Editor's Note: We use a slightly more elaborate version of this agent at ShadowGen for client weekly intelligence briefs. The biggest practical lesson was forcing the agent to record retrieval dates inline; without that the same report could not be revised three months later because no one could tell which claims were stale. The cheapest model that still produced reliable cross-checks in our testing was Claude Sonnet, not Opus, mainly because cross-checking does not need long-form reasoning, just careful comparison.