LangChain Web Research Agent: Complete Tutorial
Step-by-step guide to building a LangChain agent that can research any topic on the web autonomously.
Sarah Kim
Developer Advocate

LangChain makes it easy to build AI agents with custom tools. This tutorial shows you how to create a web research agent that can search, read, and synthesize information from the web.
Architecture
User Query → Agent (GPT-4) → Tools (Search, Read) → Synthesis → Response
Step 1: Install Dependencies
npm install langchain @langchain/openai @langchain/core
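The code below reads both OPENAI_API_KEY (used implicitly by ChatOpenAI) and TRYB_API_KEY from the environment. One way to provide them, assuming a local .env file and the dotenv package (an extra dependency, not part of the install line above):

// .env (example layout; keep this file out of version control)
// OPENAI_API_KEY=sk-...
// TRYB_API_KEY=...

import "dotenv/config"; // requires: npm install dotenv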
Step 2: Create the Tryb Tools
import { DynamicTool } from "langchain/tools";

const searchTool = new DynamicTool({
  name: "web_search",
  description: "Search the web for information. Input should be a search query.",
  func: async (query: string) => {
    const res = await fetch("https://api.tryb.dev/v1/search", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.TRYB_API_KEY}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({ query, num_results: 5 })
    });
    const data = await res.json();
    return JSON.stringify(data.results);
  }
});
const readTool = new DynamicTool({
  name: "read_webpage",
  description: "Read the content of a webpage. Input should be a URL.",
  func: async (url: string) => {
    const res = await fetch("https://api.tryb.dev/v1/read", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.TRYB_API_KEY}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({ url })
    });
    const data = await res.json();
    return data.data.markdown;
  }
});
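Before wiring the tools into an agent, you can call one directly to confirm the TRYB_API_KEY is being picked up. A minimal check (DynamicTool is a Runnable in recent LangChain versions, so invoke works on it):

// Quick sanity check: run the search tool on its own.
const sample = await searchTool.invoke("LangChain web research agent");
console.log(sample); // JSON string of results returned by the Tryb search endpoint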
Step 3: Create the Agent
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4-turbo-preview" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", `You are a research assistant. Use the web_search tool to find relevant pages, then use read_webpage to get detailed content. Synthesize findings into a comprehensive answer.`],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"]
]);

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools: [searchTool, readTool],
  prompt
});

const executor = new AgentExecutor({ agent, tools: [searchTool, readTool] });
Step 4: Run Research Queries
const result = await executor.invoke({
  input: "What are the latest developments in quantum computing?"
});
console.log(result.output);
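To see which tools the agent called while answering, AgentExecutor can also return its intermediate steps. A sketch (the exact shape of the steps may vary by LangChain version):

// Optional: surface the tool calls alongside the final answer for debugging.
const debugExecutor = new AgentExecutor({
  agent,
  tools: [searchTool, readTool],
  returnIntermediateSteps: true
});

const debugResult = await debugExecutor.invoke({
  input: "What are the latest developments in quantum computing?"
});

// Each step pairs the tool invocation (action) with what the tool returned (observation).
for (const step of debugResult.intermediateSteps) {
  console.log(step.action.tool, step.action.toolInput);
}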
Advanced: Adding Memory
import { BufferMemory } from "langchain/memory";

const memory = new BufferMemory({
  memoryKey: "chat_history",
  returnMessages: true
});

// For the stored history to actually reach the model, the prompt from Step 3
// also needs a ["placeholder", "{chat_history}"] message before the human turn.
const executorWithMemory = new AgentExecutor({
  agent,
  tools: [searchTool, readTool],
  memory
});
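With memory attached (and the chat_history placeholder added to the prompt), follow-up questions can build on earlier answers. A rough usage sketch:

// First question seeds the conversation history.
await executorWithMemory.invoke({
  input: "What are the latest developments in quantum computing?"
});

// The follow-up can refer back to the previous answer instead of restating the topic.
const followUp = await executorWithMemory.invoke({
  input: "Which of those developments came out of university labs?"
});
console.log(followUp.output);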
Best Practices
- Limit tool calls to prevent runaway costs
- Cache frequently accessed URLs
- Add timeout handling for long-running research
- Log all tool calls for debugging
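As a rough sketch of how some of these practices map onto the executor: maxIterations caps the tool-call loop, a plain Promise.race wrapper adds a timeout, and an inline callback handler logs tool calls. Caching frequently read URLs can live inside readTool's func as a simple Map keyed by URL (not shown here).

// Cap the tool-call loop so one runaway query can't burn through API credits.
const boundedExecutor = new AgentExecutor({
  agent,
  tools: [searchTool, readTool],
  maxIterations: 6
});

// Simple timeout guard: stop waiting after the given number of milliseconds.
// (Promise.race does not cancel the underlying run; it only stops awaiting it.)
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("Research timed out")), ms)
    )
  ]);

// Log every tool call for debugging via an inline callback handler.
const answer = await withTimeout(
  boundedExecutor.invoke(
    { input: "What are the latest developments in quantum computing?" },
    {
      callbacks: [
        {
          handleToolStart: async (_tool: any, input: string) => {
            console.log(`[tool call] input: ${input}`);
          }
        }
      ]
    }
  ),
  120_000
);
console.log(answer.output);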
Full Source Code
Get the complete implementation: GitHub Repository

Sarah Kim
Developer Advocate at Tryb
Sarah helps developers build AI applications.


