Day 4 · Build AI Products

Function Calling & Tool Chains

You define REST endpoints with schemas. Function calling is the same — you define tools with Zod schemas and the LLM decides when to call them. Today you'll go beyond single tool calls: parallel execution, error recovery, and multi-step tool chains where one tool's output feeds another.

80 min (+30 min boss) · ★★★☆☆
🔧 Bridge: REST endpoints + Zod → Tool definitions + LLM routing

Use this at work tomorrow

Wrap your existing internal APIs as tools — let an LLM orchestrate them via natural language.

Learning Objectives

  1. Define tools with Zod schemas (same library you already use)
  2. Understand the LLM-as-router mental model — it picks the right tool
  3. Handle parallel tool calls and multi-step tool chains
  4. Implement tool error recovery and fallback strategies
  5. Build an AI assistant that searches, calculates, and notifies

Ship It: AI assistant with live tools

By the end of this day, you'll build and deploy an AI assistant with live tools. This isn't a toy — it's a real project for your portfolio.

Before You Start — Rate Your Confidence

I can define tools with schemas, wire them into an LLM, handle parallel calls, and build error recovery so tool failures don't crash conversations.

1 = no idea · 5 = ship it blindfolded
Predict First — Then Learn

How does an LLM 'tool call' differ from a normal API call?

From Endpoints to Tools

You define REST endpoints like POST /api/send-email with a schema. Function calling is identical — you define tools with Zod schemas, and the LLM decides when to call them. The mental model shift: instead of the client calling your API, the LLM becomes an intelligent router that picks the right tool based on the user's intent.

💡Tool definitions = API endpoint definitions. Same Zod schemas. The LLM is the new 'client' that picks the right tool from intent.
Quick Pulse Check

What does the LLM use to decide which tool to call?

The LLM-as-Router Mental Model

Think of the LLM as a smart API gateway. It reads the user's request, looks at available tools (like an OpenAPI spec), picks the right one, generates valid parameters, and calls it. You don't write if/else routing logic — the LLM handles intent classification, parameter extraction, and orchestration in one step.

💡LLM = smart API gateway. It reads intent, picks the right tool, generates valid args. No if/else routing needed.
Quick Pulse Check

User says: 'What's 15% of $200?' — which tool gets called?

Predict First — Then Learn

When should the LLM call tools in parallel vs sequentially?

Parallel Tool Calls & Multi-Step Chains

Modern LLMs can call multiple tools in parallel (e.g., fetch weather AND search flights simultaneously). They can also chain tools — use the output of one tool as the input to another across multiple turns. This is the foundation for building agents (Day 5). The key pattern: tool results go back into the conversation, and the LLM decides what to do next.

💡Independent tools → parallel. Dependent tools → chain. maxSteps enables multi-turn tool use. This is the foundation for agents.
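The parallel half of that split can be sketched in plain TypeScript. This is a hedged illustration of what the SDK's runtime does for you, not its actual internals — `ToolCall`, `registry`, `runParallel`, and the stub tools are names invented for this example.

```typescript
// One model step may return several independent tool calls.
// The runtime can execute them concurrently with Promise.all.
type ToolCall = { tool: string; args: Record<string, unknown> };

// Stub implementations standing in for real API-backed tools.
const registry: Record<string, (args: any) => Promise<unknown>> = {
  getWeather: async (args) => ({ city: args.city, temp: 18 }),
  searchFlights: async (args) => ({ to: args.to, cheapest: 320 }),
};

async function runParallel(calls: ToolCall[]): Promise<unknown[]> {
  // Independent calls: fire all at once.
  // A dependent chain would instead await each result, feed it back into
  // the conversation, and let the model pick the next call — that loop
  // is what maxSteps enables.
  return Promise.all(calls.map((c) => registry[c.tool](c.args)));
}
```

The dependent case is deliberately *not* a `Promise.all` — each step's output has to reach the model before the next call exists.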
Predict First — Then Learn

A tool call fails (API timeout). What should happen?

Tool Error Handling & Recovery

Tools fail. APIs time out. Data is missing. The LLM needs to handle this gracefully. Pattern: wrap tool execution in try/catch, return structured errors, and let the LLM retry or use an alternative. Never let a tool failure crash the entire conversation — return an error message and let the LLM adapt.

💡Wrap tools in try/catch. Return structured errors to the LLM. Let it retry or adapt. Never crash the conversation.
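A minimal sketch of that pattern, assuming nothing beyond plain TypeScript — `safeExecute` and the `ToolResult` shape are illustrative names, not SDK APIs:

```typescript
// A structured result the LLM can read: success carries data,
// failure carries an error message instead of a thrown exception.
type ToolResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string; retryable: boolean };

async function safeExecute<T>(
  name: string,
  fn: () => Promise<T>
): Promise<ToolResult<T>> {
  try {
    return { ok: true, data: await fn() };
  } catch (err) {
    // Return the failure as data — the LLM sees it in the tool result
    // and can retry, pick an alternative tool, or explain to the user.
    const message = err instanceof Error ? err.message : String(err);
    return { ok: false, error: `${name} failed: ${message}`, retryable: true };
  }
}

// Usage inside a tool definition (hypothetical weatherAPI):
// execute: async ({ city }) => safeExecute("getWeather", () => weatherAPI.get(city))
```

The key design choice: the error crosses the boundary as a *value*, so the conversation survives and the model decides the recovery strategy.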
Quick Pulse Check

What's the safest way to handle tool errors in an LLM system?

The Full Evolution

Watch one function evolve through every concept you just learned.

Production Gotchas

Tool descriptions matter more than you think — the LLM reads them to decide what to call. Vague descriptions = wrong tool calls. Always validate tool parameters with Zod BEFORE execution (don't trust LLM-generated args blindly). Set timeouts on tool execution (LLMs can't wait forever). Log every tool call for debugging — you'll need it when the LLM calls the wrong tool.

Code Comparison

Endpoints vs Tool Definitions

REST endpoint design vs AI tool definitions — same schema, different caller

REST Endpoint (Traditional)
// Define a REST endpoint
app.post("/api/weather", async (req, res) => {
  const { city } = req.body;
  const data = await weatherAPI.get(city);
  res.json({
    temp: data.temperature,
    conditions: data.conditions,
  });
});

// Client calls it EXPLICITLY
const weather = await fetch(
  "/api/weather",
  {
    method: "POST",
    body: JSON.stringify({ city: "NYC" })
  }
);
Tool Definition (AI Engineering)
// Define a tool for the LLM
import { z } from "zod";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const tools = {
  getWeather: {
    description: "Get current weather for a city",
    parameters: z.object({
      city: z.string().describe("City name"),
    }),
    execute: async ({ city }) => {
      const data = await weatherAPI.get(city);
      return {
        temp: data.temperature,
        conditions: data.conditions,
      };
    },
  },
};

// LLM decides WHEN to call it
const { text } = await generateText({
  model: openai("gpt-4o"),
  tools,
  prompt: "What should I wear in NYC today?",
});
// LLM: calls getWeather({ city: "NYC" }) autonomously
// Then uses the result to answer about clothing

KEY DIFFERENCES

  • REST: client explicitly calls the endpoint with exact parameters
  • Tools: LLM autonomously decides WHEN to call and WITH WHAT args
  • Tool schemas = OpenAPI specs. Same Zod library, same pattern
  • The LLM is the 'smart router' — it picks the right tool from intent

Single Call vs Multi-Step Tool Chain

From simple API calls to autonomous multi-step workflows

Sequential API Calls (Traditional)
// Developer writes the orchestration
async function planTrip(destination: string) {
  // Step 1: Get weather
  const weather = await fetch(
    "/api/weather?city=" + destination
  ).then(r => r.json());

  // Step 2: Search flights
  const flights = await fetch(
    "/api/flights?to=" + destination
  ).then(r => r.json());

  // Step 3: Find hotels
  const hotels = await fetch(
    "/api/hotels?city=" + destination
  ).then(r => r.json());

  // Developer combines results
  return { weather, flights, hotels };
}
LLM Tool Chain (AI Engineering)
// LLM orchestrates the tools itself
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const tools = {
  getWeather: { /* ... */ },
  searchFlights: { /* ... */ },
  findHotels: { /* ... */ },
  calculateBudget: { /* ... */ },
};

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools,
  maxSteps: 5, // Allow multi-step
  prompt: "Plan a weekend trip to Paris. Budget is $2000.",
});
// LLM autonomously:
// 1. Calls getWeather("Paris")
// 2. Calls searchFlights + findHotels
//    in PARALLEL
// 3. Calls calculateBudget with results
// 4. Generates a complete trip plan

KEY DIFFERENCES

  • Traditional: YOU write the orchestration logic (order, conditions)
  • Tools: LLM decides the order, parallelism, and data flow
  • maxSteps allows multi-turn tool use (chain outputs to inputs)
  • The LLM can call multiple tools in parallel when they're independent

Bridge Map: REST endpoints + Zod → Tool definitions + LLM routing


Hands-On Challenges

Build, experiment, and get AI-powered feedback on your code.

Real-World Challenge

AI Assistant with Live Tools

Build and deploy a conversational AI assistant that can use real tools — search the web, check weather, perform calculations, and chain multiple tools together to answer complex questions. This is how production AI assistants work.

~3h estimated
Next.js 14+ · Vercel AI SDK · OpenAI GPT-4o · Zod · Tailwind CSS · Vercel (deploy)

Acceptance Criteria

  • Define 3+ tools with Zod schemas using the Vercel AI SDK tool() pattern
  • Wire tools into generateText() with maxSteps for multi-step tool chains
  • Show tool calls and results inline in the chat UI (which tool was called, with what arguments, and what it returned)
  • Handle tool execution errors gracefully with fallback responses
  • Support multi-turn conversation with tool-call history
  • Add loading states that show which tool is currently executing
  • Deploy to a public URL (Vercel, Netlify, etc.)

Build Roadmap


Create a new Next.js app with TypeScript and Tailwind CSS. Set up the project with a chat interface page and an API route for tool-augmented generation.

npx create-next-app@latest ai-assistant --typescript --tailwind --app
Plan your API route at /api/chat that handles tool-augmented conversation

Deploy Tip

Push to GitHub and import into Vercel. If using external APIs for tools (weather, search), set those API keys in Vercel environment variables too. Consider rate limiting your endpoint to prevent abuse.


After Learning — Rate Your Confidence Again

I can define tools with schemas, wire them into an LLM, handle parallel calls, and build error recovery so tool failures don't crash conversations.

1 = no idea · 5 = ship it blindfolded