Sameer Singh

If you are learning LangChain, there is one concept you absolutely cannot skip: Chains.
Chains are so fundamental to this framework that the library itself is named after them. But what exactly are chains? Why do we need them? And how do you build pipelines that are sequential, parallel, or even conditional?
In this in-depth guide, you will learn:

- What chains are and why they matter
- How to build sequential chains, where one step feeds the next
- How to run independent chains in parallel with `RunnableParallel`
- How to add branching logic with `RunnableBranch`

Whether you are a developer just getting started with LangChain or someone looking to go deeper, this guide will give you a rock-solid understanding of one of the most important patterns in modern LLM application development.
Before diving into chains, let's briefly revisit the foundational building blocks of LangChain that you should already be familiar with:
| Concept | What It Does |
|---|---|
| Models | Lets you call LLMs like GPT, Claude, Gemini |
| Prompts | Structures the input you send to the model |
| Structured Output | Ensures the model returns consistent JSON |
| Output Parsers | Formats and extracts meaningful data from responses |
Chains are the next layer on top of all of these. They are the glue that connects these components into a working, end-to-end pipeline.
In the simplest possible terms, a Chain is a sequence of steps connected together so that the output of one step automatically becomes the input of the next.
Think of it like an assembly line in a factory. Each station does one specific job. The product moves from station to station without any worker having to manually carry it. At the end of the line, you get a finished product.
In LangChain, your "product" is the LLM's response, and the "stations" are components like prompts, models, and parsers.
A basic chain might look like this conceptually:
User Input -> Prompt Template -> LLM Model -> Output Parser -> Final Response
Each arrow represents data flowing automatically from one component to the next. You set up the chain once, and then call it with a single command.
Before chains existed, building even a simple LLM application required a lot of manual, repetitive code. Let's look at what the process looked like without chains:
```python
# Without chains - manual approach
prompt = prompt_template.format(topic=user_input)
response = model.invoke(prompt)
text = response.content
parsed = parser.parse(text)
final_output = do_something_with(parsed)
```

This is fine for a two-step process. But real applications rarely stay simple. As your app grows:

- More steps get added to the pipeline
- Every output has to be manually passed into the next step's input
- Glue code and error handling pile up everywhere

Managing all of this manually leads to messy, hard-to-maintain code.
Chains solve this by letting you:

- Declare the entire pipeline once, in one place
- Connect components so data flows between them automatically
- Run the whole thing with a single call

LangChain uses a concept called LCEL (LangChain Expression Language) to build chains. The key operator is the pipe symbol `|`, which you can think of as "then".
Here is what it looks like in practice:
```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

prompt = PromptTemplate.from_template("Give me 5 interesting facts about {topic}.")
model = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

# Building the chain
chain = prompt | model | parser

# Running the chain
result = chain.invoke({"topic": "cricket"})
print(result)
```

When you call `chain.invoke({"topic": "cricket"})`:

1. `PromptTemplate` formats the prompt using the `topic` value
2. The formatted prompt is sent to the `ChatOpenAI` model
3. `StrOutputParser` extracts the plain text from the model's response

That's the power of chains in a nutshell.
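Because the composed chain is itself a Runnable, you get more than `invoke` for free: `batch` and `stream` are part of the same standard interface. A quick sketch, reusing the chain above:

```python
# Run the same chain over several inputs at once
results = chain.batch([{"topic": "cricket"}, {"topic": "chess"}])

# Stream the response token by token as the model generates it
for token in chain.stream({"topic": "cricket"}):
    print(token, end="", flush=True)
```

Now let's explore the three major types.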
A Sequential Chain is the most straightforward type. Steps execute one after another, in a fixed order. The output of Step 1 becomes the input of Step 2, and so on.
Use sequential chains when:

- Each step depends on the output of the previous step
- The order of operations matters
- You are building a multi-stage task, such as generate, then refine, then summarize
Imagine an app that takes a topic as input, generates a detailed report, and then produces a concise summary of that report.
Flow:
Topic -> [Report Generator] -> Report Text -> [Summarizer] -> Summary
Implementation:
```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

# Step 1: Generate a detailed report
report_prompt = PromptTemplate.from_template(
    "Write a detailed report on the topic: {topic}. Include key facts, history, and current relevance."
)

# Step 2: Summarize that report
summary_prompt = PromptTemplate.from_template(
    "Summarize the following report in 3 concise bullet points:\n\n{report}"
)

# Build the sequential chain
report_chain = report_prompt | model | parser
summary_chain = summary_prompt | model | parser

# Connect them: output of report_chain feeds into summary_chain,
# wrapped into a dict so the summary prompt receives {report}
full_chain = report_chain | (lambda report: {"report": report}) | summary_chain

# Run
result = full_chain.invoke({"topic": "Artificial Intelligence in Healthcare"})
print(result)
```

What happens step by step:

1. The topic enters the chain, and `report_prompt` formats it into a detailed prompt
2. The model writes the report, and `StrOutputParser` extracts the text
3. The lambda wraps that text in a dictionary so it enters `summary_prompt` as the `{report}` variable
4. The summary chain condenses the report into three concise bullet points

Sequential chains are ideal for multi-stage reasoning tasks, content pipelines, document workflows, and anywhere you need one output to feed into the next task.
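As a side note, LCEL offers a slightly more idiomatic way to handle the reshaping step: a plain dict used inside a chain is automatically coerced into a `RunnableParallel`, which removes the need for the lambda. A minimal sketch, assuming the same components defined above:

```python
# A dict inside an LCEL chain is coerced to RunnableParallel, so this pipes
# the report text into summary_prompt's {report} variable without a lambda.
full_chain = {"report": report_chain} | summary_prompt | model | parser
```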
A Parallel Chain runs multiple independent chains at the same time and merges their results. Instead of waiting for one chain to finish before starting another, both (or all) chains execute simultaneously.
Use parallel chains when:

- The tasks are independent of each other
- All tasks work from the same input
- You want to cut total response time by not running tasks back to back
Imagine a study assistant app. A user uploads a document or piece of text and wants both:

- Concise study notes
- A quiz to test their understanding
Both tasks are independent. They take the same input (the document text) and produce different outputs. There is no reason to run them one after the other.
Flow:
Document Text -> [Notes Chain] -> Notes
Document Text -> [Quiz Chain] -> Quiz
Notes + Quiz -> [Merge Chain] -> Combined Output

Implementation:
```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

model = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

# Notes chain
notes_prompt = PromptTemplate.from_template(
    "Create concise bullet-point study notes from the following text:\n\n{text}"
)
notes_chain = notes_prompt | model | parser

# Quiz chain
quiz_prompt = PromptTemplate.from_template(
    "Generate 5 multiple-choice quiz questions based on the following text:\n\n{text}"
)
quiz_chain = quiz_prompt | model | parser

# Parallel chain - runs both at the same time
parallel_chain = RunnableParallel({
    "notes": notes_chain,
    "quiz": quiz_chain
})

# Merge the results
merge_prompt = PromptTemplate.from_template(
    "Here are the study notes:\n{notes}\n\nHere is the quiz:\n{quiz}\n\nPresent this as a structured study package."
)
merge_chain = merge_prompt | model | parser

# Full pipeline
full_chain = parallel_chain | merge_chain

# Run
result = full_chain.invoke({"text": "Your document content here..."})
print(result)
```

Why this is powerful:

- `notes_chain` and `quiz_chain` run at the same time, cutting total response time
- The `RunnableParallel` output is a dictionary, which the merge chain then uses to produce a unified result

Parallel chains are especially useful in multi-model architectures, content generation systems, and research aggregation tools.
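To illustrate the multi-model idea: since each branch is its own chain, nothing stops you from giving each branch its own model. A sketch, assuming a cheaper model such as gpt-4o-mini is available to you:

```python
# Hypothetical variant: a faster, cheaper model handles the quiz branch
# while the stronger model writes the notes.
fast_model = ChatOpenAI(model="gpt-4o-mini")

parallel_chain = RunnableParallel({
    "notes": notes_prompt | model | parser,
    "quiz": quiz_prompt | fast_model | parser
})
```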
A Conditional Chain is a chain that takes different paths based on a condition. Instead of always running the same pipeline, the chain evaluates some logic and branches accordingly.
Think of it like an if-else statement inside your pipeline.
Use conditional chains when:

- Different inputs call for different handling
- You are routing based on a classification, like sentiment or intent
- You need a fallback path when no condition matches
Imagine a customer feedback system. A user submits feedback. Your app needs to:

- Detect whether the feedback is positive or negative
- Reply with a warm thank-you if it is positive
- Reply with an empathetic apology if it is negative
Flow:
Feedback -> [Sentiment Detector] -> "positive" OR "negative"
Sentiment -> [RunnableBranch] -> [Positive Reply] or [Negative Reply]

Before we look at the code, let's address a common pitfall.
If your sentiment detector returns inconsistent output like:

- "Positive"
- "This is a positive sentiment."
- "The feedback is positive in nature."

...your conditional logic will break unpredictably.
This is why you must use Pydantic-based structured output for any step that drives conditional logic. Pydantic enforces that the output is always one of the allowed values.
```python
from pydantic import BaseModel
from typing import Literal

class FeedbackSentiment(BaseModel):
    sentiment: Literal["positive", "negative"]
```

By binding this schema to your model, you guarantee the output is always either "positive" or "negative" and nothing else.
```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch, RunnableLambda
from pydantic import BaseModel
from typing import Literal

model = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

# Structured output schema for sentiment
class FeedbackSentiment(BaseModel):
    sentiment: Literal["positive", "negative"]

# Sentiment detection chain (structured output)
sentiment_prompt = PromptTemplate.from_template(
    "Analyze the sentiment of the following customer feedback and classify it as positive or negative.\n\nFeedback: {feedback}"
)
sentiment_chain = sentiment_prompt | model.with_structured_output(FeedbackSentiment)

# Positive reply chain
positive_prompt = PromptTemplate.from_template(
    "Write a warm, genuine thank-you response to this positive customer feedback:\n\n{feedback}"
)
positive_chain = positive_prompt | model | parser

# Negative reply chain
negative_prompt = PromptTemplate.from_template(
    "Write a sincere, empathetic apology and offer assistance for this negative customer feedback:\n\n{feedback}"
)
negative_chain = negative_prompt | model | parser

# Default chain (fallback)
default_chain = RunnableLambda(lambda x: "Thank you for reaching out. We will get back to you shortly.")

# Conditional branching
branch = RunnableBranch(
    (lambda x: x["sentiment"].sentiment == "positive", positive_chain),
    (lambda x: x["sentiment"].sentiment == "negative", negative_chain),
    default_chain
)

# Full pipeline
def run_pipeline(feedback_text):
    sentiment_result = sentiment_chain.invoke({"feedback": feedback_text})
    reply = branch.invoke({"sentiment": sentiment_result, "feedback": feedback_text})
    return reply

# Test it
print(run_pipeline("I absolutely loved your product! It changed my life."))
print(run_pipeline("The service was terrible and nothing worked as expected."))
```

What happens:

1. The sentiment chain returns a `FeedbackSentiment` object with `sentiment` set to "positive" or "negative"
2. `RunnableBranch` evaluates each condition in order and routes the input to the matching reply chain
3. If neither condition matches, the default chain returns a generic acknowledgment
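The same routing can also be expressed as a single LCEL pipeline instead of a helper function. A minimal sketch using `RunnablePassthrough.assign`, which adds the sentiment result to the input dict before branching:

```python
from langchain_core.runnables import RunnablePassthrough

# Attach the sentiment result to the input, then route through the branch
full_chain = RunnablePassthrough.assign(sentiment=sentiment_chain) | branch

print(full_chain.invoke({"feedback": "I absolutely loved your product!"}))
```

Conditional chains make your application intelligent and adaptive rather than rigid and one-size-fits-all.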
| Chain Type | Structure | Best For | Key Tool |
|---|---|---|---|
| Simple Chain | One linear pipeline | Basic prompt-to-output tasks | LCEL pipe operator |
| Sequential Chain | Step after step | Multi-stage tasks where each step depends on the previous | LCEL pipe operator |
| Parallel Chain | Multiple branches running together | Independent tasks from the same input | RunnableParallel |
| Conditional Chain | Branching logic based on conditions | Routing, dynamic responses, sentiment-based flows | RunnableBranch |
Chains are not just a learning exercise. They appear in nearly every serious LLM application:
Retrieval-Augmented Generation (RAG): A sequential chain that retrieves documents, formats them into a prompt, and generates a grounded answer (a minimal sketch follows this list).
Multi-Agent Systems: Parallel chains where different agents work on different subtasks simultaneously.
Customer Support Bots: Conditional chains that route conversations based on intent classification.
Content Generation Pipelines: Sequential chains that outline, draft, edit, and format content in stages.
Data Extraction Workflows: Sequential chains that extract, validate, and transform structured data from unstructured text.
Document Analysis Tools: Parallel chains that summarize, classify, and extract key information from the same document simultaneously.
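To make the RAG pattern concrete, here is a minimal sketch of a retrieval chain. It assumes you already have a `retriever` (any LangChain retriever, for example `vector_store.as_retriever()`); the `model` and `parser` are the same ones used throughout this guide:

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough

rag_prompt = PromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join the retrieved documents into one context string
    return "\n\n".join(doc.page_content for doc in docs)

# `retriever` is assumed to exist, e.g. from a vector store
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | model
    | parser
)

answer = rag_chain.invoke("What are the key findings of the report?")
```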
1. Not using structured output in conditional chains. If your branching logic depends on a string comparison, make sure the model always returns exactly the expected string. Use Pydantic output schemas to enforce this.
2. Making chains too long without intermediate validation. Very long sequential chains can fail silently in the middle. Add intermediate checks or logging when building complex pipelines, as in the sketch below.
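One lightweight way to do this (a sketch, not the only option) is a small pass-through function between steps that logs whatever flows by:

```python
from langchain_core.runnables import RunnableLambda

def log_step(value):
    # Print a preview of the intermediate value, then pass it through unchanged
    print(f"[intermediate] {str(value)[:200]}")
    return value

# Tap the sequential pipeline between its two stages
full_chain = report_chain | RunnableLambda(log_step) | (lambda report: {"report": report}) | summary_chain
```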
3. Using parallel chains for dependent tasks. Parallel chains work only when the tasks are independent. If Task B needs the output of Task A, it must be sequential, not parallel.
4. Ignoring error handling. Chains do not automatically handle model failures, rate limits, or unexpected outputs. Wrap your chains in try-except blocks for production use.
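A minimal sketch of what that can look like. `with_retry` is a standard method on every Runnable; the exception handling here is deliberately generic:

```python
# Retry transient failures (e.g. rate limits) up to 3 times
robust_chain = chain.with_retry(stop_after_attempt=3)

try:
    result = robust_chain.invoke({"topic": "cricket"})
except Exception as exc:
    # Fall back gracefully instead of crashing the app
    result = f"Sorry, something went wrong: {exc}"
```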
Once you have a solid understanding of chains, the natural next topics to explore are Runnables (the lower-level interface behind LCEL), retrieval-augmented generation, and agents.
Each of these builds directly on your understanding of chains.
Chains are the backbone of every real LangChain application. They transform scattered, manual code into clean, composable, and scalable pipelines.
Start by building a simple chain. Then add a sequential step. Then try running two chains in parallel. Finally, add conditional branching to make your app truly dynamic.
Once you understand all four chain types and when to use each one, you are fully equipped to build production-grade LLM applications with LangChain.
The road from beginner to builder starts here.