LangChain Chains and Prompt Templates
Author: Venkata Sudhakar
LangChain is an open-source framework for building applications powered by Large Language Models (LLMs). It provides a standard interface for connecting LLMs with external tools, data sources, memory, and other LLMs, making it much easier to build complex AI workflows than calling LLM APIs directly. LangChain has become one of the most widely adopted frameworks for building RAG pipelines, conversational agents, and document processing applications.

The two most fundamental LangChain concepts are prompt templates and chains. A prompt template is a reusable, parameterised prompt with placeholders for variable inputs. Instead of building prompt strings manually with Python string formatting, prompt templates keep your prompts clean, testable, and composable. A chain is a sequence of steps where the output of one step becomes the input of the next. LangChain Expression Language (LCEL) is the modern way to compose chains using the pipe operator (|), which makes the data flow explicit and readable.

The below example shows how to use LangChain prompt templates and LCEL chains to build a reusable document summariser and a follow-up Q&A chain.
It gives the following output:
Summary: Change Data Capture (CDC) is a pattern that reads database transaction logs
to capture real-time changes, with Debezium serving as an open-source tool that
publishes these changes to Apache Kafka topics. Downstream consumers can then
subscribe to these Kafka topics to react to database events without polling,
enabling use cases such as data warehouse loading and cache invalidation.
The below example shows a more advanced LCEL chain that performs two sequential LLM calls: first extracting key topics from a document, then generating a quiz based on those topics.
It gives the following output:
Q1: What data structure does Apache Kafka use to store messages?
A) Relational table B) Distributed commit log
C) Key-value store D) Binary heap
Correct Answer: B
Q2: How does Kafka distribute partitions in a consumer group?
A) Each consumer reads all partitions
B) Partitions are assigned randomly on each poll
C) Each partition is assigned to exactly one consumer in the group
D) All consumers share a single partition
Correct Answer: C
Q3: What strategy does a producer use when a message key is provided?
A) Round-robin across all partitions
B) Random partition selection
C) Key-based partition assignment (same key always goes to same partition)
D) Always partition 0
Correct Answer: C
Key LangChain LCEL concepts:

Composability - Any LCEL runnable can be used anywhere in a chain. You can swap ChatOpenAI for Claude or a local Ollama model by changing one line, and the rest of the chain stays the same.

Streaming - LCEL chains support streaming by default. Replace .invoke() with .stream() and LangChain will yield tokens as they arrive from the LLM, enabling real-time streaming responses in web applications.

Parallel execution - Wrap multiple runnables in a dict (as shown in the sequential chain example) and LCEL runs them in parallel, reducing total latency when two independent LLM calls are needed.