LangChain Evolution Part 2: From Hardcoded Workflows to Dynamic Graphs
In Part 1 of this series, we celebrated a huge victory. We dismantled a fragile, do-everything prompt and replaced it with an intelligent “project manager” agent that uses a team of specialized tools. It was a massive leap forward in reliability and maintainability.
The agent, with its `run()` method orchestrating each step, was a resounding success. It could reason, call tools sequentially, and even pause for human input. But after using it for a while, a new, more subtle problem emerged. I hadn’t slain the monolith; I had just turned it into a rigid, hardcoded script.
The agent’s “brain” was trapped inside a long chain of `if/else` statements.
The New Bottleneck: The Tyranny of the `run()` Method
My `BlueprintAgent.run()` method was the heart of the system, but it was also its Achilles’ heel. It was a perfect, linear script for a perfect, linear world.
```typescript
// A simplified look at the agent's rigid logic
async run(input) {
  // ...
  this.state = await this.orchestrator.executeStep("analyze_project", ...);
  const analysis = await this.planner.analyzeProject(this.state);

  if (analysis.needsQuestions) { // Fork #1
    this.state = await this.orchestrator.executeStep("generate_questions", ...);
    this.state = await this.orchestrator.executeStep("await_human_input", ...);
  }

  const artifactSelection = await this.planner.selectArtifacts(this.state);

  if (artifactSelection.artifacts.includes("requirements")) { // Fork #2
    this.state = await this.orchestrator.executeStep("generate_requirements", ...);
  }

  // ... many more `if` statements followed
}
```
This approach, while a huge improvement over a single prompt, introduced its own set of pain points:
- Rigidity: What if I wanted to generate a “quick draft” of the technical spec before asking clarifying questions? That would require a messy rewrite of the core logic. The workflow was set in stone.
- Complexity Creep: Adding a new step or a feedback loop meant carefully inserting it into this delicate sequence, increasing the risk of bugs. The function was becoming a new kind of monolith.
- No Cyclical Behavior: The workflow was strictly a one-way street. It couldn’t loop. For example, I couldn’t implement a “review and refine” cycle where the agent generates an artifact, presents it for review, and then refines it based on new feedback. A `while` loop inside this `run()` method felt like a hack.
- Limited Autonomy: The agent wasn’t truly deciding what to do next. It was just following my handwritten script.
I realized I wasn’t just building an agent; I was manually coding a state machine. And there’s a far better tool for that job: LangGraph.
The Evolution: An Agent as a Navigable Graph
LangGraph is a library for building stateful, multi-actor applications by defining them as graphs. Instead of writing a script of `if/else` statements, you define a set of possible steps (Nodes) and the rules for transitioning between them (Edges).
The agent no longer follows a fixed script. Instead, it navigates through defined states, with its current `AgentState` determining what happens next.
The flexible, cyclical workflow enabled by LangGraph
This graph-based model provides true flexibility and autonomy.
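To make the idea of a shared state concrete, here is a minimal sketch of the kind of `AgentState` the graph might carry. The exact fields are assumptions for illustration, loosely based on the properties used in the snippets below (`artifacts`, `plannedArtifacts`, `humanFeedback`, `projectType`).

```typescript
// A rough sketch of the agent's shared state (field names are illustrative).
// Every node receives this object and returns a partial update to it.
interface AgentState {
  projectDescription: string;                // the user's original request
  projectType?: string;                      // e.g. "web_application" or "cli_tool"
  analysis?: { needsQuestions: boolean };    // output of the analyze step
  clarifyingQuestions?: string[];            // questions to ask the human
  humanFeedback?: { needsRefinement: boolean; comments?: string };
  plannedArtifacts: string[];                // e.g. ["requirements", "technical_spec"]
  artifacts: Record<string, string>;         // artifact name -> generated content
}
```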
The Deep Dive: From Code to Graph
Migrating my orchestrated agent to LangGraph was surprisingly natural because I had already done the hard work of separating concerns. My existing components mapped almost perfectly to LangGraph’s concepts.
1. From a `run()` Method to a Declarative Graph
The biggest change was deleting my complex `run()` method and replacing it with a declarative graph definition.
Before: A hardcoded script
The logic was buried in imperative code (`if`, `await`, `if`, `await`…).
After: A declared structure
The logic is now represented as data: a graph. The `WorkflowOrchestrator` is replaced by the LangGraph engine itself.
```typescript
import { StateGraph } from "@langchain/langgraph";

// 1. Define the shape of our state
// This is similar to how state management works in React - we define a central
// state object that flows through our entire application. Each node can read from
// and update this state, creating a predictable data flow pattern.
const graph = new StateGraph({ channels: AgentStateSchema });

// 2. Add each step as a node
graph.addNode("analyze", analyzeProjectNode);
graph.addNode("generateQuestions", generateQuestionsNode);
graph.addNode("selectArtifacts", selectArtifactsNode);
graph.addNode("generateRequirements", generateRequirementsNode);
// ...

// 3. Set the starting point
graph.setEntryPoint("analyze");

// 4. Define the logic for transitions (edges)
graph.addConditionalEdges("analyze", shouldAskQuestionsRouter);
graph.addEdge("generateQuestions", "selectArtifacts");
// ...

// 5. Compile the graph into a runnable application
const app = graph.compile();
```
The `graph.compile()` method returns a standard LangChain Runnable. This is powerful because it means our entire complex agent workflow can now be invoked, streamed, and even batched just like a simple LLM chain, integrating it seamlessly into the broader ecosystem.
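As a rough sketch of what that looks like in practice (the input fields are placeholders based on the state shape sketched above):

```typescript
// Invoke the compiled graph like any other Runnable.
const finalState = await app.invoke({
  projectDescription: "A CLI tool that turns Markdown notes into flashcards",
});

// Or stream updates as each node finishes - useful for progress UIs.
for await (const chunk of await app.stream({ projectDescription: "..." })) {
  console.log(chunk); // state values/updates per step, depending on stream mode
}
```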
The Insight: My workflow logic is no longer buried in imperative code; it’s a declarative structure that an engine executes. This is more robust, easier to reason about, and infinitely more flexible.
2. From Logic-in-Code to Logic-in-Edges
In the old system, the `AgentPlanner` made decisions, but the `run()` method was responsible for acting on them. LangGraph cleans this up beautifully by moving that decision-making logic into conditional edges.
A conditional edge is a function (a “router”) that inspects the current state and returns the name of the next node to visit.
```typescript
// This function replaces a giant `if/else` block in the old run() method
function artifactRouter(state: AgentState): string {
  const generatedArtifacts = Object.keys(state.artifacts);
  const artifactsToGenerate = state.plannedArtifacts;

  // Find the next artifact that hasn't been created yet
  const nextArtifact = artifactsToGenerate.find(
    (a) => !generatedArtifacts.includes(a)
  );

  if (nextArtifact) {
    // Tell the graph to go to the correct generation node
    return `generate_${nextArtifact}`;
  } else {
    // All artifacts are done, move to the review step
    return "review_and_refine";
  }
}

// Wire it into the graph
graph.addConditionalEdges("selectArtifacts", artifactRouter);
```
This encapsulates the agent’s decision-making power. The agent’s “brain” is no longer scattered across a function but lives in these focused, testable router functions.
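For example, `artifactRouter` can be exercised with a plain fixture and an assertion library, with no LLM call or graph engine involved. The state shape here is the illustrative one sketched earlier:

```typescript
import { strict as assert } from "node:assert";

// Hypothetical fixture: one artifact already generated, one still pending.
const inProgress: AgentState = {
  projectDescription: "Example project",
  plannedArtifacts: ["requirements", "technical_spec"],
  artifacts: { requirements: "..." },
};
assert.equal(artifactRouter(inProgress), "generate_technical_spec");

// Once everything is generated, the router should hand off to review.
const done: AgentState = {
  projectDescription: "Example project",
  plannedArtifacts: ["requirements"],
  artifacts: { requirements: "..." },
};
assert.equal(artifactRouter(done), "review_and_refine");
```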
3. Unlocking Cyclical Behavior (Loops!)
This was the holy grail. With LangGraph, creating loops is trivial. You just add an edge that points back to a previous node. Our “review and refine” loop is a perfect example.
- Node `review`: Ask the human for feedback.
- Edge `should_refine_router`: A router that checks the human’s feedback.
  - If feedback exists, it returns `"refine"`.
  - If the human approves, it returns `"complete"`.
- Node `refine`: A tool call that refines the artifact based on feedback.
- Edge from `refine` back to `review`: After refining, the graph automatically goes back to the human for another round of review.
This simple `review -> refine -> review` cycle was nearly impossible to implement cleanly before but is a natural pattern in LangGraph.
```typescript
// Define the review and refinement nodes
graph.addNode("review", reviewArtifactNode);
graph.addNode("refine", refineArtifactNode);

// Add a conditional edge that creates the loop
graph.addConditionalEdges("review", (state: AgentState) => {
  if (state.humanFeedback && state.humanFeedback.needsRefinement) {
    return "refine";
  }
  return "complete";
});

// Create the loop: after refining, go back to review
graph.addEdge("refine", "review");
```
Real-World Example: Adding a New Feature
To illustrate the power of this approach, let me show you how easy it is to add a completely new feature to the agent.
Requirement: Add a “Security Review” step that only runs for web applications and can be skipped for simple CLI tools.
In the old system, this would require modifying the `run()` method, carefully placing a new `if` statement, and re-testing the entire workflow to ensure nothing broke.
With LangGraph, the change is straightforward:
```typescript
// 1. Add the new node to the graph
graph.addNode("securityReview", securityReviewNode);

// 2. Modify the router after planning artifacts
graph.addConditionalEdges("selectArtifacts", (state: AgentState) => {
  // If it's a web app, go to the new security review step first
  if (state.projectType === "web_application") {
    return "securityReview";
  }
  // Otherwise, proceed as normal
  return artifactRouter(state); // Delegate to the original artifact router
});

// 3. Connect the new node back to the main flow
graph.addConditionalEdges("securityReview", artifactRouter);
```
That’s it. Three simple, declarative additions. The business logic inside the other nodes is untouched. The risk of regression is minimal. This is maintainable AI system design.
Comparing the Two Approaches
Let’s visualize the difference between the hardcoded orchestration and the graph-based approach:
Hardcoded orchestration vs. declarative graph
The key difference is that in the graph approach, the workflow logic is data, not code. This makes it much easier to modify, visualize, and reason about.
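One small illustration of “workflow logic as data”: the wiring can be declared as a plain table and applied in one place. The edge list below is illustrative, not the full graph.

```typescript
// Since edges are just data, they can be declared as a table (or loaded from
// config), reviewed at a glance, and applied in a single loop.
const staticEdges: Array<[from: string, to: string]> = [
  ["generateQuestions", "selectArtifacts"],
  ["refine", "review"],
];

for (const [from, to] of staticEdges) {
  graph.addEdge(from, to);
}
```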
The Payoff: What I Gained
Evolving to LangGraph felt like upgrading from an assembly line to a modern robotics workshop.
- True Flexibility: I can now re-wire the agent’s entire workflow by changing a few lines in the graph definition, without touching the core tool logic.
- Powerful Cyclical Logic: Self-correction, refinement loops, and retry mechanisms are now simple to build. The agent can work on a problem until it’s solved.
- Enhanced Visualization & Debugging: LangGraph integrates with LangSmith, giving me a visual trace of every step, decision, and state change. The agent’s “thought process” is no longer a log file; it’s an interactive diagram.
- Massive Scalability: Adding ten new tools doesn’t make my workflow code more complex. I just add new nodes and the edges to route to them.
- Better Separation of Concerns: Business logic (what each node does) is completely separated from control flow (how nodes connect). This makes the codebase dramatically more maintainable.
- Testability: Each node and router function can be tested in isolation. I no longer need to test a giant orchestration method with dozens of branches.
- Future-Proof Intelligence: As more powerful models are released, my agent automatically gets smarter. Since the agent decides its own path through the graph rather than following hardcoded logic, better reasoning capabilities translate directly to better decision-making—without changing a single line of code.
When Should You Make This Evolution?
Not every agent needs to be a graph. If your agent workflow is truly linear and simple, the orchestration approach is perfectly fine. But consider migrating to LangGraph when you notice:
- Your `run()` method is growing beyond 100 lines
- You’re adding multiple levels of nested `if/else` statements
statements - You need to implement loops or retry logic
- You want to experiment with different workflow variations
- You need better visibility into what your agent is doing
- Multiple team members are working on the workflow logic
Conclusion: From System Design to Graph-Thinking
The journey from a monolithic prompt to an orchestrated agent was about AI system design. This next step—from an orchestrated agent to a dynamic graph—is about adopting graph-thinking.
I’m no longer defining a rigid procedure for the agent to follow. I’m defining a space of possibilities and giving the agent the intelligence to navigate it. The agent is now a truly autonomous entity that can plan, execute, loop, and self-correct on its path to achieving a complex goal.
If you’ve already built your first agent, take a look at its core logic. If you see a growing chain of `if/else` statements, it might be time for your own evolution to the graph.
Our agent now has a flexible, powerful brain. But it’s still an ephemeral prototype that loses all its work if it crashes. In the final part of our series, we’ll make our system truly production-ready by tackling persistence, multi-agent collaboration, and real-time observability.
Next Up: Part 3: Production-Ready Agentic Systems
Series Navigation:
- Part 1: From Monolithic Prompts to Intelligent Agents
- Part 2: From Hardcoded Workflows to Dynamic Graphs (You are here)
LangGraph Agent Cheat Sheet
Here’s a quick reference for building your own LangGraph agents:
Core Concepts
| Concept | Purpose | Example |
| --- | --- | --- |
| State | Central data structure that flows through your graph | `{ messages: [], artifacts: {}, currentStep: "analyze" }` |
| Node | A function that processes state and returns updates | `async function analyzeNode(state) { return { analysis: result }; }` |
| Edge | Defines transitions between nodes | `graph.addEdge("nodeA", "nodeB")` |
| Conditional Edge | Dynamic routing based on state | `graph.addConditionalEdges("router", routerFunction)` |
| Router | Function that returns the next node name | `(state) => state.needsReview ? "review" : "complete"` |
Common Patterns
```typescript
// 1. Basic Linear Flow
graph.addNode("step1", step1Node);
graph.addNode("step2", step2Node);
graph.addEdge("step1", "step2");

// 2. Conditional Branching
graph.addConditionalEdges("decision", (state) => {
  return state.condition ? "pathA" : "pathB";
});

// 3. Loop Pattern (retry/refinement)
graph.addNode("process", processNode);
graph.addNode("validate", validateNode);
graph.addEdge("process", "validate"); // always validate after processing
graph.addConditionalEdges("validate", (state) => {
  return state.isValid ? "complete" : "process"; // Loop back
});

// 4. Human-in-the-Loop
graph.addNode("humanInput", async (state) => {
  // This node will pause execution and wait for input
  return { humanFeedback: await getHumanInput() };
});
```
State Management Best Practices
- Keep state flat and serializable - Avoid nested objects when possible
- Use typed schemas - Define your state shape with TypeScript or Zod
- Return partial updates - Nodes should only return the fields they modify
- Immutable updates - Never mutate state directly, always return new values
```typescript
// Good: Return only what changed
async function myNode(state: AgentState) {
  const result = await processData(state.input);
  return { result }; // Only returns the 'result' field
}

// Bad: Mutating state
async function myNode(state: AgentState) {
  state.result = await processData(state.input); // Don't do this!
  return state;
}
```
Debugging Tips
- Use LangSmith for visual trace debugging
- Add `console.log` statements in router functions to see decision paths
- Test nodes in isolation before wiring them into the graph
- Use the `checkpointer` to save/restore state during development
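To illustrate that last tip, here is roughly what wiring in the in-memory checkpointer looks like (the thread id and input are placeholders):

```typescript
import { MemorySaver } from "@langchain/langgraph";

// Compile with a checkpointer so state is persisted after every step.
const app = graph.compile({ checkpointer: new MemorySaver() });

// A thread_id identifies one run whose state can be resumed later.
const config = { configurable: { thread_id: "dev-session-1" } };
await app.invoke({ projectDescription: "..." }, config);

// Re-invoking with the same thread_id picks up from the last checkpoint
// instead of starting the workflow over from the entry point.
```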
Related Reading:
- AI Automation Business for Developers - Building agent systems with Python and LangGraph