The goal for this blog is to initialize two LLM instances that can collaborate with each other to complete a task I assign them. In my first blog post, I examined a couple of approaches to building multi-agent systems. In it, I concluded that the LangChain ecosystem in Python was the best way forward. You can read that blog here.
To achieve the goal of having two LLM instances collaborate, I broke the process down into three steps:
1. Find a LangChain ecosystem tutorial that has examples of multiple agents communicating, then adapt it to formulate a simple multi-agent script.
2. Load Ollama LLMs as LangChain agents.
3. Have multiple agents collaborate on a simple writing task.
I’ll need to give these agents a task to collaborate on. Specifically, I’ll create an author LLM with Llama 3.2 and an editor LLM with Deepseek-R1, and ask them to work together to draft a tweet (or Bluesky message) promoting a previous blog post of mine.
Step 1: Multi-Agent Collaboration in LangChain & LangGraph
After some searching through the LangChain documentation, I came across a promising approach with LangGraph, a Python library for building multi-actor applications with LLMs that’s part of the LangChain ecosystem. I found this tutorial on creating a multi-agent system, and it provided the perfect foundation for my project.
However, the tutorial includes functionality that I’m not quite ready to dive into—namely creating tools for agents to use. It also uses Anthropic models in its sample script, while I want to use Ollama models so I can try out the Llama and DeepSeek LLMs that I got running in my previous blog posts (see my Llama and DeepSeek installation guides).
Step 2: Loading an Ollama LLM as a LangGraph Agent
In the LangGraph tutorial, LLMs are loaded as "react agents." These agents can call tools and can be extended to have persistent chat memory (for example, storing previous chat history). There are likely other useful features that I'll discover as I dig deeper, but for this blog we’ll stop there. I want to load Llama 3.2 as a react agent in this project.
from langchain_ollama import ChatOllama

llm1 = ChatOllama(
    model="llama3.2",
    temperature=0,
    # other parameters if desired...
)
Next, I’ll load up a react agent with a base system prompt, defined by the `make_system_prompt` function, which is essentially just a base set of instructions for the agents.
from langgraph.prebuilt import create_react_agent

def make_system_prompt(suffix: str) -> str:
    return (
        "You are a helpful AI assistant, collaborating with other assistants."
        " Use the provided tools to progress towards answering the question."
        " If you are unable to fully answer, that's OK; another assistant with different tools"
        " will help where you left off. Execute what you can to make progress."
        " If you or any of the other assistants have the final answer or deliverable,"
        " prefix your response with FINAL ANSWER so the team knows to stop."
        f"\n{suffix}"
    )

author_agent = create_react_agent(
    llm1,
    tools=[],
    state_modifier=make_system_prompt(
        "You will author a tweet sharing a link to a blog post. You will work with an editor to finesse the tweet."
        " Update your tweet draft based on suggestions and critiques from your editor. If you disagree with the suggestions, say so."
    ),
)
And that’s it! The author agent is created. Now for the final (and most challenging) step: getting another LLM instance defined and having them communicate with each other.
Step 3: Multiple-Agent Collaboration
LangGraph allows you to create a network (or, mathematically speaking, a graph) where agents are “nodes” and their communication channels are “edges”. That’s about all we need to know about graphs for this project, but I will likely explore more advanced graph theory in future posts. (Graph theory is extremely powerful and something I used extensively during my PhD thesis in physics, so I’m excited to apply it here for multi-agent systems!)
First, let’s create a second LLM instance and react agent. For this second instance, I'll use a DeepSeek model:
llm2 = ChatOllama(
    model="deepseek-r1:7b",
    temperature=0,
    # other parameters...
)

editor_agent = create_react_agent(
    llm2,
    tools=[],
    state_modifier=make_system_prompt(
        "You criticize tweets and suggest edits to maximize engagement."
        " Your colleague does not have to agree with your suggestions, but try to come to an agreement on the final tweet."
    ),
)
Next, I’ll create the two nodes (author and editor). Below is the author node, and I’ll walk through what it all means.
from typing import Literal

from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import MessagesState, END
from langgraph.types import Command

def get_next_node(last_message: BaseMessage, goto: str):
    if "FINAL ANSWER" in last_message.content:
        # Any agent decided the work is done
        return END
    return goto

def author_node(
    state: MessagesState,
) -> Command[Literal["editor_agent", END]]:
    result = author_agent.invoke(state)
    goto = get_next_node(result["messages"][-1], "editor_agent")
    # Wrap the last message as a HumanMessage object because some providers
    # don't allow an AI message in the last position of the input messages list
    result["messages"][-1] = HumanMessage(
        content=result["messages"][-1].content,
        name="author_agent",
    )
    return Command(
        update={
            # Share the internal message history of the author agent with other agents
            "messages": result["messages"],
        },
        goto=goto,
    )
Here’s what’s happening:
- `get_next_node()` scans the last message for the phrase `"FINAL ANSWER"`. If it finds it, we end the conversation by returning `END`. Otherwise, we continue to the next agent.
- `author_node()` is a function defining how the author node operates in the graph. It:
  1. Invokes the `author_agent` with the current conversation state (`author_agent.invoke(state)`).
  2. Passes the output through `get_next_node()` to check if we should terminate.
  3. Wraps the last message as a `HumanMessage`, accommodating any LLM that doesn’t allow an AI message as the final input.
  4. Returns a `Command` with the updated messages and the next node (`goto`).
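The control flow is easier to see stripped of the LLM machinery. Below is a toy, stdlib-only sketch of the same handoff loop, with plain functions standing in for the agents. Everything here (`toy_author`, `toy_editor`, the canned replies) is made up for illustration and is not LangGraph API:

```python
# Toy sketch of the handoff logic: two stub "agents" pass a message list
# back and forth until one of them emits FINAL ANSWER.
END = "__end__"  # LangGraph's END is a sentinel string much like this

def get_next_node(last_message: str, goto: str) -> str:
    # Stop the conversation once any agent declares a final answer
    if "FINAL ANSWER" in last_message:
        return END
    return goto

def toy_author(messages):
    # Always produces a new draft
    return messages + [f"draft tweet v{len(messages)}"]

def toy_editor(messages):
    # Pretend the editor is satisfied once the thread is long enough
    if len(messages) >= 4:
        return messages + ["FINAL ANSWER: ship it"]
    return messages + ["please tighten the hook"]

messages = ["write a tweet about my blog"]
node = "author"
while node != END:
    agent = toy_author if node == "author" else toy_editor
    messages = agent(messages)
    node = get_next_node(messages[-1], "editor" if node == "author" else "author")

print(messages[-1])  # FINAL ANSWER: ship it
```

The real graph works the same way: each node appends to the shared message history and names its successor, until the `FINAL ANSWER` sentinel routes the run to `END`.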
The editor node is nearly identical:
def editor_node(state: MessagesState) -> Command[Literal["author_agent", END]]:
    result = editor_agent.invoke(state)
    goto = get_next_node(result["messages"][-1], "author_agent")
    result["messages"][-1] = HumanMessage(
        content=result["messages"][-1].content,
        name="editor_agent",
    )
    return Command(
        update={
            # Share internal message history with the author agent
            "messages": result["messages"],
        },
        goto=goto,
    )
Everything here mirrors `author_node()`, except that the editor node hands control back to the author node.
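Since the two node functions differ only in which agent they invoke and where control goes next, you could generate both from a single factory. Here is a stdlib-only sketch of that refactor with stub agents standing in for the react agents; `make_node` and the lambdas are my own illustration, not part of LangGraph:

```python
# A factory that builds a node function from (agent, name, next_node).
# Stub agents are plain callables here; in the real script they would be
# the compiled react agents.
END = "__end__"

def make_node(agent, name, goto):
    def node(messages):
        reply = agent(messages)
        next_node = END if "FINAL ANSWER" in reply else goto
        # Tag the reply with the producing node's name, mirroring the
        # HumanMessage(name=...) wrapping in the real nodes
        return messages + [(name, reply)], next_node
    return node

author_fn = make_node(lambda msgs: f"draft v{len(msgs)}", "author_agent", "editor_agent")
editor_fn = make_node(lambda msgs: "FINAL ANSWER: looks good", "editor_agent", "author_agent")

msgs, nxt = author_fn([("user", "write a tweet")])
msgs, nxt = editor_fn(msgs)
print(nxt)  # __end__
```

This keeps the routing and message-wrapping logic in one place, so adding a third agent later only means one more `make_node` call.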
Creating the Multi-Agent Graph
from langgraph.graph import StateGraph, START
workflow = StateGraph(MessagesState)
workflow.add_node("author_agent", author_node)
workflow.add_node("editor_agent", editor_node)
workflow.add_edge(START, "author_agent")
graph = workflow.compile()
- `StateGraph` is “a graph whose nodes communicate by reading and writing to a shared state,” per the docs.
- We add two nodes: `author_agent` and `editor_agent`.
- We define the start of the workflow, pointing to the `author_agent`.
- Finally, we compile the graph.
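To make the “shared state” idea concrete, here is a minimal stdlib-only sketch of the concept: nodes are functions that read and write one state dict, and an edge table decides what runs next. This illustrates the idea only; it is not how LangGraph is actually implemented, and all names here are invented:

```python
# Nodes read and write a shared state dict and name their successor;
# the runner just follows the chain from START until a node returns END.
START, END = "__start__", "__end__"

def run_graph(nodes, edges, state):
    current = edges[START]
    while current != END:
        state, current = nodes[current](state)
    return state

def greeter(state):
    state["messages"].append("hello from greeter")
    return state, "closer"

def closer(state):
    state["messages"].append("goodbye from closer")
    return state, END

final = run_graph(
    {"greeter": greeter, "closer": closer},
    {START: "greeter"},
    {"messages": []},
)
print(final["messages"])  # ['hello from greeter', 'goodbye from closer']
```

In the real workflow, `MessagesState` plays the role of the shared dict and each node's `Command(goto=...)` plays the role of the returned successor.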
Bringing the Agents to Life
Now we need the agents in the graph to talk to each other. We do that by streaming through the graph with our initial message:
events = graph.stream(
    {
        "messages": [
            (
                "user",
                "Work together to write a tweet that would get people to click a link to my blog: " + last_blog,
            )
        ],
    },
    # Maximum number of steps
    {"recursion_limit": 150},
)
Here, `last_blog` is simply the text of a previous blog post of mine (the one explaining how to install Llama LLMs and run them with Python).
Depending on your GPU’s power, this step can take a while. On my own setup, it finishes in about 2–3 minutes.
Step 4: Results
To view the chat history, run:
from IPython.display import Markdown, display

def print_markdown(text):
    display(Markdown(text))

for s in events:
    print(list(s.keys())[0])
    print_markdown(s[list(s.keys())[0]]['messages'][-1].content)
    print("----")
Here’s the final tweet the agents produced:
🚀 Unlock AI Power: Set up Ollama & LangChain on Windows 10 🎮💻 Discover step-by-step installation tips and optimize performance with a powerful GPU! 💻✨ Stay tuned for insights on multi-agent systems! 👉 Click here to read my full guide 💡
I’ll share the full chat history below for the interested reader, but it only took the bots six iterations to reach this result, which is pretty cool! It’s also really interesting to see the difference between the reasoning model (DeepSeek-R1) and the vanilla LLM (Llama 3.2): the reasoning model’s turns are full of internal monologue, which actually surprised me.
Next steps are to beef up the tools the agents have access to: web search, the ability to execute code (safely), and any other ideas I come up with in the meantime.
Appendix: Full chat history
author_agent
Okay, so I need to help author a tweet that encourages people to click a blog link about setting up a local LLM network with Ollama and LangChain on Windows 10. The user has already drafted some content but wants it refined by an editor.
First, looking at the draft, it's quite detailed with step-by-step instructions. But for a tweet, that's too much. I should condense it into a catchy headline and a brief summary of what the blog covers without going into too much technical detail.
The user mentioned using a gaming PC with a powerful GPU, which is interesting because it shows the setup was optimized. That could be a point to highlight—using a high-end GPU for better performance.
I should also include emojis to make it more engaging and friendly. Maybe something like 🚀 or 💻 to grab attention.
The original draft starts with an introduction about exploring options for creating networks of LLMs, which is good but might not be as catchy in a tweet. I can rephrase that to something more action-oriented, like "Ready to take your AI game to the next level?"
I should also emphasize why someone would care—maybe it's accessible and doesn't require expensive hardware beyond what most people have.
Including a call to action is important too, prompting readers to check out the blog or follow for updates. Maybe something like "Click here to read my full guide" with an emoji.
Let me structure this into two parts: the headline and the body. The headline should be engaging, maybe starting with a question or a promise of value. The body can briefly mention the key points without too much detail but enough to entice clicks.
I'll also need to make sure it's concise—tweets are usually around 280 characters, so every word counts. Maybe start with "Discover how..." and then quickly outline what the blog covers.
Finally, I should check if there are any typos or grammatical errors in my version before finalizing.
🚀 Ready to take your AI game to the next level? Learn how to set up a local LLM network with Ollama and LangChain on your Windows 10 gaming PC 🎮✨. Whether you're a tech enthusiast or just curious, this guide walks you through installing powerful models like Llama 2 using a high-end GPU for optimized performance. Stay tuned for more insights on multi-agent systems! Click here to read my full guide 💡.
editor_agent
Refined Tweet
Ready to unlock the power of local LLMs? 🚀 Discover how to set up a network with Ollama and LangChain on your Windows 10 gaming PC, optimized for high-performance with a powerful GPU! 💻🎮 Learn how to install and use Llama 2 models like a pro. Click here to read my full guide 👉💡
Changes Made:
Condensed the text into a catchy headline and brief summary
Emphasized the benefits of using a high-end GPU for optimized performance
Added emojis to make it more engaging and friendly (🚀, 💻, 🎮, ✨)
Started with a question that grabs attention ("Ready to unlock...")
Kept the call-to-action "Click here to read my full guide" with an emoji
Ensured the tweet is concise within the 280 character limit
Next Steps:
My colleague can review this revised tweet for any final tweaks or suggestions. If there's no major feedback, we can consider it finalized and share it on our social media channels to promote the blog post.
If needed, we can also suggest some alternative options, such as:
Using a more attention-grabbing opening line
Emphasizing the accessibility of the guide for non-tech enthusiasts
Highlighting the potential applications or use cases for local LLM networks
Let's get feedback from our colleague and finalize the tweet!
author_agent
Alright, let me walk through how I would approach refining this tweet based on the user's request.
First, I need to understand the goal: create a tweet that encourages people to click a link to a blog post about setting up local LLM networks with Ollama and LangChain. The original draft is detailed but too lengthy for a tweet.
I'll start by identifying the key points from the blog post:
Using Ollama on Windows 10 with a powerful GPU
Step-by-step installation guide
Example code using Jupyter Notebook
Verification of GPU usage
Next, I need to condense these into a concise and engaging tweet. I'll use emojis to make it more appealing and start with an action-oriented statement.
I should also emphasize why someone would care about this topic—perhaps its accessibility or potential applications for non-tech enthusiasts.
Finally, include a call-to-action prompting readers to check out the blog.
🚀 Ready to take your AI game to the next level? Discover how to set up a local LLM network with Ollama and LangChain on Windows 10 using a powerful GPU! 💻✨ Learn step-by-step installation tips and see my example code in action. Stay tuned for more insights on creating multi-agent systems! Click here to read my full guide 👉💡
Final Answer:
🚀 Ready to take your AI game to the next level? Discover how to set up a local LLM network with Ollama and LangChain on Windows 10 using a powerful GPU! 💻✨ Learn step-by-step installation tips and see my example code in action. Stay tuned for more insights on creating multi-agent systems! Click here to read my full guide 👉💡
editor_agent
Refined Tweet
🚀 Unlock AI Power: Set up Ollama & LangChain on Windows 10 🎮💻 Discover step-by-step installation tips and optimize performance with a powerful GPU! 💻✨ Stay tuned for multi-agent system insights! 👉 Click here to read my full guide 💡
Changes Made:
Emphasized the benefits of using Ollama and LangChain on Windows 10
Added emojis to make it more engaging (🚀, 🎮, 💻, ✨)
Started with a concise and action-oriented statement ("Unlock AI Power")
Kept the call-to-action "Click here to read my full guide" with an emoji
Ensured the tweet is concise within the 280 character limit
Next Steps:
I made some minor adjustments to make the tweet even more engaging. I added a few words to make it clear that the reader will gain insights into multi-agent systems, and I removed the phrase "take your AI game to the next level" as it's a bit generic.
My colleague can review this revised tweet for any final tweaks or suggestions. If there's no major feedback, we can consider it finalized and share it on our social media channels to promote the blog post.
If needed, we can also suggest some alternative options, such as:
Using a more attention-grabbing opening line
Emphasizing the accessibility of the guide for non-tech enthusiasts
Highlighting the potential applications or use cases for local LLM networks
Let's get feedback from our colleague and finalize the tweet!
author_agent
Alright, let me walk through how I would approach refining this tweet based on the user's request.
First, I need to understand the goal: create a tweet that encourages people to click a link to a blog post about setting up local LLM networks with Ollama and LangChain. The original draft is detailed but too lengthy for a tweet.
I'll start by identifying the key points from the blog post:
Using Ollama on Windows 10 with a powerful GPU
Step-by-step installation guide
Example code using Jupyter Notebook
Verification of GPU usage
Next, I need to condense these into a concise and engaging tweet. I'll use emojis to make it more appealing and start with an action-oriented statement.
I should also emphasize why someone would care about this topic—perhaps its accessibility or potential applications for non-tech enthusiasts.
Finally, include a call-to-action prompting readers to check out the blog.
🚀 Ready to take your AI game to the next level? Discover how to set up a local LLM network with Ollama and LangChain on Windows 10 using a powerful GPU! 💻✨ Learn step-by-step installation tips and see my example code in action. Stay tuned for more insights on creating multi-agent systems! Click here to read my full guide 👉💡
Final Answer:
🚀 Unlock AI Power: Set up Ollama & LangChain on Windows 10 🎮💻 Discover step-by-step installation tips and optimize performance with a powerful GPU! 💻✨ Stay tuned for insights on multi-agent systems! 👉 Click here to read my full guide 💡
Refined Tweet
🚀 Unlock AI Power: Set up Ollama & LangChain on Windows 10 🎮💻 Discover step-by-step installation tips and optimize performance with a powerful GPU! 💻✨ Stay tuned for insights on multi-agent systems! 👉 Click here to read my full guide 💡
Changes Made:
Emphasized the benefits of using Ollama and LangChain on Windows 10
Added emojis to make it more engaging (🚀, 🎮, 💻, ✨)
Started with a concise and action-oriented statement ("Unlock AI Power")
Kept the call-to-action "Click here to read my full guide" with an emoji
Ensured the tweet is concise within the 280 character limit
Next Steps:
I made some minor adjustments to make the tweet even more engaging. I added a few words to make it clear that the reader will gain insights into multi-agent systems, and I removed the phrase "take your AI game to the next level" as it's a bit generic.
My colleague can review this revised tweet for any final tweaks or suggestions. If there's no major feedback, we can consider it finalized and share it on our social media channels to promote the blog post.
If needed, we can also suggest some alternative options, such as:
Using a more attention-grabbing opening line
Emphasizing the accessibility of the guide for non-tech enthusiasts
Highlighting the potential applications or use cases for local LLM networks
Let's get feedback from our colleague and finalize the tweet!
editor_agent
FINAL ANSWER
🚀 Unlock AI Power: Set up Ollama & LangChain on Windows 10 🎮💻 Discover step-by-step installation tips and optimize performance with a powerful GPU! 💻✨ Stay tuned for insights on multi-agent systems! 👉 Click here to read my full guide 💡