Build an AI Agent — Chapter 4: Final Chapter
🤖 Agents — The Final Step 🚀🔄✨
We’ve come a long way. By now, our program can call functions and act on its environment — but let’s be honest, it’s still missing the essence of what makes something an agent.
Why? Because it doesn’t yet have a feedback loop.
Right now, the flow is one-and-done:
- The LLM makes a decision
- We run the function
- The result goes back to the LLM
- …and that’s it.
A true agent, however, can reflect on the result, decide if more actions are needed, and keep iterating until the task is complete. That loop — action, reflection, action again — is the heartbeat of an agent.
In this chapter, we’ll give our program exactly that: the ability to learn from its own outputs and keep going until it reaches the goal.
A key part of an “Agent”, as defined by AI-influencer-hype-bros, is that it can continuously use its tools to iterate on its own results. So we’re going to build two things:
- A loop that will call the LLM over and over
- A list of messages in the “conversation”. It will look something like this:
- User: “Please fix the bug in the calculator”
- Model: “I want to call get_files_info…”
- Tool: “Here’s the result of get_files_info…”
- Model: “I want to call get_file_content…”
- Tool: “Here’s the result of get_file_content…”
- Model: “I want to call run_python_file…”
- Tool: “Here’s the result of run_python_file…”
- Model: “I want to call write_file…”
- Tool: “Here’s the result of write_file…”
- Model: “I want to call run_python_file…”
- Tool: “Here’s the result of run_python_file…”
- Model: “I fixed the bug and then ran the calculator to ensure it’s working.”
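In code, that conversation is just a growing list of message objects. Here’s a rough sketch of how it starts and grows, assuming the google-genai SDK’s types.Content and types.Part (the exact construction in your project may differ):

```python
# Rough sketch: the conversation is a growing list of types.Content objects,
# assuming the google-genai SDK (your construction may differ).
from google.genai import types

messages = [
    # The user's request kicks things off.
    types.Content(role="user", parts=[types.Part(text="Please fix the bug in the calculator")]),
    # From here the list grows on every turn:
    #   - each model reply ("I want to call get_files_info...") is appended
    #     from the response's candidates
    #   - each tool result ("Here's the result of get_files_info...") is
    #     appended as another Content entry
]
```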
This is a pretty big step, so take your time!
Assignment
- In generate_content, handle the results of any possible tool use:
- This might already be happening, but make sure that with each call to client.models.generate_content, you're passing in the entire messages list so that the LLM always decides the "next step" based on the current state of the conversation.
- After calling the client’s generate_content method, check the .candidates property of the response. It's a list of response variations (usually just one), and each candidate contains the equivalent of "I want to call get_files_info...", so we need to add it to our conversation. Iterate over the candidates and append each one's .content to your messages list.
- After each actual function call, use types.Content to convert the function_responses into a message with a role of user, and append it to your messages list.
- Next, instead of calling generate_content only once, create a loop to call it repeatedly (a sketch of the full loop appears after the example output below).
- Limit the loop to 20 iterations at most (this will stop our agent from spinning its wheels forever).
- Use a try-except block and handle any errors accordingly.
- After each call to generate_content, check whether the response has a .text value. If it does, the model has finished, so print this final response and break out of the loop.
- Otherwise, iterate again (unless max iterations was reached, of course).
- Test your code (duh). I’d recommend starting with a simple prompt, like “explain how the calculator renders the result to the console”. This is what I got:
(aiagent) rjdp@RJDP:/mnt/d/aiagent/aiagent$ uv run main.py "how does the calculator render results to the console?"
- Calling function: get_files_info
- Calling function: get_file_content
- Calling function: get_files_info
- Calling function: get_file_content
Final response:
Okay, I've examined the `render` function in `pkg/render.py`. Here's how it works:
1. **Formats the result:** It first checks if the result is a float that can be represented as an integer. If so, it converts it to an integer string. Otherwise, it converts the result to a string.
2. **Calculates box width:** It determines the width of the box that will surround the expression and result, based on the longer of the two strings.
3. **Builds the box:** It creates a list of strings, each representing a line of the box. This includes the top and bottom borders, the expression, an equals sign, and the result, all padded with spaces to fit within the box.
4. **Joins the lines:** Finally, it joins the lines of the box with newline characters to create a single string that can be printed to the console.
So, the calculator renders results to the console by formatting the expression and result into a box-like structure using ASCII characters. The `render` function takes the expression and result as input and returns the formatted string, which is then printed to the console in `main.py`.
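Here’s a minimal sketch of what the full loop can look like, assuming the google-genai SDK. The model name, config object, and call_function are placeholders for whatever you built in the earlier chapters — your names will likely differ:

```python
# Minimal agent-loop sketch (assumes the google-genai SDK).
# `call_function` stands in for however your project executes a requested tool,
# and `config` is your GenerateContentConfig (system prompt + tool declarations).
from google.genai import types

MAX_ITERATIONS = 20  # stops the agent from spinning its wheels forever


def run_agent(client, messages, config):
    for _ in range(MAX_ITERATIONS):
        try:
            response = client.models.generate_content(
                model="gemini-2.0-flash-001",  # whichever model you've been using
                contents=messages,             # always pass the ENTIRE conversation
                config=config,
            )

            # Add every candidate's content ("I want to call X...") to the conversation.
            for candidate in response.candidates:
                messages.append(candidate.content)

            # If the model produced plain text and no tool calls, it's done.
            if response.text and not response.function_calls:
                print("Final response:")
                print(response.text)
                return

            # Otherwise, execute each requested tool and feed the result back
            # as a new message with role "user".
            for function_call in response.function_calls or []:
                function_result = call_function(function_call)  # your dispatcher
                messages.append(
                    types.Content(
                        role="user",
                        parts=[
                            types.Part.from_function_response(
                                name=function_call.name,
                                response={"result": function_result},
                            )
                        ],
                    )
                )
        except Exception as e:
            print(f"Error during generation: {e}")
            return

    print("Max iterations reached without a final response.")
```

The extra `not response.function_calls` guard is optional, but it avoids stopping early if the model ever emits explanatory text alongside a tool call.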
You may or may not need to make adjustments to your system prompt to get the LLM to behave the way you want. You’re a prompt engineer now, so act like one!
Update Code
Time for the coup de grâce!
Let’s test our agent’s ability to actually fix a bug all on its own.
Assignment
- Manually update calculator/pkg/calculator.py and change the precedence of the + operator to 3.
- Run the calculator app to make sure it’s now producing incorrect results: uv run calculator/main.py "3 + 7 * 2" (this should be 17, but because we broke it, it says 20)
- Run your agent and ask it to "fix the bug: 3 + 7 * 2 shouldn't be 20"
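Why does that one change flip the answer? With normal precedence, * binds tighter than +, so the calculator computes 3 + (7 * 2) = 17. Giving + a higher precedence makes it group (3 + 7) * 2 = 20. If your calculator keeps precedence in a dict (a hypothetical sketch — the real calculator/pkg/calculator.py may look different), the broken version is something like:

```python
# Hypothetical illustration only -- your calculator/pkg/calculator.py will differ.
# With "+" bumped above "*", the evaluator groups (3 + 7) first:
#   broken:  (3 + 7) * 2 = 20
#   correct:  3 + (7 * 2) = 17
precedence = {
    "+": 3,  # deliberately broken; normally lower than "*"
    "-": 1,
    "*": 2,
    "/": 2,
}
```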
🎯 Wrapping Up the Series
Congratulations — you’ve just built your very own AI Agent from scratch! 🎉
Over the course of this series, we’ve gone from simple directory listings all the way to giving our agent the power to read, write, run, and iterate on real code — complete with a feedback loop that makes it feel truly alive.
But this is just the beginning. Now that you’ve got the foundations in place, you can safely explore more advanced directions:
- 🐞 Challenge your agent with harder bugs to fix
- 🔧 Refactor and optimize existing code
- ✨ Add new features to your project
- 🌐 Experiment with different LLM providers or other Gemini models
- 🛠️ Expand its toolbox with more functions
- 📂 Try it out on different codebases (always commit first so you can roll back!)
⚠️ One last reminder: this is a toy version of tools like Cursor, Zed’s Agentic Mode, or Claude Code. Even professional tools aren’t perfectly secure. Be very cautious about giving an LLM direct access to your filesystem or Python interpreter — and definitely don’t share this code for others to use without safeguards.
What you’ve built here is more than just a project — it’s a window into the future of AI-assisted development. 🚀
Stay curious, keep experimenting, and who knows — maybe the next breakthrough in agentic coding will come from you. ✨
Here’s the GitHub repository where you can find the full code for this project:
https://github.com/RajdeepKushwaha5/AI-Agent