Building and running agents
Terminology
An "agent" is an LLM-powered program that is defined in code.
The persistent history of your interactions with an agent constitutes a thread. The LLM context is preserved for as long as the thread session continues. When a new thread is started, the context returns to its initial state.
Each interaction of: user request -> agent thinking -> agent response is called a run. A single thread can include many runs.
As your agent operates, it may take multiple steps in order to complete a run. Generally each step results in either a completion (the generation of some text), a function call, or both together. You can trace the steps of your agent inside a run by reviewing the logs the agent generates.
Construct an Agent like this:
from agentic.common import Agent

def weather_tool():
    return "The weather is nice today."

agent = Agent(
    name="Basic Agent",
    welcome="I am a simple agent here to help answer your weather questions.",
    instructions="You are a helpful assistant.",
    model="openai/gpt-4o-mini",
    tools=[weather_tool],
)
The instructions set the "system prompt" for the LLM. The welcome message is just a string that can be displayed to the end user to help them use the agent.
Optional parameters to your agent include:
max_tokens - The maximum number of tokens to generate on each completion
memories - A list of facts to inject into the Agent's context for every Run
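Putting the optional parameters together, a constructor call might look like this (the specific token limit and memory strings here are just examples):

```python
from agentic.common import Agent

agent = Agent(
    name="Basic Agent",
    instructions="You are a helpful assistant.",
    model="openai/gpt-4o-mini",
    max_tokens=500,                          # cap tokens generated per completion
    memories=["The user's name is Scott."],  # injected into context on every run
)
```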
See models for information on using different models.
See tools for information on creating and using tools.
Secrets
When your agent runs it will likely need API keys for various services. You can set these keys in your environment, but this approach becomes unwieldy with many keys.
Agentic includes a simple system for managing secrets. They are stored encrypted in a local SQLite database file (inside ~/.agentic).
agentic list-secrets - list your secrets
agentic set-secret <secret name> <value> - store a secret
agentic get-secret <secret name> - retrieve a secret
All secrets are automatically injected into the environment when your agent runs, but it is recommended to retrieve values from the RunContext using get_config and get_secret.
One nice feature is that secrets can be stored in a namespace named after your agent, so that you can manage multiple values across different agents.
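To illustrate the namespacing idea (this is a sketch only, not Agentic's actual schema, and the real store also encrypts values), a per-agent secrets table in SQLite might look like:

```python
import sqlite3

# Illustrative sketch only -- values are stored in plain text here,
# whereas Agentic encrypts them.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE secrets (namespace TEXT, name TEXT, value TEXT, "
    "PRIMARY KEY (namespace, name))"
)

def set_secret(namespace: str, name: str, value: str) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO secrets VALUES (?, ?, ?)",
        (namespace, name, value),
    )

def get_secret(namespace: str, name: str):
    row = conn.execute(
        "SELECT value FROM secrets WHERE namespace = ? AND name = ?",
        (namespace, name),
    ).fetchone()
    return row[0] if row else None

# Two agents can hold different values under the same key name:
set_secret("Basic Agent", "WEATHER_API_KEY", "key-1")
set_secret("Other Agent", "WEATHER_API_KEY", "key-2")
```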
Using AgentRunner and the REPL
The AgentRunner class is a convenience utility for running a REPL to interact with your agent:
from agentic.common import Agent, AgentRunner

agent = Agent(...)

if __name__ == '__main__':
    AgentRunner(agent).run_repl()
By default it maintains a persistent Thread (session) with your agent, so that each turn appends to the active thread.
% python examples/basic_agent.py
I am a simple agent here to help answer your weather questions.
[Basic Agent]> my name is scott
Hello, Scott! How can I assist you today?
[Basic Agent]> what is my name ?
Your name is Scott.
[openai/gpt-4o-mini: 1 calls, tokens: 12 -> 5, 0.00 cents, time: 0.73s tc: 0.00 c, ctx: 40]
The runner REPL includes a set of "dot" system commands:
> .help
.agent - Show the state of the active agent
.run <agent name> - switch the active agent
.history - show the history of the current session
.debug [<level>] - enable debug. Defaults to 'tools', or one of 'llm', 'tools', 'all', 'off'
.settings - show the current config settings
.model - switch the active LLM model
.help - Show this help
.quit - Quit the REPL
Examples:
[Basic Agent]> .agent
Basic Agent
You are a helpful assistant.
tools:
weather_tool
The .debug command is especially helpful to activate different kinds of tracing:
.debug tools - Shows logging for tool start/finish events
.debug llm - Shows all LLM completion calls
.debug agents - Only log events where an agent starts a run
.debug all - Logs everything
We often run with .debug tools to track what our agents are doing. You can also set the debug value via the AGENTIC_DEBUG env var.
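For example, assuming the env var accepts the same values as the .debug command:

```shell
AGENTIC_DEBUG=tools python examples/basic_agent.py
```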
Things to note
We have used the convenience repl_loop in AgentRunner to interface with our agent. But we can write our own loop (or API, or whatever) to run our agent:
while True:
    prompt = input("> ")
    request_id = agent.start_request(prompt).request_id
    for event in agent.get_events(request_id):
        print("Agent event: ", event)
The get_events function will keep emitting events until the current run of the agent is complete. We can loop again and let the user request another task from the agent.
Because you are getting fine-grained events as the agent runs, you can choose to do other things in the middle, including modifying the agent by giving it more tools. Even though this interface looks like the agent is "running" on some thread (as in LangChain), in fact the agent runs step by step, generating events along the way, and it can stop at any time.