The Hermes AI Agent is an incredibly powerful tool right in your terminal, capable of running commands, reading files, and interacting with your local machine. But what if you want to access its power from outside the
terminal? What if you could programmatically send tasks to your agent?
This is where a core, and sometimes overlooked, feature comes into play: the Hermes API Gateway.
The gateway transforms your personal AI assistant into a full-fledged reasoning engine that can be accessed over your network. It’s the key to unlocking scripted automations and building custom interactions. In this post,
I’ll provide a clear, step-by-step guide on how to enable and use it.
Why Do You Need a Gateway? The “Walled Garden” Problem
By default, your interaction with Hermes happens in one place: your terminal. This is great for direct, interactive use. However, this creates a “walled garden.” The agent can’t be easily controlled by a script, triggered
by an event, or integrated into a larger workflow.
The API Gateway breaks down these walls. By exposing a standard HTTP endpoint, it allows any application that can send a web request to start a conversation with your agent.
The Solution: A Doorway for Your AI
Think of the gateway as a secure, public-facing front door for your agent. Instead of having to be physically at the terminal to type a command, you can send a message to a URL. Hermes receives the message, does the work,
and sends a response right back.
This simple concept opens the door to a whole new level of automation.
The Setup: A Step-by-Step Guide
Getting the gateway running is straightforward. It involves editing one configuration file and then knowing how to “knock” on the new front door.
Step 1: Configure and Enable the Gateway
First, you need to tell Hermes to start the gateway. This is done in the agent’s main configuration file, located at ~/.hermes/config.yaml.
Open this file in your favorite text editor. You will need to find (or add) the gateway section.
```yaml
# ~/.hermes/config.yaml
gateway:
  # This is the master switch. Set it to true to enable the gateway.
  enabled: true

  # The host IP the gateway will listen on.
  # '0.0.0.0' makes it accessible from other machines on your network.
  # '127.0.0.1' (localhost) restricts access to only the same machine.
  host: 0.0.0.0

  # The port it will run on. 5000 is a common default.
  port: 5000

  # SECURITY: This is the most important setting. The gateway is
  # unprotected by default. Set a strong, secret token here.
  api_key: "your-super-secret-and-long-password-here"
```
Security is paramount. The api_key setting is not optional for any serious use. Without it, anyone on your network could access and control your agent. I recommend generating a long, random string to use as your key.
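One quick way to generate such a key is with Python's `secrets` module (any equivalent random-token generator, such as `openssl rand`, works just as well):

```python
import secrets

# Generate a 32-byte, URL-safe random token to use as the gateway api_key.
api_key = secrets.token_urlsafe(32)
print(api_key)
```

Paste the printed value into the `api_key` field of your config, and keep it out of version control.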
Step 2: Restart the Hermes Agent
The changes you made to config.yaml will only apply after you restart the Hermes agent. Stop your current session and start it again.
As Hermes boots up, you should see a new line in the logs confirming that the gateway is active and listening:
```
INFO | uvicorn.main | Started server on http://0.0.0.0:5000
```
This confirms your gateway is live.
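If you would rather verify the listener programmatically than read logs, a plain TCP check does the job. This is just a sketch; adjust the host and port to match your config:

```python
import socket

def gateway_is_listening(host: str = "127.0.0.1", port: int = 5000,
                         timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("gateway up" if gateway_is_listening() else "gateway down")
```

Note this only confirms that a server is bound to the port, not that it is Hermes; the curl test in the next step does the real end-to-end check.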
Step 3: Send Your First API Request
With the gateway running, it’s time to test it. The easiest way is with a curl command from another terminal window. You will send a POST request to the /api/v1/chat endpoint.
Your request must contain two key things:
1. The Authorization header with your secret api_key.
2. A JSON payload with your message and some metadata.
Here is the command structure:
```bash
curl -X POST http://127.0.0.1:5000/api/v1/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-super-secret-and-long-password-here" \
  -d '{
        "platform": "api",
        "chat_id": "api-test-session",
        "user_id": "default-user",
        "prompt": "Hello from the API! Please list the files in the current directory."
      }'
```

Let's break down the JSON data:

- platform: Identifies the source of the message.
- chat_id: This is a crucial field. It groups messages into conversations. Using the same chat_id across multiple requests allows Hermes to remember context, just like in a normal chat.
- user_id: Identifies the user sending the message.
- prompt: The actual task or question for the agent.

After running the command, you will get a JSON response from the agent containing its answer.
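In a script, the same request is only a few lines of Python. This sketch assumes nothing beyond what the curl example shows (the endpoint path, the Bearer header, and the four payload fields); the exact shape of the JSON response depends on your Hermes version, so inspect it before relying on specific keys:

```python
import json
import urllib.request

GATEWAY_URL = "http://127.0.0.1:5000/api/v1/chat"  # adjust host/port to your config
API_KEY = "your-super-secret-and-long-password-here"

def build_chat_request(prompt: str, chat_id: str,
                       user_id: str = "default-user") -> urllib.request.Request:
    """Assemble the POST request the gateway expects."""
    payload = {
        "platform": "api",
        "chat_id": chat_id,  # reuse the same id to keep conversation context
        "user_id": user_id,
        "prompt": prompt,
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

def send(prompt: str, chat_id: str = "api-test-session") -> dict:
    """Send a prompt to the gateway and return the agent's JSON response."""
    with urllib.request.urlopen(build_chat_request(prompt, chat_id)) as resp:
        return json.load(resp)
```

Calling `send()` repeatedly with the same `chat_id` gives you a multi-turn conversation from a script, exactly as described above.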
The Possibilities Are Now Open
You now have a fully functional API for your Hermes agent. This is the foundational building block for countless new applications. Whether you want to write a simple script to automate a repetitive task or build a more
complex system that leverages the agent’s reasoning abilities, you now have the key to do it. You’ve successfully turned your personal AI assistant into a programmable platform.