Unlocking Hermes: A Guide to Setting Up the API Gateway

The Hermes AI Agent is an incredibly powerful tool right in your terminal, capable of running commands, reading files, and interacting with your local machine. But what if you want to access its power from outside the
terminal? What if you could programmatically send tasks to your agent?

This is where a core, and sometimes overlooked, feature comes into play: the Hermes API Gateway.

The gateway transforms your personal AI assistant into a full-fledged reasoning engine that can be accessed over your network. It’s the key to unlocking scripted automations and building custom interactions. In this post,
I’ll provide a clear, step-by-step guide on how to enable and use it.
Why Do You Need a Gateway? The “Walled Garden” Problem

By default, your interaction with Hermes happens in one place: your terminal. This is great for direct, interactive use. However, this creates a “walled garden.” The agent can’t be easily controlled by a script, triggered
by an event, or integrated into a larger workflow.

The API Gateway breaks down these walls. By exposing a standard HTTP endpoint, it allows any application that can send a web request to start a conversation with your agent.
The Solution: A Doorway for Your AI

Think of the gateway as a secure, public-facing front door for your agent. Instead of having to be physically at the terminal to type a command, you can send a message to a URL. Hermes receives the message, does the work,
and sends a response right back.

This simple concept is incredibly powerful and opens the door to a new level of automation.
The Setup: A Step-by-Step Guide

Getting the gateway running is straightforward. It involves editing one configuration file and then knowing how to “knock” on the new front door.

Step 1: Configure and Enable the Gateway

First, you need to tell Hermes to start the gateway. This is done in the agent’s main configuration file, located at ~/.hermes/config.yaml.

Open this file in your favorite text editor. You will need to find (or add) the gateway section.

```yaml
# ~/.hermes/config.yaml

gateway:
  # This is the master switch. Set it to true to enable the gateway.
  enabled: true

  # The host IP the gateway will listen on.
  # '0.0.0.0' makes it accessible from other machines on your network.
  # '127.0.0.1' (localhost) restricts access to only the same machine.
  host: 0.0.0.0

  # The port it will run on. 5000 is a common default.
  port: 5000

  # SECURITY: This is the most important setting. The gateway is
  # unprotected by default. Set a strong, secret token here.
  api_key: "your-super-secret-and-long-password-here"
```

Security is paramount. The api_key setting is not optional for any serious use. Without it, anyone on your network could access and control your agent. I recommend generating a long, random string to use as your key.
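One quick way to generate a suitably long random token is with openssl (assuming it is installed; any source of a long random string works just as well):

```shell
# Print a 64-character random hex string to paste into api_key
openssl rand -hex 32
```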

Step 2: Restart the Hermes Agent

The changes you made to config.yaml will only apply after you restart the Hermes agent. Stop your current session and start it again.

As Hermes boots up, you should see a new line in the logs confirming that the gateway is active and listening:

INFO | uvicorn.main | Started server on http://0.0.0.0:5000

This confirms your gateway is live.

Step 3: Send Your First API Request

With the gateway running, it’s time to test it. The easiest way is with a curl command from another terminal window. You will send a POST request to the /api/v1/chat endpoint.

Your request must contain two key things:
1. The Authorization header with your secret api_key.
2. A JSON payload with your message and some metadata.

Here is the command structure:

```bash
curl -X POST http://127.0.0.1:5000/api/v1/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-super-secret-and-long-password-here" \
  -d '{
    "platform": "api",
    "chat_id": "api-test-session",
    "user_id": "default-user",
    "prompt": "Hello from the API! Please list the files in the current directory."
  }'
```

Let’s break down the JSON data:

- platform: Identifies the source of the message.
- chat_id: This is a crucial field. It groups messages into conversations. Using the same chat_id across multiple requests allows Hermes to remember context, just like in a normal chat.
- user_id: Identifies the user sending the message.
- prompt: The actual task or question for the agent.
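Because chat_id carries the conversation context, a follow-up request can build on the first one. Here is a sketch of just the JSON payload for a second request in the same session (the prompt is illustrative):

```json
{
  "platform": "api",
  "chat_id": "api-test-session",
  "user_id": "default-user",
  "prompt": "Thanks! Now read the first file you listed and summarise it."
}
```

Because the chat_id matches the earlier request, Hermes can resolve “the first file you listed” from the conversation history.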

After running the command, you will get a JSON response from the agent containing its answer.
The Possibilities Are Now Open

You now have a fully functional API for your Hermes agent. This is the foundational building block for countless new applications. Whether you want to write a simple script to automate a repetitive task or build a more
complex system that leverages the agent’s reasoning abilities, you now have the key to do it. You’ve successfully turned your personal AI assistant into a programmable platform.

Installed Radicale3 for hosting my own calendar and contacts data

So I finally got around to installing something so I can host my own calendar and contacts, so that data is not shared with the big four corporate companies.

As I just need basic calendar and contacts functionality, I chose the open source project Radicale. I have run Fedora to control the house and my private data since 1998, so I first installed the Radicale RPM via DNF.

dnf install radicale

This should install Radicale 3, as that is the latest version at the time of writing this blog post.

I will be running it off a subdomain of my public domain, so I have set that up in the DNS with my main domain hosting company.

I have used a subdomain with a reverse proxy rather than opening another port (5232) on the firewall. I initially set it up with the port open, but then rethought my approach, as the reverse proxy is more secure than opening the port on the firewall.

One other good thing about using a subdomain is that you only need the proxy settings for that host, rather than location settings with path handling.

The Apache virtual host looks like this:

<VirtualHost *:443>

    ServerName <subdomain.domain>
    DocumentRoot <pathtodomain>

    ProxyRequests Off
    ProxyPreserveHost On

    # Apache 2.4 syntax would be: Require all granted
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    # Forward requests to the backend Radicale server
    ProxyPass / http://localhost:5232/
    ProxyPassReverse / http://localhost:5232/

    # X-Script-Name is only needed when Radicale is served under a
    # sub-path (e.g. /radicale); at the root of a subdomain it can
    # be left unset.

    SSLEngine on
    SSLCertificateFile <path to public key>
    SSLCertificateKeyFile <path to private key>

</VirtualHost>

Set up the config file for Radicale so that it runs locally on localhost on port 5232; I am using my Dovecot server to authenticate the users.
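For reference, a minimal sketch of what that config might look like (the dovecot auth type and the exact option names are assumptions that depend on your Radicale version, so check the documentation for yours):

```ini
# /etc/radicale/config (a sketch; option names vary by version)

[server]
# Listen only on localhost; Apache proxies to this
hosts = localhost:5232

[auth]
# Dovecot-based authentication (assumes a Radicale 3 release
# that ships the dovecot auth type)
type = dovecot
# dovecot_socket = /run/dovecot/auth-client

[storage]
filesystem_folder = /var/lib/radicale/collections
```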

Then you should just be able to surf to the subdomain and be presented with the login page of the Radicale server. Log in and add your collections for the calendars and contacts.

Then configuring all the clients should be pretty straightforward for PC, Mac, iOS and Android.

MMEncode is not available on Linux distros any more.

Not sure what happened to this or why, but it’s no longer available. I used to have a script that used it to convert binary data to text for email attachments. Having searched high and low, I came up with a new solution: use openssl.

You can do the same thing you were doing with mmencode like this:

openssl base64 -e < $FILE
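Decoding is the mirror image with -d. A quick round trip to sanity-check it (using a throwaway file in /tmp):

```shell
# Encode a file to base64 and decode it back again
printf 'hello world\n' > /tmp/mmencode-demo.txt
openssl base64 -e < /tmp/mmencode-demo.txt > /tmp/mmencode-demo.b64
openssl base64 -d < /tmp/mmencode-demo.b64 > /tmp/mmencode-demo.out

# The decoded file should be identical to the original
cmp /tmp/mmencode-demo.txt /tmp/mmencode-demo.out && echo "round trip OK"
```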

Enjoy.

Visual Studio 2015, Windows 10 under VMWare Fusion 7 on a Mac

Trying to run up my Windows project using Xamarin for Windows Phone 8. Kept getting this error when starting the Windows Phone 8.1 emulator:

"Failed to start the virtual machine because one of the Hyper-V components is not running"

Researched a bit and tried quite a few different solutions, but in the end had to manually edit the .vmx file, which is inside the VMware folders.

Had to add

hypervisor.cpuid.v0 = "FALSE"

Make sure you do this with the VM shut down. And Bob’s your uncle, it now runs up the Windows Phone 8.1 emulators. They took a while to start the first time, but they did finally come up. I can now debug my Xamarin project on Windows Phone 8.1.

Enjoy

Xamarin: Unable to install new debug version of Android app on a Motorola Ultra

On running from Xamarin Studio, I could no longer deploy to the company’s Motorola Ultra. From the deploy-to-device window I was seeing the following error: [INSTALL_FAILED_UPDATE_INCOMPATIBLE]

This normally means that there is an old version hanging around on the device which you can’t overwrite – usually because the older version is signed and the new version is not, since it’s running as debug from Xamarin Studio. Most Android devices show the package name of the app in the installed apps list, and you can just remove it by the usual uninstall method.

For some reason, on my company’s Motorola Ultra this was not showing. Took me a while, but this is how I uninstalled it. You have to run the Android Debug Bridge (adb), which on my Mac is located in:

/Users/<loggedinuser>/Library/Developer/Xamarin/android-sdk-macosx/platform-tools

Then you issue the command

./adb uninstall com.<package>.<name>

And Bob’s your uncle, the package is uninstalled; you can then run the new one and it gets deployed to the phone with no issue.

Hope that helps someone.

Moving From One Computer To Another With Existing Checkouts In TFS (Team Foundation Server)

My laptop was corrupted, so I decided to move to another laptop at work. I had existing checkouts, so I copied all the code involved to the new laptop. Then I couldn’t add the workspace with the same name, as it kept saying the workspace existed on my old machine.

I looked around at posts on the internet and they suggested using the tf command-line tool.

I tried to use

tf workspaces [/updateComputerName:oldComputerName] [workspacename]

Now I don’t know what was wrong but it didn’t work at all. So I took the brute force approach and updated the TFS database itself using the SQL management studio.

I used

UPDATE [TFSDBName].[dbo].[tbl_Workspace]
SET Computer = 'NewMachineName' WHERE Computer = 'OldMachineName'

and that worked a treat. If you’re worried, do a SELECT with the same WHERE clause first so you know how many rows should be updated, and wrap the UPDATE in a BEGIN TRANSACTION; then do a SELECT afterwards to confirm the data changed before issuing a COMMIT.

Global Emergency Resources LLC – 57th Presidential Inauguration.

Over the last couple of weeks I was very privileged to have some of the software I have worked on used for the 57th Presidential Inauguration. It is probably my biggest career moment ever – so far!

My employer Global Emergency Resources LLC landed a contract to supply the first aid locations and command centers with their product – HC Standard.

Excerpt from Global Emergency Resources’ website:

WASHINGTON, D.C. – The 2013 Presidential Inauguration brought landmark changes in emergency management and spectator safety. For the first time, inaugural personnel used a powerful situational awareness software suite to track medical emergencies; reunite lost family members; and provide real time information to event organizers. Emergency personnel from The District of Columbia, Maryland, Virginia, and the United States military integrated emergency data using HC Standard® – a patient tracking and critical asset software solution developed by Global Emergency Resources, LLC based in Augusta, Georgia.

HC Standard® allowed local, state and federal agencies, including the National Parks Service, US Secret Service, the Red Cross, and Homeland Security officials to have a common operating picture of major events during the Inauguration, including the Presidential Candlelight Reception; the Inaugural Parade; activities along the National Mall; the Commander in Chief Ball; the Inaugural Ball; and the Inaugural Prayer Service.

The DC Department of Health partnered with the Maryland Institute for Emergency Medical Service Systems (MIEMSS), the Northern Virginia Emergency Response System (NVERS), and the Maryland Department of Human Resources (MD DHS) to provide patient care and tracking throughout the event. Each partner used its own installation of HC Standard® to enter patient data with Motorola MC65 handheld devices. The data was aggregated and shared in all systems so that EMTs, first responders, and command center leaders could see the full picture of Inaugural events as they occurred.

During the Inauguration, HC Standard® tracked every emergency or first aid case and plotted it in each of the three emergency operations centers used for the event tracking and management. Additionally, family members who were lost, and those who were looking for them, had their information uploaded to a multijurisdictional database so they could be more easily reunited. Even the 100+ horses that carried the mounted police were part of the HC Standard® operating picture.

“Interoperability was key,” says Stan Kuzia, CEO and founder of Global Emergency Resources. “The EMS and Healthcare partners in the National Capital Region (NCR) have worked diligently over the years to eliminate information silos and enhance communication. This Presidential Inauguration demonstrated their hard work is paying off”. The various civilian agencies in the NCR also worked closely with their military counterparts to share a combined picture of patients and missing persons being treated and handled during the entire event. HC Standard® helped to bridge the interoperability gaps on Inauguration Day as near real-time data was available to military responders just as fast as their civilian counterparts.

Original document can be found here: http://www.ger911.com/news-and-events/17-news/133-inauguration2013

MySQL pegging CPU after the leap second adjustment.

So I found that my MySQL database was suddenly running high on CPU. I optimized the tables etc. and it made no difference. I hunted around the internet and found that there seems to be a problem with this year’s leap second adjustment, which sends MySQL into orbit.

The solution: reset the date

date -s "`date`"

and it dropped back to normal.

Credit goes to this lady here http://www.sheeri.com/content/mysql-and-leap-second-high-cpu-and-fix

BackupPC Has qw(…) Warnings Since Upgrading Perl

So since upgrading Perl I am presented with qw warnings coming out of the cron job checking that BackupPC is running.

Use of qw(...) as parentheses is deprecated at /usr/share/BackupPC/lib/BackupPC/Storage/Text.pm line 301.
Use of qw(...) as parentheses is deprecated at /usr/share/BackupPC/lib/BackupPC/Lib.pm line 1412.

The way to get rid of these warnings is to enclose the qw(...) list in parentheses, so that Perl processes the foreach list without warnings.

Like so

Text.pm (Line 301)

#
# Promote BackupFilesOnly and BackupFilesExclude to hashes
#
foreach my $param ( qw(BackupFilesOnly BackupFilesExclude) ) {
    next if ( !defined($conf->{$param}) || ref($conf->{$param}) eq "HASH" );
    $conf->{$param} = [ $conf->{$param} ]
            if ( ref($conf->{$param}) ne "ARRAY" );
    $conf->{$param} = { "*" => $conf->{$param} };
}

Lib.pm (Line 1412)

foreach my $param ( qw(BackupFilesOnly BackupFilesExclude) ) {
    next if ( !defined($conf->{$param}) );
    if ( ref($conf->{$param}) eq "HASH" ) {

Zotac ION-ITX-F Wi-Fi Dual Core 1.6GHz Atom N330 Mini-ITX Motherboard with PCI Express x16 With Windows 7 Home Premium


Zotac ION-ITX-F

Originally uploaded by paulfarrow

After putting a power meter on my home entertainment system, I realised that my equipment was using a lot of power: 350 watts at peak. So I thought I would do some investigation into a recent low-power Mini-ITX board from Zotac. I chose this one because it is dual core and has a PCI Express slot, which is good for a cable tuner I am looking at using when I am in the USA.

Ordered the board from mini-itx.com as they are the only people I could find selling this board. It came and everything looked fine, except that the fan isn’t fitted and there are no holes for it on the CPU heatsink. After talking with mini-itx, they told me to screw it to the heatsink even though it doesn’t have any predetermined holes. I also noticed that the manuals that came with it – the quick installation guide and the main manual – weren’t for this board. Which was a great start.



I had already converted my old system from Vista to Windows 7. I then put the new Atom board into my case and started it up. Immediately I was getting checksum errors and it just stalled. I thought it must be memory related, so I twiddled about a bit with the memory until I found the culprit: in moving from one board to the other, one stick of my 2GB DDR2 had died.

Not a real problem, as 2GB of memory should really be OK for what that board needs to do. So once over that, everything came up fine. I use this as my media center, so mainly playing music and watching TV (SD & HD) and Blu-ray.

Initial reaction was that it’s great. So let’s put it through its paces: watching Blu-ray with fast-moving action didn’t faze it at all; everything ran fine at 17% CPU, and wattage was about 270 watts, so it was already saving me about 70 watts of power. I then decided to get it to record two TV stations and watch a recorded program at the same time.

It did this as well, although CPU usage went from about 50 to 75%. And when navigating through the menu it was a bit sluggish, although liveable. All in all it is great, because it will save me money, and the Windows 7 Media Center is a little better than the old Vista one.

FURTHER UPDATE: having recently bought Up on Blu-ray, I have noticed something important for this board. Some of the later Blu-rays are encoded in MPEG-4 – I have tested Up and The Taking of Pelham 123, and they are both MPEG-4, unlike I Am Legend which is VC-1.

MPEG-4 currently runs at around 47% CPU usage, which is fine on its own, but if (like I was when trying to play Up) you are recording TV at the same time, it’s just a bit too much for the little NVIDIA ION board, which then runs at 100%, and frames and audio drop on playback of the Blu-ray movie. It’s a shame, because it’s a great board and just what I want for a machine that is on all the time, but it’s a trade-off between performance and the cost of running the equipment.

Overall I am still impressed, even though there are these obvious trade-offs.