Update AWS EC2 inbound security group rules when your IP address changes

By . Updated on

AWS IP Update, a Windows Tray application that automatically updates EC2 inbound security rules when an IP address change is detected.

I just released AWS IP Update, a Windows Tray application that updates inbound security group rules on AWS EC2 when your IP address changes.

This has been vaguely on my to-do list for years. I didn't bother because I knew how tedious it would be from the time I pulled Azure metrics into Google Data Studio (now Looker) via Apps Script. This whole thing was banged out by Claude Code in five volleys, and I suspect I wasted those because it could probably have single-shotted it. I did not write a character of code, and it was faster to create than the way I used to get access.

I have a monthly sysadmin day where I patch all the things, pull a Google Photos archive and run an old-fashioned backup to an external hard drive. The hardest part of this psychologically has been getting access to AWS to patch my blog server and pull a backup. Inevitably my IP address has changed, and I need to log into AWS, find the right settings, look up my external IP address (Google Search used to just show this but it's been broken for ages) and update the EC2 security group. Every other part of the routine is easy; the access part always bums me out. So this is a quick AI tool that not only saves a few minutes a month, it also helps with mood and blood pressure.
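The app itself is a Windows tray application, but the core update is easy to sketch in Python with boto3. The security group ID and port here are hypothetical placeholders; the checkip service and the boto3 calls are real.

```python
import urllib.request

GROUP_ID = "sg-0123456789abcdef0"  # hypothetical: your security group ID
PORT = 22                          # hypothetical: the port you need open

def current_ip() -> str:
    """Look up this machine's public IP via AWS's checkip service."""
    with urllib.request.urlopen("https://checkip.amazonaws.com") as resp:
        return resp.read().decode().strip()

def ip_permission(ip: str, port: int) -> dict:
    """Build the IpPermissions entry for a single /32 address."""
    return {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": f"{ip}/32"}]}

def update_rule(old_ip: str, new_ip: str) -> None:
    """Swap the stale inbound rule for one matching the new IP."""
    import boto3  # third-party: pip install boto3
    ec2 = boto3.client("ec2")
    if old_ip:  # drop the old rule first so the group doesn't accumulate cruft
        ec2.revoke_security_group_ingress(
            GroupId=GROUP_ID, IpPermissions=[ip_permission(old_ip, PORT)])
    ec2.authorize_security_group_ingress(
        GroupId=GROUP_ID, IpPermissions=[ip_permission(new_ip, PORT)])
```

A tray app would poll `current_ip()` on a timer and call `update_rule` only when the value changes.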

Seervo, an LLM Powered Robot


Seervo LLM AI Robot

I just released Seervo, an open-source LLM-powered robot. The GitHub repo contains source code, a shopping list and 3D files to print the chassis.

Seervo sends an image from its camera to GPT 5.4. The LLM can decide to change the colors of four LEDs and to drive the motors. It has the objective of finding and entertaining humans while avoiding pets at all costs. The video below shows it mostly trying to escape from my dog:

The robot is based on an ESP32 microcontroller with a camera, some motors and a battery. The client code is MicroPython and it talks to an ASP.NET Core web service that handles the LLM control calls. You could do everything on the ESP32, but it's easier to tweak prompts and see where things are going wrong with a local server. The server also stores memories so the robot can remember what it has been doing recently, and handles memory compaction so any really useful knowledge is retained in the context window.
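The wire format between the client and server isn't spelled out above, so this is a hedged sketch: assume the service replies with JSON naming up to four LED colors and left/right motor powers, which the client validates and clamps before acting on.

```python
import json

def parse_command(payload: str) -> dict:
    """Normalize a hypothetical JSON control response from the server."""
    cmd = json.loads(payload)
    leds = [str(c) for c in cmd.get("leds", [])][:4]  # four LEDs max
    motors = cmd.get("motors", {})
    # Clamp motor power to [-1, 1] so a bad response can't spin out of control
    left = max(-1.0, min(1.0, float(motors.get("left", 0))))
    right = max(-1.0, min(1.0, float(motors.get("right", 0))))
    return {"leds": leds, "left": left, "right": right}
```

On the device this would run in a loop: capture a frame, POST it to the server, then apply the parsed LED and motor commands.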

The code was all written using Claude Opus 4.6. The chassis was designed in OpenSCAD using ChatGPT - something that has been a struggle before but GPT 5.4 can iterate on a 3D model with pretty vague directions.

Let me know if you build one!

Updated 2026-04-05 23:22:

Added an HC-SR04 ultrasonic distance sensor so the robot can now tell how much clear space there is in front of it. Tuned the instructions to use this data and added formulae to convert distance and rotation into approximate motor run times. This all makes the robot a lot more confident in its movement.
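The formulae are simple dead reckoning; as a sketch, with made-up speed constants that would need calibrating against the actual motors:

```python
FORWARD_SPEED_CM_S = 20.0  # assumption: measured straight-line speed
FULL_TURN_SECONDS = 2.4    # assumption: time for a full 360-degree spin

def drive_time(distance_cm: float) -> float:
    """Seconds to run both motors forward to cover a distance."""
    return distance_cm / FORWARD_SPEED_CM_S

def turn_time(degrees: float) -> float:
    """Seconds to spin in place through a given angle."""
    return abs(degrees) / 360.0 * FULL_TURN_SECONDS
```

The LLM only has to pick a distance or angle; the firmware turns that into a motor run time.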

Workspace Studio and Read-only Sheets


A Google Workspace Studio Flow

As an Apps Script addict I was excited to experiment with Google Workspace Studio. It's a no-code automation tool in the typical flowchart style, with the addition of Gemini so you can use AI for decisions and text manipulation. Unfortunately it failed hard on my first task.

I have a spreadsheet that pulls in Google Fit data, and another one that combines that with other goals to create an overall lifestyle score. Occasionally I copy data from one sheet to the other, and automating this has been on my to-do list for years with absolutely no chance of getting to done. Workspace Studio should have made this easy.

Building the flow was straightforward, but the steps that write new data were flagged as being in an error state, even though no actual errors showed up in the configuration for those steps. Opening and closing the flow unsettlingly cleared the errors. I started the flow and hoped for the best, but got this error:

"Couldn't complete. Check that the spreadsheet is private and doesn't use the IMPORTRANGE function."

At first I thought this must be inverted and the sheet needed to be shared in some way... but no, it's true, you can only update a private sheet. Which is useful in a trees-falling-with-no-one-watching kind of way.

I share this mostly because googling the error came up short (and the AI Overview unhelpfully talks about sharing the sheet). Workspace Studio is only a few months old and hopefully this limitation will be fixed. There are some nice features in preview, like webhooks, raising the prospect of handing over to Apps Script if a flow can't do quite what you need. This should be a nice piece of the AI automation puzzle as it matures.

Blog Engine Upgrade

ITHCWY has been running on ASP.NET 4.8 for a long time. I'd been putting off the upgrade because the official Microsoft documentation says something close to 'your funeral'. Visual Studio's Copilot recently added a modernization agent. It claimed to have generated a plan, but the file it insisted it had just written was hallucinated. So I rolled up my sleeves and did it the hard way.

Hard is an overstatement. I did some much-needed refactoring and jettisoned a bunch of dead code. Regular Copilot (via Claude Sonnet 4.5) was a big help on things that no longer exist or needed to be done differently. The new OutputCache refused to disengage until I entirely killed the default, and I need to spend some more time there. It also seems to matter which order you enable server features in, which is moderately terrifying but probably doesn't need to be touched often. It's certainly better than poking around in web.config and hoping for the best. If you're reading this then it has been served by ASP.NET Core 10.

Probably some subtle things are broken and it usually takes a while to mop everything up after a migration this big. If you run into any problems please get in touch.

Aurora, the Raspberry Pi Smart Assistant


Aurora, the Raspberry PI AI Assistant

Aurora is an open source smart assistant designed for the Raspberry Pi and functional on Windows, Mac and anything else that can run Python.

Aurora uses the OpenAI realtime API, and is pretty similar to the advanced voice mode in ChatGPT. The idea is a replacement for Amazon Alexa and Google Home that anyone can build and extend.

I have had this running for nearly a year. It was functional but the code was a mess. OpenAI just promoted their realtime API from beta, and that prompted me to refactor the project and get it into shape for a public release.

While Amazon is slowly rolling out an LLM version of Alexa, I'm not buying another device that just wants to show me ads. I also don't want an assistant that has been engineered to minimize costs. Aurora is an exercise in what is possible using the best models available combined with helpful tools.

The project has a plug-in architecture for tool support, so you can pick and choose what makes sense. This first release supports timers, Perplexity Sonar, Todoist and a kid conflict resolution system that in my house is known as 'cheese night'. I'll be adding more tools over time, and the architecture makes it easy to contribute (please do!).
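Aurora's actual plug-in interface lives in the GitHub repo and may differ; the general shape of such an architecture is a registry mapping tool names to functions, sketched here with a hypothetical `set_timer` tool.

```python
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("set_timer")
def set_timer(minutes: int) -> str:
    # Hypothetical example tool; the real plug-ins do actual work
    return f"Timer set for {minutes} minutes"

def dispatch(name: str, **kwargs) -> str:
    """Route a model tool call to the matching plug-in."""
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](**kwargs)
```

A new plug-in is then just a decorated function; the assistant advertises the registry to the model and routes calls through `dispatch`.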

When I first designed Aurora I used an Adafruit BrainCraft HAT which has a small display and some nice LEDs. This combined with a Raspberry Pi 4 and a couple of speakers makes for a compact assistant. The project contains interface code for this specific setup, and also a generic version that will run anywhere. As with tool support the interface can be extended to support other devices.

Check out Aurora on GitHub, contribute if you come up with a cool tool or UI, and please send me a photo if you build an assistant.

Updated 2025-09-13 21:56:

Added support for the Bay Area 511 API to provide arrival times. You configure an agency and stop ID and can then ask 'when is the next L' and get arrival times. This is really useful for me. There is also a new tool-calling image state for the Raspberry Pi version, and timer alerts are now poems based on the timer name.

Updated 2025-09-21 19:44:

Recipe support added. Ask Aurora to cook with you and she will guide you step by step through a recipe.

LIFX light bulbs are also now supported. Add names and IDs to settings and then Aurora can switch lights on and off.

Also added OpenSCAD and STL files for 3D printing a case for the BrainCraft version.

Updated 2026-02-28 18:46:

Upgraded to the new gpt-realtime-1.5 model. Also added a setting so it's easy to go back to the original gpt-realtime if that's what you prefer.

Annual Android Antics

Developer shoots a laptop

Android Then: Why not add a splash of color and personality to the status bar?

Android Now: I see a red door and I want it painted black. / No colors anymore, I want them to turn black...

Some people have to go to the doctor for a heart stress test. I just upgrade Android apps when Google forces me to. Today was supposed to be a simple exercise in bumping the target framework up to 36 / Android 16. Instead the status bar disappeared.

Google often deprecates things, and this time for me it's setting the status bar color. Not the end of the world, but for my app the helpful default is white icons and text on a white background. Apparently if you really, really want to you can fuck around with something called the WindowInsets API, but life is too short. I added a version-dependent tweak to the status bar and moved on.

Catfood Earth and Fortune for Android are both rolling out over Google Play right now. They have a slightly worse look and feel and zero additional functionality but at least, for now, I'm still allowed to use some international orange in my part of the window. I'm pretty sure the next iteration of Material Design will involve the National Guard somehow.

Get an email when a new OpenAI model is available via the API


Notification of a new OpenAI model

Google Apps Script code to send an email alert whenever a new OpenAI model is available through the API.

Here's a simple Google Apps Script to monitor for new OpenAI models:

To get this running, paste it into a new Apps Script project and enable the Gmail API under Services. Add your email address and OpenAI API key at the top. Run it a couple of times: the first run should email you all the current models, and the second should do nothing (unless you get lucky and a model ships in between). Under Triggers, set a scheduled execution with whatever frequency feels good (I use daily) and you're all set.
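The Apps Script itself isn't reproduced above, so here's a rough Python equivalent of the same logic. The `/v1/models` endpoint is real; the state file stands in for Apps Script script properties, and the email step (Gmail in the original) is omitted.

```python
import json
import os
import urllib.request

def fetch_model_ids(api_key: str) -> set:
    """List current model IDs from the OpenAI API."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return {m["id"] for m in data["data"]}

def new_models(current: set, state_file: str = "known_models.json") -> list:
    """Return model IDs not seen before, persisting the updated set."""
    known = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            known = set(json.load(f))
    fresh = sorted(current - known)
    with open(state_file, "w") as f:
        json.dump(sorted(known | current), f)
    return fresh  # email these, if any
```

The first run returns everything (hence the email with all current models); later runs return only newly listed IDs.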

Set Todoist Label Colors Automatically Using OpenAI Embeddings


An abstract painting of the Todoist color palette

From the department of things I wouldn't have bothered with a year ago, here's a Python script to set Todoist label colors.

Why? I like a productivity environment with some color and flair, and it also helps to visually recognize what a task relates to. But setting label colors takes more clicks than I have patience for.

How? Just compute embeddings for each available color and for each label, then use cosine similarity to assign the color that best suits each label. Colors stay consistent for existing labels, and new ones get just a dash of semantic meaning in their assignments.

Here's the code (you need an OpenAI API key and a Todoist API token set as environment variables):
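The script itself isn't shown above; this sketch captures the approach. The embeddings call uses the real OpenAI endpoint and `text-embedding-3-small`; the color list is a hand-picked subset of Todoist's palette, and writing the chosen color back via the Todoist API is omitted.

```python
import json
import math
import urllib.request

COLORS = ["red", "orange", "yellow", "green", "teal",
          "blue", "violet", "magenta", "grey"]  # subset of Todoist's palette

def cosine(a, b) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_color(label_vec, color_vecs: dict) -> str:
    """Pick the palette color whose embedding is nearest the label's."""
    return max(color_vecs, key=lambda c: cosine(label_vec, color_vecs[c]))

def embed(texts, api_key: str) -> list:
    """Fetch embeddings for a list of strings from the OpenAI API."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/embeddings",
        data=json.dumps({"model": "text-embedding-3-small",
                         "input": texts}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return [d["embedding"] for d in json.load(resp)["data"]]
```

Embed `COLORS` once, embed each label name, then call `best_color` per label and PATCH the result back to Todoist.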

Updated 2026-03-07 03:28:

I have been wondering what a Cadillac version of this silly project would look like. The current version gives me colors, but not based on much meaning. I also have to remember to find and run the Python script when I add a new label. So I refactored this to run in Google Apps Script and to try to make those colors mean something. It's a two-step process. First, the script uses GPT 5.4 to generate a description of each color, including cultural significance and the kinds of tasks it might be associated with. Embeddings for these descriptions are cached. Second, the script loads three tasks for each label and finds the most similar color embedding for the sample tasks.

The script is scheduled to run weekly so I don't need to remember to do anything, always a big win.

It might end up being irritating as label colors will change over time. This might convey a subtle sense of what the label currently means, or it might just make it harder to remember the associations. Too soon to tell. In case this is ever helpful to anyone here's the code:
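The Apps Script isn't reproduced here, but the two-step scaffolding is easy to sketch: a prompt that asks the model to describe each color (the chat call itself is omitted), and a helper that turns a label's sample tasks into the text that gets embedded. Names and the prompt wording are illustrative, not the original.

```python
def color_prompt(color: str) -> str:
    """Prompt (illustrative wording) asking a chat model to describe a color."""
    return (f"Describe the color '{color}': its cultural significance "
            "and the kinds of tasks it might be associated with.")

def label_sample_text(label: str, tasks: list, n: int = 3) -> str:
    """Concatenate up to n task titles as the text to embed for a label."""
    return f"Label: {label}. Tasks: " + "; ".join(tasks[:n])
```

The color-description embeddings are cached, so step one runs once per color; step two re-runs weekly over fresh task samples.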

Vibe Coding a Vibe Video Editor


Vibe Video Editor

I set myself the goal this morning of shipping a vibe coded web application in a day. Welcome to the world, vibevideo.xyz, and apologies if you own Adobe stock.

Vibe Video is a chat based video editor. There are two things that are pretty cool about it. First, it writes its own code. When you ask it to make an edit it uses GPT 4.1 to write a function for the edit and then runs that function. That's the Vibe Video part. It's a single page JavaScript application but the core functionality is hallucinated at runtime.
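The site is JavaScript, but the "writes its own code" trick is easiest to show in a few lines of Python: ask the model for a function's source (the prompt and API call are omitted here), then execute whatever comes back. Everything beyond the exec-and-call core is illustrative.

```python
def run_generated(source: str, func_name: str, *args):
    """Compile model-written source and call the named function."""
    namespace = {}
    # Trust boundary: generated code runs with no sandboxing here;
    # the real app at least keeps everything inside the browser.
    exec(source, namespace)
    return namespace[func_name](*args)
```

In Vibe Video the generated function drives ffmpeg.wasm; here a trivial stand-in shows the mechanics.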

Second, it runs locally. It's based on ffmpeg.wasm, which I knew nothing about before this morning. The site is a Vite/Svelte SPA deployed to a Cloudflare site with no backend except the OpenAI API. Your videos never leave your computer, and all the editing happens in the browser thanks to WebAssembly. I hadn't touched Vite or Svelte before today either.

Both of those things are crazy. There are some limitations; as the great saying goes, "If you are not embarrassed by the first version of your product, you've launched too late." Right now you need to bring your own OpenAI API key. Maybe I'll add some free usage when I get a grip on how much it costs to run. It's fast when you don't need to re-encode, so trimming a video is quick, but changing resolution or speed will take some time. And there is a 2GB limit on file sizes, a limitation of WebAssembly. My hope is to iterate and improve, so kick the tires and let me know what else it needs.

To build this I used Visual Studio Code with GitHub Copilot in agent mode using Claude 3.7. The only hand-crafted code is changing GPT 4 Turbo to GPT 4.1 (Claude refused to admit that 4.1 exists) and repeatedly removing a call to a non-existent file size property. While I didn't have to write any significant code, I did need to coax the LLM, an experience halfway between being a product manager and pair programming with a buddy who is a much faster typist and won't let you touch the keyboard. There was a fair bit of troubleshooting involved as well, some with the environment setup and some with runtime errors. Having the experience to guide Copilot through these helped a lot.

I have been experimenting with a lot of different tools over the past few weeks. Cursor and Windsurf both worked well for me, Replit was pretty good as well. Cline impressed me with debugging an app after building it. V0 and Lovable struggled with my test use case. I used Copilot for this personal experiment as I already have a license and wanted to see how far it would go. All of these tools are going to continue to improve (and/or merge). The current state of the art is a big deal already. A few weeks ago the statement that a good chunk of Y Combinator startups were writing 95%+ of their code with AI sounded like bullshit. Now I think they might be sandbagging.

One bad smell is that the experience gets worse as you add code. The start of the project was unbelievably easy, but as features were added Copilot started to bog down. This is unsurprising: the more you load up an LLM's context window, the more it struggles with the details. Again, this will improve over time as models and agentic implementations mature. Good architecture is also going to matter, for the same reason it frees regular developers from needing to keep too much in their heads at the same time.

Vernal (Spring) Equinox 2025

Spring Equinox 2025 in Catfood Earth

Spring has sprung (09:02 UTC, March 20, 2025) for the Northern Hemisphere and Autumn is here if you are south of the Equator. Rendered in Catfood Earth.