Autumnal Equinox 2023
Fall starts at 06:50 UTC on September 23. Autumn if you're British. Spring if you're Australian. Rendered in Catfood Earth.
Catfood WebCamSaver 3.22
Catfood WebCamSaver 3.22 is available to download. This release updates the webcam list.
You Might Also Like
- Windows 11 Bluetooth Usability Crime Report
- Intelligence Squared Two-Party Debate
- ZoneInfo Update (tzdata for .NET)
Catfood Earth for Android 4.30
Catfood Earth for Android now supports random locations. The slice of Earth displayed will change periodically throughout the day. You can still set a manual location or have Catfood Earth use your current location. Install from Google Play; existing users will get this update over the next few days.
Summer Solstice 2023
Summer Solstice 2023 is at 14:58 UTC on June 21. The image above shows the exact moment of the Solstice as rendered in Catfood Earth. It's the official, if not sartorial, start of Summer in the Northern Hemisphere, and Winter if you find yourself on the other side of the Equator.
Catfood WebCamSaver 3.31
Catfood WebCamSaver 3.31 is available to download. This includes the latest update to the webcam list.
Shipping a website in a day with Generative AI
It usually takes me a few weeks to get a new website up and running. Last weekend I tried an experiment with Cloudflare Pages and generative AI.
I have wanted to find an excuse to test Pages for a while. It's a pretty awesome product. I'm not doing anything too fancy with it - I have a local generator app that creates the pages for my site. Committing to the right branch in git automatically deploys to Cloudflare's edge network. It seems to do the right thing with all the file types I've thrown at it so far. My only complaint at this point is that it doesn't handle subdirectories. Everything needs to hang off the root unless you want to write some code. I think this is possible with Cloudflare Workers but that's for another day.
The generative piece is automatically writing content for review and publication. For each generated page I'm creating a prompt to write the post, and then another prompt to summarize it for meta descriptions and referencing it from other pages. I also create an embedding to use for interlinking related posts. Finally I create a third prompt to gin up an appropriate image. The site generator stitches these together into HTML and as soon as I commit, the updates are live.
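As a rough sketch of the shape of those calls (this is not the actual generator code; the endpoints are OpenAI's public API, but the model names and prompts here are placeholders), the three text steps boil down to a chat completion for the post, another for the summary, and an embedding request:

```csharp
// Sketch only: chat completion for the post, another for the summary, then an embedding.
// Model names and prompts are placeholders, not the real generator's settings.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PageGenerator
{
    private static readonly HttpClient Http = new HttpClient();

    static async Task<string> PostJsonAsync(string url, object body, string apiKey)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Content = new StringContent(JsonSerializer.Serialize(body), Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    static async Task GeneratePageAsync(string topic, string apiKey)
    {
        // 1. Draft the post itself.
        string post = await PostJsonAsync("https://api.openai.com/v1/chat/completions", new
        {
            model = "gpt-3.5-turbo", // placeholder model choice
            messages = new[] { new { role = "user", content = $"Write a short story about {topic}." } }
        }, apiKey);

        // 2. Summarize it for meta descriptions and link text.
        string summary = await PostJsonAsync("https://api.openai.com/v1/chat/completions", new
        {
            model = "gpt-3.5-turbo",
            messages = new[] { new { role = "user", content = $"Summarize in one sentence: {post}" } }
        }, apiKey);

        // 3. Embed the post for interlinking related pages.
        string embedding = await PostJsonAsync("https://api.openai.com/v1/embeddings", new
        {
            model = "text-embedding-ada-002",
            input = post
        }, apiKey);

        // Image generation and HTML templating omitted; the strings above are raw JSON
        // responses and still need to be parsed before use.
    }
}
```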
The site is not yet a work of art, and there is plenty to optimize and add, but the basic thing was working in a few hours. It's all ridiculously cheap as well. I'm more than a little frightened for Google given how much of this must be going on right now. And then the next generation of LLMs will be trained on the garbage produced by the current crop.
My super rapid site is called Shop Stories, collecting / dreaming tales of ecommerce heroics. I'll report back if anyone goes there.
Related Posts
- OpenAGI, or why we shouldn't trust Open AI to protect us from the Singularity
- Upgrading from word2vec to OpenAI
- Predicting when fog will flow through the Golden Gate using ML.NET
Vernal (Spring) Equinox 2023
Spring for the Northern Hemisphere, and Autumn south of the Equator, starts right now - 21:25 UTC on March 20, 2023. The image above shows the exact moment of the equinox in Catfood Earth.
Predicting when fog will flow through the Golden Gate using ML.NET
I'd like to make a time lapse of the moment when fog enters the Golden Gate and flows under the Golden Gate Bridge. It's surprisingly hard to know when conditions will be just right though. Often the weather is pleasant at my house while the fog is sneaking through and there is very little chance of me checking a webcam or satellite image. I decided to fix this about a year ago and started collecting data. The best bet seemed to be GOES-West CONUS - Band 2, which is a high resolution daylight satellite image that shows clouds and fog. I put together a Google Apps Script project to save an hourly snapshot and left it running. Here's a video of the data so far, zoomed in for an HD aspect ratio and scaled up a bit:
It's pretty obvious to me when conditions are just right. Could an ML model learn that this was about to happen from an image that was three hours older?
The first step was dividing thousands of images into two classes - frames where the fog would be perfect in three hours and frames where this was not going to happen. I built a little WPF tool to label the data (I don't use WPF often these days and every time I do I marvel at how the Image control has defaults that won't show the image FFS). This had the potential to be tedious, so I built in some heuristics to flag likely candidates and then knocked out the false positives. Because the satellite images include clouds there is often white in the Golden Gate that is cloud cover rather than fog. At the end of the process I had two subfolders full of images to work with.
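The heuristics themselves aren't anything special; the basic idea is just checking how white the Golden Gate area of each frame is. A made-up sketch (the crop rectangle and threshold are invented, not the values from my tool):

```csharp
// Flag frames that are bright around the Golden Gate as candidates for manual review.
// Rectangle coordinates and threshold are illustrative only.
using System.Drawing;

static class FogHeuristics
{
    public static bool LooksLikeFogCandidate(string imagePath)
    {
        // Hypothetical crop roughly covering the Golden Gate in the satellite frame.
        var gate = new Rectangle(512, 384, 64, 48);

        using (var bitmap = new Bitmap(imagePath))
        {
            double total = 0;
            for (int y = gate.Top; y < gate.Bottom; y++)
                for (int x = gate.Left; x < gate.Right; x++)
                    total += bitmap.GetPixel(x, y).GetBrightness();

            double average = total / (gate.Width * gate.Height);
            return average > 0.7; // bright could be fog - or cloud, hence the manual pass
        }
    }
}
```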
My goal this weekend was to get something working, and then refine every few months as I get more data. Right now I have 18 images that are in the Fog class and 7,539 that are NoFog. I also wanted this running on my blog, which is .NET 4.8 and will stay that way until I get a couple of weeks of forced bed rest. ML.NET says that it's based on .NET Standard and so should run anywhere.
Having local automl is very cool once you get it working. For large datasets this might not be a great option, but not having to wrangle with the cloud was also very appealing for this project.
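Model Builder writes and tunes the training code for you, but for a sense of what's happening underneath, a hand-rolled ML.NET image classification pipeline looks roughly like this (the folder layout and class names below are illustrative, not the generated code):

```csharp
// Rough sketch of an ML.NET image classification pipeline. Needs the Microsoft.ML,
// Microsoft.ML.ImageAnalytics and Microsoft.ML.Vision NuGet packages.
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.ML;

public class FogImage
{
    public string ImagePath { get; set; }
    public string Label { get; set; } // "Fog" or "NoFog", taken from the subfolder name
}

class Trainer
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Assumes images/Fog/*.jpg and images/NoFog/*.jpg from the labeling tool.
        string imagesFolder = Path.GetFullPath("images");
        IEnumerable<FogImage> images = Directory
            .GetFiles(imagesFolder, "*.jpg", SearchOption.AllDirectories)
            .Select(f => new FogImage
            {
                ImagePath = f,
                Label = new DirectoryInfo(Path.GetDirectoryName(f)).Name
            });

        IDataView data = mlContext.Data.LoadFromEnumerable(images);

        var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Label")
            .Append(mlContext.Transforms.LoadRawImageBytes("Image", imagesFolder, "ImagePath"))
            .Append(mlContext.MulticlassClassification.Trainers.ImageClassification(
                labelColumnName: "Label", featureColumnName: "Image"))
            .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

        ITransformer model = pipeline.Fit(data);
        mlContext.Model.Save(model, data.Schema, "FogModel.zip");
    }
}
```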
Getting GPU training configured involved many gigabytes of installs. Get the latest Visual Studio 2022. Get the latest ML.NET model builder. Sign up for an NVIDIA developer account and install terrifyingly old and specific versions of CUDA and cuDNN. This last part was the worst because the CUDA installer wanted to downgrade my graphics driver, warned directly that this would cause problems and then claimed that it couldn't find a supported version of Visual Studio. I nervously unchecked everything that was already installed, and so far model builder has run fine and I don't seem to have caused any driver problems.
For image classification settings you can choose micro-accuracy (the default), macro-accuracy, logarithmic loss, or logarithmic loss reduction. Micro-accuracy is based on the contribution of all classes and unsurprisingly it's useless in this case, as just predicting 'no' works very well overall: a model that always answers NoFog would still score about 99.8% micro-accuracy (7,539 of 7,557 images) while never catching any fog. Macro-accuracy is the average of the accuracy of each class and this produced reasonable results for me. Possibly too good; I probably have some overfitting and will spend some time on that soon.
After training, the model builder has an evaluate tab which is pretty worthless, at least for this model/case. You can spot check the prediction for specific images, and then there is one overall number for the performance of the model. I'm used to looking at precision and recall and it looks like I'll have to spend some time building separate tooling to do this. Hopefully this will improve in future versions.
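As a sketch of what that extra tooling has to compute (the types and labels here are placeholders, not the actual tool):

```csharp
// Compute precision, recall and F1 for the Fog class from (actual, predicted) label
// pairs produced by running the model over a labeled holdout set. Names are illustrative.
using System;
using System.Collections.Generic;

static class FogMetrics
{
    public static void Report(IEnumerable<(string Actual, string Predicted)> results)
    {
        int tp = 0, fp = 0, fn = 0;
        foreach (var (actual, predicted) in results)
        {
            if (predicted == "Fog" && actual == "Fog") tp++;
            else if (predicted == "Fog" && actual == "NoFog") fp++;
            else if (predicted == "NoFog" && actual == "Fog") fn++;
        }

        double precision = (double)tp / (tp + fp);
        double recall = (double)tp / (tp + fn);
        double f1 = 2 * precision * recall / (precision + recall);

        Console.WriteLine($"Precision {precision:P0}, recall {recall:P0}, F1 {f1:P0}");
    }
}
```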
At this point I have a .NET 6 console application that can make plausible looking predictions. Overall I'm very impressed with how easy it was to get this far.
Integrating with my blog though was very sad. After a lot of NuGet'ing and Googling I came to realize that ML.NET will not play nice with .NET 4.8, at least for image classification. Having dared to anger the NuGet gods I did a git reset --hard and called out to a new .NET 6 process to handle the classification. For my application I'm only running the prediction once per hour so I'm not bothered by performance. That .NET Standard claim proved to be unhelpful and I could have used just about anything.
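The hand-off itself is mundane; roughly this on the .NET 4.8 side (the executable name and output format are assumptions, not the real code):

```csharp
// Shell out to the .NET 6 console app that hosts the ML.NET model and read its
// verdict from stdout. Illustrative sketch only.
using System.Diagnostics;

static class FogClassifier
{
    public static bool PredictFog(string imagePath)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "FogPredictor.exe",        // hypothetical .NET 6 console app
            Arguments = "\"" + imagePath + "\"",
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var process = Process.Start(startInfo))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return output.Trim() == "Fog";        // the console app prints the predicted label
        }
    }
}
```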
The model is now running hourly. I have put up a dedicated page, Golden Gate Fog Prediction, with the latest forecast and plan to improve this over time. If this would be a useful tool for you please leave a comment below (right now it emails me when there is a positive prediction, it could potentially email a list of people).
Updated 2023-03-12 23:24:
After building some tooling to quantify this first model I have some hard metrics to add. Precision is 23%. This means there is a high rate of false positives. Recall is 78%. This means that when there really is fog the model does a pretty good job of predicting it. Overall the f1 score is 35% which is not great. In practice the model doesn't miss the condition I'm trying to detect often but it will send you out only to be disappointed most of the time. I'm not that surprised given how few positive cases I had to work with so far. My next steps are collecting more training data and looking more carefully at the labeling process to make sure I'm not missing some reasonable positive cases.
Catfood.Shapefile 2.00
I just released Catfood.Shapefile 2.00, my .NET parser for ESRI Shapefiles.
The big change is that I have migrated to .NET Standard 2.0. This makes it possible to use from .NET Core as well as classic .NET Framework from 4.6.1 up. If you need to use an older version of .NET Framework then you'll want to stick with Catfood.Shapefile 1.60.
Catfood.Shapefile is now also available via NuGet. This is the recommended way to install. The source code is still available on GitHub.
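From memory, basic usage looks something like this (the README on GitHub has the canonical sample and the exact names):

```csharp
// Minimal sketch: open a shapefile and enumerate its shapes.
// Install-Package Catfood.Shapefile
using System;
using Catfood.Shapefile;

class Program
{
    static void Main()
    {
        // Shapefile is disposable and enumerates the shapes in the file.
        using (var shapefile = new Shapefile(@"data\example.shp"))
        {
            foreach (Shape shape in shapefile)
            {
                Console.WriteLine(shape.Type);
            }
        }
    }
}
```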
Upgrading from word2vec to OpenAI
In 2018 I upgraded the related posts functionality on this blog to use word2vec. This was hacked together by averaging the vectors for interesting words in each post and then looking for the closest vectors. It worked quite well, but the state of the art has moved on just a little bit since then.
OpenAI has an embeddings API and recently released a cheaper model called text-embedding-ada-002. The vectors have 1,536 dimensions, a pretty significant increase from the 300 I was using with word2vec. Creating vectors for all my posts took a few minutes and cost $0.11 which is pretty affordable. As you'd expect those related posts are now significantly more related and useful. Thanks OpenAI!
I shared some code previously for the word2vec hack. This is a lot more straightforward - call the API with the post text and then compare the vectors with cosine distance to find the most related. It works well for search too.
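The comparison step is just cosine similarity over the stored vectors; a minimal sketch (the helper names are mine, not the blog's actual code):

```csharp
// Rank other posts by cosine similarity to the target post's embedding.
using System;
using System.Collections.Generic;
using System.Linq;

static class RelatedPosts
{
    static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
    }

    // vectors: post title -> 1,536 dimension embedding from text-embedding-ada-002.
    public static IEnumerable<string> MostRelated(
        string title, IDictionary<string, double[]> vectors, int count = 3)
    {
        double[] target = vectors[title];
        return vectors
            .Where(p => p.Key != title) // don't recommend the post to itself
            .OrderByDescending(p => CosineSimilarity(target, p.Value))
            .Take(count)
            .Select(p => p.Key);
    }
}
```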
Related Posts
- Better related posts with word2vec (C#)
- OpenAGI, or why we shouldn't trust Open AI to protect us from the Singularity
- Shipping a website in a day with Generative AI