This was a random preflight download for me from Netflix. It's about a Belgian influencer who loses his mind after the arrival of a child (and the soundtrack nearly made me lose my own mind - the baby screams for most of the film, interrupted only by things like someone loudly eating soft fruit). It was OK.
Mythic Quest Season 3
It's not quite as good as the first couple of seasons but still worth watching. I think this time around moving Ian and Poppy into their own studio was a mistake, but not as big as consummating the unconsummatable.
Poker Face Season 1
This is mostly talked about as a Columbo remake but I think Natasha Lyonne is channelling the Hoff because this really reminds me more of Knight Rider. Stranger comes to town, gets people out of a jam, leaves. It's more about that vibe than the solving of any particular murder. Loved it.
The Last of Us Season 1
Having just complained bitterly about Dark Summer skipping the apocalypse foreplay I was happy to see The Last of Us revel in it. This is what a high concept zombie show looks like. I'm sorry, it's fungus rather than zombies of course. I never played the game it's based on but it seems like zombies? They bite you and you get infected. As bored as I am with zombies this was really good.
(All images included with ITCHWY reviews are the property of their respective owners and are used to illustrate reviews only.)
For a long time this blog has been black with some splashes of International Orange. The favorite icon and logo were a weird grid of dots (and yes, I give Google crap for Material Design). Now that Google has brought icons to desktop search results as well as mobile I wanted a rounder favicon. Their direction is round icons all the way, and my weird dots don't look great in this format.
The new logo and favorite icon are built around a dynamic pie chart. This updates daily (the favorite icon will lag a bit due to caching) and shows the category distribution of posts over the last two years. The logo text is just a random permutation of the category colors. This is stupidly precise and geeky and also a lot more cheerful. I may add a few more splashes over time.
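Generating that kind of dynamic pie favicon is mostly trigonometry. Here's a minimal Python sketch of the idea (the category names and counts are invented for illustration, and my actual implementation differs): turn category counts into SVG pie-slice paths sized for a 32x32 icon.

```python
import math

def pie_slices(counts, radius=16, cx=16, cy=16):
    """Turn {category: count} into SVG path strings for a pie chart.

    Slices start at 12 o'clock and sweep clockwise, each proportional
    to its category's share of the total post count.
    """
    total = sum(counts.values())
    paths = []
    angle = -math.pi / 2  # start at the top of the circle
    for name, count in counts.items():
        sweep = 2 * math.pi * count / total
        x1 = cx + radius * math.cos(angle)
        y1 = cy + radius * math.sin(angle)
        angle += sweep
        x2 = cx + radius * math.cos(angle)
        y2 = cy + radius * math.sin(angle)
        large = 1 if sweep > math.pi else 0  # arc flag for slices over 180 degrees
        paths.append(
            f"M{cx},{cy} L{x1:.2f},{y1:.2f} "
            f"A{radius},{radius} 0 {large} 1 {x2:.2f},{y2:.2f} Z"
        )
    return paths

# Example: three made-up categories with 6, 3 and 1 posts.
slices = pie_slices({"code": 6, "photos": 3, "reviews": 1})
svg = ('<svg viewBox="0 0 32 32">'
       + "".join(f'<path d="{p}"/>' for p in slices)
       + "</svg>")
```

Wrap each path in a fill color from the palette and you have a favicon that redraws itself as the category mix shifts.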
I started with an online palette tool, using International Orange as one end of the range and its color wheel complement to pin the other. This looked terrible. I then had a long conversation with ChatGPT 4 and told it what I liked and didn't like about each palette it came up with. I was pretty high maintenance but the AI was patient and I'm pretty happy with the color scheme we ended up with.
I'd like to make a time lapse of the moment when fog enters the Golden Gate and flows under the Golden Gate Bridge. It's surprisingly hard to know when conditions will be just right though. Often the weather is pleasant at my house while the fog is sneaking through and there is very little chance of me checking a webcam or satellite image. I decided to fix this about a year ago and started collecting data. The best bet seemed to be GOES-West CONUS - Band 2 which is a high resolution daylight satellite image that shows clouds and fog. I put together a Google Apps Script project to save an hourly snapshot and left it running. Here's a video of the data so far, zoomed in for an HD aspect ratio and scaled up a bit:
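The Apps Script project boils down to "fetch an image on a timer and save it with a timestamped name". Here's a rough Python equivalent of that loop body; the URL is a placeholder, not the actual GOES-West endpoint, and the naming scheme is just one reasonable choice:

```python
from datetime import datetime, timezone
from urllib.request import urlopen

# Placeholder; substitute the real GOES-West CONUS Band 2 image URL.
IMAGE_URL = "https://example.com/goes-west-band2-latest.jpg"

def snapshot_name(now=None):
    """Timestamped filename for one hourly frame, e.g. goes_2023-03-12_23.jpg."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("goes_%Y-%m-%d_%H.jpg")

def save_snapshot(url=IMAGE_URL):
    """Fetch the current satellite image and save it to disk.

    Intended to be called from an hourly scheduler/trigger.
    """
    name = snapshot_name()
    with urlopen(url) as resp, open(name, "wb") as out:
        out.write(resp.read())
    return name
```

One hour of resolution turned out to be enough to catch the fog arriving, and a year of these frames is what the video above is built from.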
It's pretty obvious to me when conditions are just right. Could an ML model learn that this was about to happen from an image that was three hours older?
The first step was dividing thousands of images into two classes - frames where the fog would be perfect in three hours and frames where this was not going to happen. I built a little WPF tool to label the data (I don't use WPF often these days, and every time I do I marvel that the Image control's defaults won't actually show the image, FFS). This had the potential to be tedious so I built in some heuristics to flag likely candidates and then knocked out the false positives. Because the satellite images include clouds there is often white in the Golden Gate that is cloud cover rather than fog. At the end of the process I had two subfolders full of images to work with.
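The "flag likely candidates" heuristic can be as simple as counting bright pixels inside a crop of the strait. A Python sketch of that idea - the crop box and thresholds here are invented for illustration, not my actual values:

```python
# Hypothetical crop of the Golden Gate strait, in pixel coordinates.
GATE_BOX = ((200, 240), (310, 370))   # (row range, column range)
BRIGHTNESS = 200                      # grayscale values above this count as "white"
MIN_WHITE_FRACTION = 0.3              # how much of the crop must be white

def likely_fog(frame):
    """Flag a frame (2D list of grayscale values) as a labeling candidate
    when enough of the strait is white. Cloud cover also trips this, so
    flagged frames still get a human pass to knock out false positives.
    """
    (r0, r1), (c0, c1) = GATE_BOX
    pixels = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    white = sum(p > BRIGHTNESS for p in pixels) / len(pixels)
    return white >= MIN_WHITE_FRACTION

# Example: a dark 480x640 frame with a bright patch over the strait.
frame = [[0] * 640 for _ in range(480)]
for r in range(200, 240):
    for c in range(310, 370):
        frame[r][c] = 255
```

A crude filter like this can't tell fog from cloud, which is exactly why the human pass over the flagged frames still matters.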
My goal this weekend was to get something working, and then refine every few months as I get more data. Right now I have 18 images that are in the Fog class and 7,539 that are NoFog. I also wanted this running on my blog, which is .NET 4.8 and will stay that way until I get a couple of weeks of forced bed rest. ML.NET says that it's based on .NET Standard and so should run anywhere.
Having local AutoML is very cool once you get it working. For large datasets this might not be a great option, but not having to wrangle with the cloud was also very appealing for this project.
Getting GPU training configured involved many gigabytes of installs. Get the latest Visual Studio 2022. Get the latest ML.NET model builder. Sign up for an NVIDIA developer account and install terrifyingly old and specific versions of CUDA and cuDNN. This last part was the worst because the CUDA installer wanted to downgrade my graphics driver, warned directly that this would cause problems and then claimed that it couldn't find a supported version of Visual Studio. I nervously unchecked everything that was already installed, and so far model builder has run fine and I don't seem to have caused any driver problems.
For image classification settings you can choose micro-accuracy (the default), macro-accuracy, logarithmic loss, or logarithmic loss reduction. Micro-accuracy is based on the contribution of all classes and unsurprisingly it's useless in this case, as just predicting 'no' works very well overall. Macro-accuracy is the average of the per-class accuracies and this produced reasonable results for me. Possibly too good; I probably have some overfitting and will spend some time on that soon.
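To see why micro-accuracy is useless here, consider a toy version of my dataset and a model that always predicts NoFog. A quick sketch of both metrics (simplified to the two-class case):

```python
def micro_accuracy(y_true, y_pred):
    """Fraction of all predictions that are correct (every sample counts equally)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_accuracy(y_true, y_pred):
    """Average of per-class accuracies (every class counts equally)."""
    classes = set(y_true)
    per_class = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        per_class.append(sum(y_true[i] == y_pred[i] for i in idx) / len(idx))
    return sum(per_class) / len(per_class)

# 18 Fog frames drowned out by 7,539 NoFog frames; model always says NoFog.
y_true = ["Fog"] * 18 + ["NoFog"] * 7539
y_pred = ["NoFog"] * len(y_true)

print(micro_accuracy(y_true, y_pred))  # ~0.998: looks great, learned nothing
print(macro_accuracy(y_true, y_pred))  # 0.5: exposes the useless Fog class
```

The lazy model scores 99.8% micro-accuracy but only 50% macro-accuracy, which is why the macro metric is the one that steered training somewhere useful.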
After training, Model Builder has an Evaluate tab which is pretty worthless, at least for this model and use case. You can spot check the prediction for specific images, and then there is one overall number for the performance of the model. I'm used to looking at precision and recall and it looks like I'll have to build some separate tooling to get those. Hopefully this will improve in future versions.
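The separate tooling doesn't need to be much: run the saved model over a labeled holdout set, tally up the confusion counts, and derive the metrics. The metric arithmetic itself is just a few lines; sketched in Python here (the labels match my folder names, the rest is generic):

```python
def binary_metrics(y_true, y_pred, positive="Fog"):
    """Precision, recall and F1 for the positive class from parallel label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0   # of the alarms, how many were real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of the real events, how many we caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1
```

With a class this rare, precision and recall for Fog are the only numbers that tell you anything; the single overall score in the Evaluate tab hides them completely.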
At this point I have a .NET 6 console application that can make plausible looking predictions. Overall I'm very impressed with how easy it was to get this far.
Integrating with my blog though was very sad. After a lot of NuGet'ing and Googling I came to realize that ML.NET will not play nice with .NET 4.8, at least for image classification. Having dared to anger the NuGet gods I did a git reset --hard and called out to a new .NET 6 process to handle the classification. For my application I'm only running the prediction once per hour so I'm not bothered by performance. That .NET Standard claim proved to be unhelpful and I could have used just about anything.
The model is now running hourly. I have put up a dedicated page, Golden Gate Fog Prediction, with the latest forecast and plan to improve this over time. If this would be a useful tool for you please leave a comment below (right now it emails me when there is a positive prediction, it could potentially email a list of people).
Updated 2023-03-12 23:24:
After building some tooling to quantify this first model I have some hard metrics to add. Precision is 23%. This means there is a high rate of false positives. Recall is 78%. This means that when there really is fog the model does a pretty good job of predicting it. Overall the f1 score is 35% which is not great. In practice the model doesn't miss the condition I'm trying to detect often but it will send you out only to be disappointed most of the time. I'm not that surprised given how few positive cases I had to work with so far. My next steps are collecting more training data and looking more carefully at the labeling process to make sure I'm not missing some reasonable positive cases.
This is a 4k sequel to the World Time Lapse movie I made many years ago. It also uses webcams from the Catfood WebCamSaver database. I used a Google Apps Script project to save frames, upscaled to 4K using Topaz Gigapixel AI, turned the upscaled frames into movies with ffmpeg, and finally edited the highlights together with DaVinci Resolve.
Open AI just dropped a pretty remarkable blog post on their roadmap for not destroying civilization with their imminent artificial general intelligence (AGI):
"As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like."
Now, I'm around 98% sure that Open AI mostly answers the question: What if we allocated unlimited resources to building a better auto-complete? ChatGPT is an amazing tool but it's amazing at guessing which word (token) is likely to appear next. Quite possibly their blog post is just an exercise in anchoring - if they're 95% of the way to AGI then GPT4 must be pretty amazing and therefore worth a lot of money. If everyone realized that they're more like 2% of the way there, and the next 1% is going to be exponentially difficult, then some of the froth would blow off.
Their ideas for keeping us safe are a little disturbing:
"We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important."
Given the lack of transparency around the inner workings of ML models, and the lack of knowledge around what intelligence even looks like, this is a pretty risible idea. And:
"Finally, we think it’s important that major world governments have insight about training runs above a certain scale."
We are facing down the prospect of a second Trump term while the UK has a Prime Minister who thinks that a homeless person might be 'in business'.
The most concerning part for me is:
"...we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access."
Creating AGI would be an amazing and terrifying accomplishment. Treating it as a slave feels like the most surefire way to usher in the most terrifying possible consequences, for us and for the AGIs.
Full disclosure: I use Open AI embeddings for related posts and site search. The words on this blog are my own though. I do occasionally generate a post image using Stable Diffusion like the rather strange one above.
We did this four mile out and back hike during a small gap between record-breaking winter storms. There was snow, fog, mud, many ordinary cows and one very furious one. The first waterfall, Ravine Falls, was very pretty and the eponymous Phantom Falls lived up to its name. Will try again sometime. This is close to Oroville, California in the North Table Mountain Ecological Reserve.