"One in which we let groups of randomly selected citizens actually deliberate and govern. One in which we trust deliberation and diversity, not elections and political parties, to shape our ideas and to restrain our worst impulses."
This is very similar to what I've called legislative service, where a random jury of citizens would replace the Senate. In my vision you still have elected representatives who propose legislation, and the panel of citizens acts to approve or deny. In open democracy you retain the benefit of a random selection of citizens, presumably immune to corruption, but they debate and propose laws as well. That's the gist I got from the interview; there is a book as well, which I will read at some point.
Ezra raises some good objections, like voters feeling alienated from the decisions of a panel that they didn't elect (less of an issue for legislative service than open democracy, I think) and also the role of experts in the system (lobbyists as a positive force). I think he gets it wrong on California though:
"We have a pretty robust proposition process here. And I think the broad view is that it has been captured. Special interests get whatever they want on it whenever they want."
The problem is that Uber (or whoever) can pour money into marketing their proposition to the point where you feel you'd be letting down the puppy-saving firefighters if you vote against it (I'm possibly mixing up my ads here). With an adversarial, jury-style system you'd at least have a group of citizens looking at the actual pros and cons.
The interview is worth a listen, and I'll report back on the book when I read it.
The National Weather Service updated their weather radar API, and the weather radar layer has changed a bit as a result. You can now enter one or more (comma separated) weather station IDs and Earth will show one hour precipitation for all of them. You used to be limited to a single station, but with more display options for the radar layer. Let me know if you love or hate the new version.
4.10 also includes the latest 2021a time zone database.
(I'm sure there are great reasons for it, but the 'new' NWS API is an XML document per station that links to an HTML folder listing of images, where you get to enjoy parsing out the latest entry only to download a TINY GZIPPED TIFF file FFS).
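For the curious, the scavenger hunt above sketches out roughly like this. The href pattern, the timestamped filenames, and the idea that they sort lexicographically are all my assumptions, not documented NWS behaviour:

```python
import gzip
import re

def latest_image(listing_html: str) -> str:
    """Pick the newest .tif link out of an HTML folder listing, assuming
    timestamped filenames that sort lexicographically (an assumption)."""
    names = re.findall(r'href="([^"]+\.tiff?)"', listing_html)
    if not names:
        raise ValueError("no TIFF links found in listing")
    return max(names)

# After downloading the winning file, the payload still needs decompressing:
# image_bytes = gzip.decompress(downloaded_bytes)
```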
For no good reason I downloaded my gas and electricity consumption data by day for the last couple of years.
The electricity trend is unsurprising. At the start of the pandemic it jumps up and stays up. With work and school from home we're running four computers non-stop, burning lights and (the horror) printing things. Overall we used 24% more electricity in 2020.
Gas on the other hand is pretty flat. There are some different peaks at the start and end of the year, but our total gas consumption increased by 0.08%. This doesn't make any sense to me. Being at home doesn't make much of a difference to laundry but it should have had a big impact on everything else. The heating has been on way more, we're cooking breakfasts and lunches that would have occurred out of the house in 2019 and we must be using more hot water as well.
There is one strange difference between how electricity and gas are metered. Fractional kWh are distributed randomly between .00 and .99 as you'd expect. Fractional therms are totally different - we're apparently likely to use 1.02 or 2.03 therms but never 1.50. This feels like it must be some sort of rounding or other billing oddness but I can't find any reasonable explanation despite asking Google three different ways.
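The check itself is simple enough. Here's a sketch of what I mean, with made-up sample readings standing in for the downloaded data:

```python
import math
from collections import Counter

def fractional_histogram(readings):
    """Count readings by their fractional part (rounded to 2 decimal places),
    to see whether values like .02 and .03 dominate while .50 never appears."""
    return Counter(round(r - math.floor(r), 2) for r in readings)

# Made-up therm readings showing the pattern described above
readings = [1.02, 2.03, 1.02, 3.02, 2.03]
print(fractional_histogram(readings))  # Counter({0.02: 3, 0.03: 2})
```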
In a move that I might come to bitterly regret I have emailed PG&E to see if they can explain it. I'll update this post if I hear back. Or if you're a therm metering expert please leave a comment!
Updated 2021-02-20 13:51:
"Thank you for contacting our Customer Service Center. Gas usage is registered by recording therms usage. If you view your daily usage online, you will see that therms are only registered in whole units. The only place that you will see therms not as whole units is when you review the average daily usage. The pandemic started in March 2020 and since then your gas usage is up slightly versus previous years. Most customers will see a larger increase in electric usage versus gas usage when staying home more than normal. The majority of customers set the temperatures of their heaters to very similar temperatures year over year and your heater will work to keep your house at that temperature whether you are home or not at home."
So the fractional therms are some sort of odd rounding on the downloaded data. Fair enough.
The majority of customers use the same temperature setting? Really? So that might be a good explanation if you constantly heat your house to the same temperature, but I know for sure that isn't us. We have a Nest Learning Thermostat and as I've previously reported this doesn't so much learn as just constantly turn the heating off. So staying warm is a constant battle with the thing.
Maybe the difference is that the pandemic started around Spring when San Francisco is warm enough to not need much heating. I'll look again when I can just compare winter vs winter in a couple of months.
Photo sorter has been updated to skip metadata when comparing JPEG files.
I've been picking up some duplicates when I have both a local copy and a version downloaded from Google Photos. Google Photos strips some metadata, so the files look different even though the photo is the same. If you've used Photo Sorter before you'll need to run it over everything again to catch any remaining duplicates.
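The core idea can be sketched as hashing a JPEG's bytes with the metadata segments removed, so two copies that differ only in EXIF/XMP data compare equal. This is a minimal illustration, not Photo Sorter's actual code:

```python
import hashlib

def image_hash(data: bytes) -> str:
    """SHA-256 of a JPEG with APP0-APP15 and COM segments stripped, so files
    that differ only in metadata hash identically."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, copy the rest
            out += data[i:]
            break
        # Segment length field counts itself plus the payload
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + length]
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):  # skip APPn and COM
            out += segment
        i += 2 + length
    return hashlib.sha256(bytes(out)).hexdigest()
```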
TensorFlow Hub has a great sample for transferring the style of one image to another. You might have seen Munch's The Scream applied to a turtle, or Hokusai's The Great Wave off Kanagawa to the Golden Gate Bridge. It's fun to play with and I wondered how it would work for a timelapse video. I just posted my first attempt, four different shots of San Francisco and I think it turned out pretty well.
The four sequences were all shot on an A7C: one second exposures, an ND3 filter, and aperture/ISO set as needed to hit one thousandth of a second before fitting the filter. Here's an example shot:
I didn't want The Scream or The Wave, so I asked my kids to pick two random pieces of art each so I could have a different style for each sequence:
The style transfer network wants a 256x256 style image so I cropped and resized the art as needed.
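The crop-then-resize prep boils down to simple box arithmetic. A sketch (the actual crop and resize would go through PIL's `Image.crop` and `Image.resize`; the sizes here are just examples):

```python
def center_square_box(width: int, height: int):
    """Return the (left, top, right, bottom) box of the largest centered
    square, ready to pass to PIL's Image.crop before resizing to 256x256."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

print(center_square_box(1200, 800))  # (200, 0, 1000, 800)
```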
The sample code pulls images from URLs. I modified it to connect to Google Drive, iterate through a source folder of images and write the transformed images to a different folder. I'm running this in Google Colab, which has the advantage that you get to use Google's GPUs and the disadvantage that it will disconnect, time out, run out of memory, etc. To work around this the modified code can be run as many times as needed to get through all the files and will only process input images that don't already exist in the output folder. Here's a gist of my adapted colab notebook:
One final problem is that the style transfer example produces square output images. I set the output to 1920x1920 and then cropped an HD frame out of the middle of each image to get around this.
Here's a more detailed workflow for the project:
I usually shoot timelapse with a neutral density filter to get some nice motion blur. When I shot this sequence it was the first time I'd used my filter on a new camera/lens and screwing in the filter threw off the focus enough to ruin the shots. Lesson learned - on this camera I need to nail the focus after attaching the filter. As I've been meaning to try style transfer for timelapse I decided to use this slightly bad sequence as the input. Generally for timelapse I shoot manual / manual focus, fairly wide aperture and ISO 100 unless I need to bump this up a bit to get to a 1 second exposure with the filter.
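As an aside, the exposure numbers above work out if ND3 here means an optical density of 3.0 (my assumption), i.e. a 1000x filter:

```python
import math

# Assuming "ND3" means optical density 3.0, the filter cuts light by a
# factor of 10**3 = 1000x (about 10 stops), which is why metering at
# 1/1000 s without the filter lands on a 1 s exposure with it.
density = 3.0
factor = 10 ** density              # 1000x light reduction
stops = math.log2(factor)           # ~9.97 stops
shutter_with_filter = (1 / 1000) * factor
print(shutter_with_filter)          # 1.0 second
```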
After shooting I use LRTimelapse and Lightroom 6 to edit the RAW photos. LRTimelapse reduces flicker and works well for applying a pan and/or zoom during processing as well. For this project I edited before applying the style transfer. The style transfer network preserves detail very well and then goes crazy in areas like the sky. Rather than zooming into those artifacts I wanted to keep them constant which I think gives a greater sense of depth as you zoom in or out.
Once the sequence is exported from Lightroom I cancel out of the LRTimelapse render window and switch to Google Colab. Copy the rendered sequence to the input folder and the desired style image and then run the notebook to process. If it misbehaves then Runtime -> Restart and run all is your friend.
To get to video I use ffmpeg to render each sequence. For this project that meant 24 frames per second, cropping a 1920x1080 frame from each of the 1920x1920 style transfer images.
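The render step can be sketched as assembling the ffmpeg arguments for a numbered image sequence, with a centered 1920x1080 crop out of each square frame. The input pattern and encoder settings are examples of the sort of thing I use, and the command would be run with `subprocess.run(cmd)`:

```python
def ffmpeg_cmd(pattern: str, out: str, fps: int = 24,
               src: int = 1920, w: int = 1920, h: int = 1080):
    """Build an ffmpeg command that renders an image sequence to video,
    cropping a centered w x h frame from each src x src input image."""
    y = (src - h) // 2  # vertical offset for a vertically centered crop
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-vf", f"crop={w}:{h}:0:{y}",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

print(ffmpeg_cmd("frame_%05d.jpg", "sequence.mp4")[6])  # crop=1920:1080:0:420
```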
Then I edit the sequences together in DaVinci Resolve. I added a 2 second cross dissolve between each sequence and a small fade at the very beginning and end.
Finally, music. I use Filmstro Pro and for this video I used the track Durian.