Photo Sorter has been updated to skip metadata when comparing JPEG files.
I've been picking up duplicates when I have both a local copy and a version downloaded from Google Photos. Google Photos strips some metadata, so the files look different even though the photo is the same. If you've used Photo Sorter before, you'll need to run it over everything again to knock out any remaining copies.
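The underlying idea is to hash only a JPEG's image data and ignore its metadata segments. Here's a minimal sketch of that comparison (an illustration of the approach, not Photo Sorter's actual code):

```python
import hashlib

# APP0-APP15 (0xE0-0xEF) and COM (0xFE) segments hold metadata
# (EXIF, thumbnails, comments) rather than image data.
METADATA_MARKERS = set(range(0xE0, 0xF0)) | {0xFE}

def image_digest(jpeg_bytes):
    """Hash a JPEG's image data while skipping metadata segments."""
    h = hashlib.sha256()
    i = 2  # skip the SOI marker (FF D8)
    while i < len(jpeg_bytes) - 1:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: everything from here on is image data
            h.update(jpeg_bytes[i:])
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker not in METADATA_MARKERS:
            h.update(jpeg_bytes[i:i + 2 + length])
        i += 2 + length
    return h.hexdigest()
```

Two files are then treated as duplicates when their digests match, even if Google Photos has removed or rewritten the EXIF and comment segments.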
Stormy #timelapse#sanfrancisco#storm#video Time lapse of clouds developing and a storm sweeping in over the Sunset District in San Francisco, California.
Updated on Saturday, February 19, 2022
Clouds develop and a storm sweeps in looking west over the Pacific from San Francisco.
Style Transfer for Time Lapse Photography #code#ml#tensorflow#drive#python#video How to apply the TensorFlow Hub style transfer to every frame in a timelapse video using Python and Google Drive.
Updated on Saturday, February 19, 2022
TensorFlow Hub has a great sample for transferring the style of one image to another. You might have seen Munch's The Scream applied to a turtle, or Hokusai's The Great Wave off Kanagawa to the Golden Gate Bridge. It's fun to play with, and I wondered how it would work for a timelapse video. I just posted my first attempt: four different shots of San Francisco, and I think it turned out pretty well.
The four sequences were all shot on an A7C: one-second exposures, an ND3.0 filter, and aperture/ISO set as needed to hit one thousandth of a second before fitting the filter (the filter's ten stops turn 1/1000s into roughly a one-second exposure). Here's an example shot:
I didn't want The Scream or The Wave, so I asked my kids to pick two random pieces of art each so I could have a different style for each sequence:
The style transfer network wants a 256x256 style image so I cropped and resized the art as needed.
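That prep step is just a center crop to a square followed by a resize. With Pillow it might look like this (the function name and paths are placeholders, not part of my actual workflow):

```python
from PIL import Image

def square_style_image(src_path, dst_path, size=256):
    """Center-crop an image to a square and resize it for the style network."""
    art = Image.open(src_path)
    side = min(art.size)                     # largest centered square that fits
    left = (art.width - side) // 2
    top = (art.height - side) // 2
    square = art.crop((left, top, left + side, top + side))
    square.resize((size, size)).save(dst_path)  # 256x256 for the network
```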
The sample code pulls images from URLs. I modified it to connect to Google Drive, iterate through a source folder of images, and write the transformed images to a different folder. I'm running this in Google Colab, which has the advantage that you get to use Google's GPUs and the disadvantage that it will disconnect, time out, run out of memory, etc. To work around this, the modified code can be run as many times as needed to get through all the files; it only processes input images that don't already exist in the output folder. Here's a gist of my adapted Colab notebook:
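The re-runnable part of that loop can be sketched like this (folder names and the `stylize` callable are placeholders; in the notebook, `stylize` wraps the TensorFlow Hub arbitrary-image-stylization model and the folders live on a mounted Google Drive):

```python
import os

def process_folder(in_dir, out_dir, stylize):
    """Run stylize(src, dst) on every file in in_dir whose output doesn't
    already exist, so the loop survives Colab disconnects and restarts."""
    os.makedirs(out_dir, exist_ok=True)
    processed = 0
    for name in sorted(os.listdir(in_dir)):
        dst = os.path.join(out_dir, name)
        if os.path.exists(dst):  # finished on a previous run; skip it
            continue
        stylize(os.path.join(in_dir, name), dst)
        processed += 1
    return processed
```

Re-running the cell after a timeout picks up where the last run left off, since finished frames are skipped.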
One final problem is that the style transfer example produces square output images. To get around this I set the output to 1920x1920 and then cropped an HD frame out of the middle of each image.
Here's a more detailed workflow for the project:
I usually shoot timelapse with a neutral density filter to get some nice motion blur. This sequence was the first time I'd used the filter on a new camera/lens, and screwing it in threw off the focus enough to ruin the shots. Lesson learned: on this camera I need to nail the focus after attaching the filter. Since I'd been meaning to try style transfer for timelapse, I decided to use this slightly bad sequence as the input. Generally for timelapse I shoot manual exposure with manual focus, a fairly wide aperture, and ISO 100, bumping it up a bit only if needed to get to a one-second exposure with the filter.
After shooting I use LRTimelapse and Lightroom 6 to edit the RAW photos. LRTimelapse reduces flicker and also works well for applying a pan and/or zoom during processing. For this project I edited before applying the style transfer. The style transfer network preserves detail very well and then goes crazy in areas like the sky. Rather than zooming into those artifacts, I wanted to keep them constant, which I think gives a greater sense of depth as you zoom in or out.
Once the sequence is exported from Lightroom I cancel out of the LRTimelapse render window and switch to Google Colab: copy the rendered sequence and the desired style image to the input folder, then run the notebook to process. If it misbehaves, Runtime -> Restart and run all is your friend.
To get to video I use ffmpeg to render each sequence, for this project at 24 frames per second, cropping a 1920x1080 frame from each of the 1920x1920 style transfer images.
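That render step might be assembled like this (the frame pattern, output name, and helper function are illustrative, not my exact command; the 420-pixel vertical offset centers a 1080-tall crop in the 1920-tall source, since (1920 - 1080) / 2 = 420):

```python
def render_cmd(pattern, out_file, fps=24):
    """Build an ffmpeg command that renders an image sequence to video,
    cropping a centered 1920x1080 frame from each 1920x1920 image."""
    return [
        "ffmpeg",
        "-framerate", str(fps),          # frames per second for the sequence
        "-i", pattern,                   # e.g. a numbered pattern like frames/%05d.jpg
        "-vf", "crop=1920:1080:0:420",   # crop=w:h:x:y, y = (1920 - 1080) // 2
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",           # widely compatible pixel format
        out_file,
    ]
```

The list can then be handed to `subprocess.run(render_cmd("frames/%05d.jpg", "sequence.mp4"), check=True)`.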
Then I use DaVinci Resolve to edit the sequences together, with a 2-second cross dissolve between each sequence and a small fade at the very beginning and end.
Finally, music. I use Filmstro Pro and for this video I used the track Durian.
San Francisco New Year's Eve Timelapse 2020 #timelapse#sanfrancisco#video San Francisco New Year's Eve Timelapse 2020 (style transfer neural network version).
Updated on Saturday, February 19, 2022
A timelapse of San Francisco on New Year's Eve 2020:
Shot from Treasure Island, Corona Heights Park, Fort Baker, and near Battery 129 in the Marin Headlands. Each sequence was transformed with a style transfer neural network (full details).