Posted on July 24, 2019 by Noon van der Silk | Back to recent posts

#### Scholarships for August Technical Deep Learning Workshop Close on Sunday!

Tags: announcement

Watch on Vimeo: a note on scholarships.

Ruth and I are very pleased to be offering scholarships for our workshop coming up on the 8th of August. We’ve already accepted six first-round scholarships, so there are still some places remaining.

We’re encouraging applications from anyone who has typically faced barriers in their career or learning in the tech industry. One of our big aims at Braneshop is to increase representation in the AI industry, and this is one way we’re working towards that goal.

You can find the application form here: Scholarship for the 6 Week Technical Deep Learning Workshop, and of course more details about the workshop itself here: 6 Week Workshop.

We’ve also got a scholarship for the AI For Leadership Workshop, coming up in September. This one is important to us as well: in order to see change broadly throughout the industry, we need change in leadership positions.

Here at Braneshop we’re creating a welcoming and supportive community for everyone to be involved in AI. We hope you’ll join us!

#### Technical details about the video …

I had a lot of fun making the video above. You’ll notice, if you watch it, that my person is cut out, and a new weird background is inserted. I did this using deep-lab from TensorFlow. If you’re feeling adventurous you can try out their demo for yourself, on Google Colab.
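The core trick is simple once you have a per-pixel “person” mask from a segmentation network like deep-lab: keep the frame’s pixels wherever the mask says “person”, and take the new background everywhere else. Here’s a minimal sketch of that compositing step in Python with NumPy (the function name and the tiny arrays are illustrative, not from the actual pipeline):

```python
import numpy as np

def composite_with_mask(frame, background, mask):
    """Replace everything outside the person mask with a new background.

    frame, background: HxWx3 uint8 arrays of the same shape.
    mask: HxW boolean array, True where the network labelled a pixel "person".
    """
    # Add a trailing axis so the 2-D mask broadcasts over the colour channels.
    mask3 = mask[..., None]
    return np.where(mask3, frame, background).astype(np.uint8)

# Tiny illustrative example: a 2x2 "frame" whose left column is the person.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[True, False], [True, False]])
out = composite_with_mask(frame, background, mask)
```

In the video’s case the same operation was done per-frame with ImageMagick (see footnote 2), but the logic is identical: a boolean mask selecting between two images.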

With that in hand, I worked out how to extract all the frames of the video as individual images, then ran that network over each of them. There were ~1,600 images, but it only took a few minutes on my laptop, which doesn’t have a GPU. After that, I made the little background animation[1], overlayed the images, and stitched them back together. I did all this using ffmpeg and ImageMagick[2].

Also, while looking around and eventually settling on deep-lab, I found a TensorFlow.js demo project that performs person-segmentation in the browser! It’s not amazingly high quality, but it runs in the browser! Pretty cool.

1. I used my “cppn-cli” program to do this:

```shell
# Having a pre-existing checkpoint
cppn existing sample --checkpoint_dir logs/cf9ecd76 --width 288 --height 513 \
  --out out/vid-bg --z_steps 1614
```

2. Roughly, here are the commands I used (ignoring the ones I used to generate the animations):

```shell
# Extract the frames of the video as individual images
ffmpeg -i original.mp4 images/img%05d.jpg -hide_banner

# In the "vid-bg" folder: composite each masked frame over its background
for i in *.png; do
  convert "$i" ~/dev/deep-lab/masked-images/img"$i" \
    -gravity center -compose over -composite ~/dev/deep-lab/with-bg/"$i".jpg
done

# Learn the framerate of the original video
ffprobe -v 0 -of csv=p=0 -select_streams v:0 -show_entries stream=r_frame_rate \
  original.mp4

# Make a video from the images
ffmpeg -framerate 29.5 -pattern_type glob -i 'images/*.jpg' \
  -c:v libx264 -pix_fmt yuv420p out.mp4

# Copy the audio track from one video (named here "audio.mp4") into another
ffmpeg -i video.mp4 -i audio.mp4 -c copy -map 0:0 -map 1:1 -shortest out.mp4
```