Eurovision Song Contest 2019 as MoviePrint

Ever wondered what the Eurovision Song Contest 2019 would look like as a MoviePrint? Admittedly, I have not optimised MoviePrint to handle clips of this size (more than 4 hours and over 6000 shots), but it works. If you are patient, that is. Very patient 🙂 Want to try it out with your own movies? MoviePrint is free and available for Windows and Mac.

Here in a more horizontal timeline view
And here in grid view (interval based)
And here in grid view (shot detection based)

Illusory Motion Reproduced by Deep Neural Networks

Predictive coding assumes that the brain’s internal models (which are acquired through learning) predict the visual world at all times, and that errors between the prediction and the actual sensory input further refine the internal models. It is exciting and scary at the same time that deep neural networks can even be trained to perceive the same illusory motion created by static optical illusions.

Computational Video Editing for Dialogue-Driven Scenes

Our system starts by segmenting the input script into lines of dialogue and then splitting each input take into a sequence of clips time-aligned with each line. Next, it labels the script and the clips with high-level structural information (e.g., emotional sentiment of dialogue, camera framing of clip, etc.). Combined with knowledge of film editing idioms, this delivers an interesting approach to producing a rough cut.
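The pipeline described above can be sketched in a few lines. This is not the paper's implementation; the data structures, the single "vary the framing" rule standing in for real editing idioms, and all names are illustrative assumptions:

```python
# Hypothetical sketch: each dialogue line is paired with its time-aligned
# candidate clips, then one clip per line is chosen while avoiding
# back-to-back identical framings (a stand-in for real editing idioms).
from dataclasses import dataclass

@dataclass
class Clip:
    take: str       # which take the clip was cut from
    framing: str    # high-level label, e.g. "close-up", "medium", "wide"

def rough_cut(lines_to_clips):
    """Pick one clip per dialogue line, preferring a framing that
    differs from the previously chosen clip."""
    cut, prev = [], None
    for line, candidates in lines_to_clips:
        choice = next((c for c in candidates
                       if prev is None or c.framing != prev.framing),
                      candidates[0])  # fall back to the first candidate
        cut.append((line, choice))
        prev = choice
    return cut

script = [
    ("Where were you last night?", [Clip("take1", "close-up"), Clip("take2", "medium")]),
    ("Out. Walking.",              [Clip("take1", "close-up"), Clip("take3", "wide")]),
]
for line, clip in rough_cut(script):
    print(line, "->", clip.take, clip.framing)
```

A real system would score many idioms at once (emphasise emotional peaks, start wide, avoid jump cuts) rather than apply a single greedy rule.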

Google's Appsperiments: Exploring the Potentials of Mobile Photography

They rely on object recognition, person segmentation, stylization algorithms, efficient image encoding and decoding technologies, and perhaps most importantly, fun! Storyboard, for example, is a wonderful demonstration of transforming videos into a single-page layout. Essentially similar to what I try to achieve with MoviePrint. Google has been faster again 🙂

Reverse image search engine in OpenCV

A common problem in managing large numbers of images is detecting near-duplicates. Using a library like OpenCV, which is widely available across platforms and languages, is a great way to detect them. Very relevant for my purposes. I hope I can eventually implement something like that.
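A common approach to near-duplicate detection is a perceptual hash: images that look alike get fingerprints that differ in only a few bits. As a minimal sketch, here is average hashing in pure Python, assuming the image has already been downscaled to an 8×8 grayscale grid (the sample data is made up; a real pipeline would do the downscaling with OpenCV):

```python
def average_hash(pixels):
    """Average hash: threshold each pixel of a small grayscale image
    against the mean, packing the results into a 64-bit fingerprint.
    `pixels` is an 8x8 grid of 0-255 values (already downscaled)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")

# two nearly identical 8x8 "images" differing in a single pixel
img_a = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
img_b = [row[:] for row in img_a]
img_b[0][0] = 255
print(hamming(average_hash(img_a), average_hash(img_b)))  # small distance
```

Comparing hashes with a small Hamming-distance threshold (e.g. ≤ 5) flags likely duplicates without comparing full images.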

Improving YouTube video thumbnails with deep neural nets

Video thumbnails are often the first things viewers see when they look for something interesting to watch. A strong, vibrant, and relevant thumbnail draws attention, gives viewers a quick preview of the content of the video, and helps them find content more easily. Better thumbnails lead to more clicks and views for video creators. An old article, but very relevant for MoviePrint.

To not forget

The other day I realised what drives my interest in making moving images easily perceivable. My memory generally works well, but it just needs a little lookup support. A trigger or an index helps me recall past events. That is what I would like to offer. An index to trigger your memory.

Use of AI to unlock video insights

Video Indexer enables you to extract visual and speech metadata from your videos, which can be used to build enhanced search experiences in your existing apps. This time it is Microsoft offering a service to easily extract insights from your videos, promising to make your content more discoverable. The AI delivers the following features:

For Audio
- Transcript
- Translation
- Speaker indexing
- Keywords
- Brand mentions
- Sentiment analysis
- Telephony audio support
- Transcript customization
- Voice activity detection

For Video
- Face detection
- Face identification
- Celebrity identification
- Visual text recognition
- Shot detection
- Keyframe extraction
- Content moderation
- Annotations

A Startup’s Neural Network Can Understand Video

The software created a timeline with graph lines summarizing when different objects or types of scene were detected. It showed exactly when “snow” and “mountains” occurred individually and together. The software can analyze video faster than a human could watch it; in the demonstration, the 3.5-minute clip was processed in just 10 seconds. I wonder if I will ever get that far 🙂
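A timeline like the one described can be built by collapsing per-second label detections into intervals. A small sketch, with made-up labels standing in for a real classifier's output:

```python
def label_intervals(detections):
    """Collapse a per-second sequence of detected label sets into
    (label, start, end) intervals suitable for a timeline view."""
    intervals, open_at = [], {}
    for t, labels in enumerate(detections):
        for label in labels:
            open_at.setdefault(label, t)       # interval starts here
        for label in list(open_at):
            if label not in labels:            # interval just ended
                intervals.append((label, open_at.pop(label), t))
    for label, start in open_at.items():       # close anything still open
        intervals.append((label, start, len(detections)))
    return sorted(intervals)

# per-second detections from a hypothetical classifier
frames = [{"snow"}, {"snow", "mountains"}, {"mountains"}, set(), {"snow"}]
print(label_intervals(frames))
# [('mountains', 1, 3), ('snow', 0, 2), ('snow', 4, 5)]
```

Each interval can then be drawn as a bar on its label's row, showing when "snow" and "mountains" occur individually and together.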

When it all started

It is now over 10 years ago that I wrote my master's thesis in architecture. Even back then I worked as a motion designer and was interested in film. While trying to find a related topic, I ended up writing about how film architecture integrates the viewer into the film space. As an example I analysed the movie GATTACA by Andrew Niccol and took around 1500 screenshots. When placing them in a grid, I liked the fact that you could get a feeling for the different shots, the colours used and a sense of timing. All this got me started thinking about different ways to represent movies and reveal even more detailed aspects of a film's mood, its content or its pace.