International Conference on MultiMedia Modeling (MMM), 2014

Author: Esra Acar

I recently attended the International Conference on MultiMedia Modeling (MMM) in Dublin, Ireland. The conference was held at the Guinness Storehouse, which made for a really nice venue!

MMM is an international conference that brings together people from both academia and industry to share new ideas, original research results and practical development experiences from all MMM-related areas, such as multimedia content analysis, signal processing, and multimedia applications and services.

I presented our paper on affective video analysis, titled Understanding Affective Content of Music Videos Through Learned Representations, in which we present our method for the affective content analysis of music videos using deep learning techniques.
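
As a rough illustration of what "learned representations" means here, below is a minimal sketch of a small convolutional network that maps frame-level audio features of a clip to valence/arousal predictions. The architecture, layer sizes and input shapes are assumptions made for illustration only, not the exact model from the paper.

```python
# Illustrative sketch (not the paper's actual architecture): learn a mid-level
# representation from frame-level audio features and regress valence/arousal.
import torch
import torch.nn as nn

class AffectiveRepresentationNet(nn.Module):
    def __init__(self, n_features=40, n_frames=100):
        super().__init__()
        # 1-D convolutions over time learn a clip-level representation
        # from frame-level features (e.g., MFCCs stacked over time).
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> one vector per clip
        )
        # Regress two affective dimensions (valence, arousal).
        self.head = nn.Linear(128, 2)

    def forward(self, x):              # x: (batch, n_features, n_frames)
        z = self.encoder(x).squeeze(-1)
        return self.head(z)

model = AffectiveRepresentationNet()
dummy_clips = torch.randn(8, 40, 100)  # 8 clips, 40 features, 100 frames (made-up sizes)
print(model(dummy_clips).shape)        # torch.Size([8, 2])
```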

There were two keynotes at the conference. The first was given by Dr. Anil Kokaram, a Tech Lead in the Transcoder Group at YouTube, who was also awarded an Oscar for the development of visual effects software for the film industry. His talk was about video ingest challenges at YouTube. The second keynote was given by Sarah Massengale, Communications Manager of Narrative, a multimedia tech startup. Her talk was about lifelogging: she described how they designed the Narrative Clip camera and a companion application that makes our memories searchable and shareable.

There were many high-quality papers and demos at the conference; I'd like to mention some of them here.

  • Learning to Infer Public Emotions from Large-scale Networked Voice Data: The authors infer public emotions from large-scale networked voice data using both acoustic features (e.g., energy, F0, MFCC, LFPC) and correlation features (e.g., individual consistency, time associativity, environment similarity). A small feature-extraction sketch follows this list.
  • FoodCam: A Real-time Mobile Food Recognition System employing Fisher Vector (Best Demo Award): This was a demo paper, and the authors won the Best Demo Award. They demonstrate a mobile food recognition system that uses Fisher Vector encoding and linear one-vs-rest SVMs to help users record their food habits (see the Fisher Vector sketch after this list). A prototype is provided as an Android-based smartphone application, available at http://foodcam.mobi/ .
  • Affect Recognition using Magnitude Models of Motion: This paper was about recognizing the affective state of a single person from video streams. The authors build a model on motion-related features computed from a set of interest points tracked using optical flow, and then predict the values of the affective dimensions with an SVM (see the optical-flow sketch after this list).
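
To make the first paper's feature list a bit more concrete, here is a minimal sketch of extracting a few of the mentioned acoustic features (energy, F0, MFCC) with the librosa library. The synthetic signal and all parameter values are my own assumptions; the paper's LFPC and correlation features are not covered here.

```python
# Sketch: extract frame-level energy, MFCC and F0, then summarise them
# into a simple clip-level descriptor. All parameters are illustrative.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)    # stand-in for a voice clip

energy = librosa.feature.rms(y=y)                    # frame-level energy, shape (1, T)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, T)
f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)        # fundamental frequency per frame

# Simple clip-level descriptor: mean and std of each frame-level feature.
features = np.hstack([
    energy.mean(), energy.std(),
    mfcc.mean(axis=1), mfcc.std(axis=1),
    np.nanmean(f0), np.nanstd(f0),
])
print(features.shape)   # (30,)
```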
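
For the FoodCam demo, the core recipe of Fisher Vector encoding plus linear one-vs-rest SVMs can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: it uses only first-order (mean) Fisher Vector statistics and randomly generated stand-in descriptors.

```python
# Simplified Fisher Vector + one-vs-rest linear SVM classification sketch.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """Simplified Fisher Vector: first-order (mean) statistics only."""
    gamma = gmm.predict_proba(descriptors)              # (N, K) soft assignments
    n = descriptors.shape[0]
    diff = descriptors[:, None, :] - gmm.means_[None]   # (N, K, D)
    sigma = np.sqrt(gmm.covariances_)                   # diagonal std devs, (K, D)
    fv = (gamma[..., None] * diff / sigma).sum(axis=0)  # (K, D)
    fv /= n * np.sqrt(gmm.weights_)[:, None]
    fv = np.sign(fv) * np.sqrt(np.abs(fv))              # power normalisation
    return fv.ravel() / (np.linalg.norm(fv) + 1e-12)    # L2 normalisation

# Randomly generated data standing in for local descriptors of food photos.
rng = np.random.default_rng(0)
train_descriptors = [rng.normal(size=(200, 32)) for _ in range(30)]  # 30 images
train_labels = rng.integers(0, 5, size=30)                           # 5 food classes

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(np.vstack(train_descriptors))

X = np.array([fisher_vector(d, gmm) for d in train_descriptors])
clf = OneVsRestClassifier(LinearSVC()).fit(X, train_labels)
print(clf.predict(X[:3]))
```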
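
And for the affect recognition paper, the general idea of turning tracked-point motion into features for an SVM might look something like the sketch below. The synthetic frames, the magnitude-histogram features and the regressor choice are all assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: track interest points with optical flow, summarise their motion
# magnitudes as a feature vector, and feed such vectors to an SVM.
import cv2
import numpy as np
from sklearn.svm import SVR

def motion_magnitude_features(prev_gray, next_gray, n_bins=8):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return np.zeros(n_bins)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    moved = (next_pts - pts)[status.ravel() == 1]
    magnitudes = np.linalg.norm(moved.reshape(-1, 2), axis=1)
    hist, _ = np.histogram(magnitudes, bins=n_bins, range=(0, 20))
    return hist / (hist.sum() + 1e-12)   # normalised magnitude histogram

# Random frames standing in for a video of a person.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160), dtype=np.uint8) for _ in range(10)]
X = np.array([motion_magnitude_features(a, b) for a, b in zip(frames, frames[1:])])
y = rng.uniform(-1, 1, size=len(X))      # made-up affective labels per frame pair

svm = SVR(kernel="rbf").fit(X, y)        # predict an affective dimension value
print(svm.predict(X[:2]))
```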
