Monday, 22 June 2015

Introductory Post: What is OpenEMO?


[Image: Emotion tag cloud]


Hello Reader!

You must be wondering what this blog, "OpenEMO", is all about. If you guessed that it is some sort of Open Source project that deals with emotions, guess what? You are absolutely correct!

OpenEMO is an Open Source framework being developed at Red Hen Lab that will be able to recognize emotions from audio streams. Under the guidance of Prof. Steen of Red Hen, I will be attempting to build it over these next few weeks.

This Open Source project, hosted on GitHub, is meant to be a generic tool for emotion recognition from audio, much like the openEAR project. However, openEMO will first build modules aimed at detecting emotions in news broadcasts, and then move towards a more generic tool.

The ultimate aim of this project is to annotate news transcripts generated from the audio. For this, the audio should be free of advertisements, since we will not be measuring emotions over those. The transcripts also need to be force-aligned so that we can annotate them correctly based on the analysis of the audio stream. A rough idea of what such an annotated transcript segment might look like is sketched below.
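Just to make the target output concrete, here is a minimal sketch of one annotated, force-aligned transcript segment. The field names, example values, and emotion label are purely illustrative placeholders, not the project's final format:

```python
# Hypothetical sketch of one annotated, force-aligned transcript segment.
# Field names and the emotion label are illustrative, not the final format.
from dataclasses import dataclass

@dataclass
class AnnotatedSegment:
    start: float    # segment start time in seconds (from forced alignment)
    end: float      # segment end time in seconds
    speaker: str    # speaker label produced by diarization
    text: str       # force-aligned transcript text for this span
    emotion: str    # single emotion label predicted from the audio

# Example entry for one span of a news broadcast (values made up)
segment = AnnotatedSegment(
    start=12.4,
    end=18.9,
    speaker="SPK_01",
    text="The storm has displaced thousands of residents.",
    emotion="sadness",
)
print(segment)
```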

To articulate the thinking that went into this project when we began working on it, these are the questions that came up before we planned the workflow:
1. What is the set of emotions that we want to detect? Won't they be application-specific?
2. How do we decide on the unit (in terms of time/speaker) over which emotion must be measured?
3. Aren't there cases where multiple emotions can be expressed simultaneously? (e.g. anger and sarcasm together)

To get some insight into these questions, we decided to look into openEAR's implementation to understand which emotions it was built to recognize. We also agreed that the output of the diarization module (audio segments) will be the basic unit over which we associate an emotion. For now, we will report only the single best-fitting emotion per segment, and take up the problem of multiple simultaneous emotions once single-emotion detection works satisfactorily. A minimal sketch of this per-segment flow follows.
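The sketch below shows how such a per-segment flow could look, assuming a diarization step that yields (start, end, speaker) spans and some classifier that scores a fixed set of emotions. Every function name and the label set here are hypothetical placeholders, not the actual openEMO code:

```python
# Minimal sketch of the intended per-segment flow (placeholder names throughout).
from typing import Dict, List, Tuple

EMOTIONS = ["anger", "fear", "joy", "sadness", "neutral"]  # placeholder label set


def classify_segment(audio_path: str, start: float, end: float) -> Dict[str, float]:
    """Score each candidate emotion for one audio span (stub for illustration)."""
    # The real pipeline would extract acoustic features here (e.g. with an
    # openSMILE/openEAR-style extractor) and run a trained classifier.
    return {emotion: 0.0 for emotion in EMOTIONS}


def annotate(audio_path: str,
             segments: List[Tuple[float, float, str]]) -> List[Tuple[float, float, str, str]]:
    """Attach the single best-scoring emotion to every diarized segment."""
    annotated = []
    for start, end, speaker in segments:
        scores = classify_segment(audio_path, start, end)
        best = max(scores, key=scores.get)  # one emotion per segment, for now
        annotated.append((start, end, speaker, best))
    return annotated
```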

The workflow that I initially intended to implement can be found at this link, and the code for this project can be found here. With each new post, I will explain what work was done during a particular week or phase and how openEMO is progressing.

Do reach out by commenting on the posts if you have any more queries!


