Late fusion of individual engines for improved recognition of negative emotion in speech - learning vs. democratic vote
The fusion of multiple recognition engines is known to outperform individual ones, given sufficient independence of methods, models, and knowledge sources. We therefore investigate late fusion of different speech-based recognizers of emotion. Two generally different streams of information are considered: acoustics and linguistics, the latter fed by state-of-the-art automatic speech recognition. A total of five emotion recognition engines from different sites that provide heterogeneous output information are integrated by either a simple democratic vote or by learning "which predictor to trust when". By fusion, we are able to significantly outperform the best individual engine, as well as the best result reported so far on the recently introduced Emotion Challenge task.
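To make the two fusion strategies concrete, the following is a minimal sketch of (1) a democratic (majority) vote over the engines' discrete labels and (2) a learned meta-classifier that decides "which predictor to trust when". The five-engine setup, the NEG/IDL label names, the per-engine confidence scores, and the logistic-regression meta-learner are illustrative assumptions, not the paper's actual components.

```python
# Hedged sketch of the two late-fusion strategies: majority vote and a
# learned meta-classifier. Engine scores, labels, and the choice of
# logistic regression are assumptions for illustration only.
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression


def majority_vote(engine_labels):
    """Fuse one utterance's predictions by democratic vote.

    engine_labels: list of discrete labels, one per engine,
    e.g. ["NEG", "IDL", "NEG", "NEG", "IDL"].
    Ties are broken by first appearance order.
    """
    return Counter(engine_labels).most_common(1)[0][0]


def fit_meta_learner(engine_scores, gold_labels):
    """Learn 'which predictor to trust when' from held-out data.

    engine_scores: (n_utterances, n_engines) array of per-engine
    confidence scores for the negative-emotion class.
    gold_labels:   (n_utterances,) array of 0/1 reference labels.
    """
    clf = LogisticRegression()
    clf.fit(engine_scores, gold_labels)
    return clf


if __name__ == "__main__":
    # Toy data: five engines, binary NEG(1) vs. IDL(0) decisions.
    rng = np.random.default_rng(0)
    scores = rng.random((200, 5))
    gold = (scores.mean(axis=1) > 0.5).astype(int)

    print(majority_vote(["NEG", "IDL", "NEG", "NEG", "IDL"]))  # -> NEG
    meta = fit_meta_learner(scores, gold)
    print(meta.predict(scores[:3]))  # fused decisions for three utterances
```

The design point of the learned variant is that it can weight engines differently depending on their outputs, whereas the democratic vote treats all engines as equally reliable.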