Software identifies emotions in human speech
Getting a computer to understand what a person is saying is one thing; getting it to understand how they're saying it is another. If we're ever to see a system that truly comprehends the meaning behind words – and not just the words themselves – it will need to put those words in context. Researchers from Spain's Universidad Politécnica de Madrid are working toward this goal by developing an application that lets computers recognize human emotions through automated voice analysis.
The tool analyzes acoustic measurements of speech, as that speech is output by another purpose-built application. Using fuzzy logic, it determines whether the speaker is happy, sad or nervous. When the emotion is not clear-cut, the system still indicates how close the speaker is to each emotion, expressed as a percentage.
The Universidad Politécnica de Madrid application was created using a new programming tool called RFuzzy, which is reportedly well-suited to artificial intelligence because it handles non-black-or-white (or "fuzzy," if you will) concepts such as high, low, fast and slow. It has also been used to program soccer-playing robots for the World Robot Soccer League. RFuzzy's flexible logical mechanisms leave the computer to interpret some of the data for itself via measurable parameters, such as speech pitch and rate – or, in the case of soccer, the distance from the robot to the ball.
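To give a feel for the approach, here is a minimal sketch of fuzzy emotion scoring in Python. It is not the researchers' RFuzzy code; the membership functions, threshold values, and the two parameters (mean pitch, speech rate) are invented for illustration, standing in for whatever acoustic measurements the real system uses.

```python
def triangular(x, lo, peak, hi):
    """Triangular membership function: 0 outside [lo, hi], 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x < peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Illustrative (invented) fuzzy definitions over two measurable
# parameters: mean pitch in Hz and speech rate in syllables/second.
EMOTIONS = {
    "happy":   lambda pitch, rate: min(triangular(pitch, 180, 240, 320),
                                       triangular(rate, 4.0, 5.5, 7.0)),
    "sad":     lambda pitch, rate: min(triangular(pitch, 80, 140, 200),
                                       triangular(rate, 1.5, 2.5, 4.0)),
    "nervous": lambda pitch, rate: min(triangular(pitch, 200, 280, 360),
                                       triangular(rate, 5.0, 6.5, 8.0)),
}

def classify(pitch, rate):
    """Score each emotion as a percentage of membership, so an
    ambiguous utterance gets partial scores rather than one label."""
    return {name: round(100 * fn(pitch, rate), 1)
            for name, fn in EMOTIONS.items()}
```

A bright, fast utterance might score high for "happy" while still registering a partial "nervous" percentage – the key property of fuzzy logic being that an input can belong to several categories at once, to different degrees.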
The research is being published in the journal Information Sciences.