The voice emotion analyzer (GitHub code available soon) was a simple take on voice analysis. Even with a quiet background, the analyzer reaches only about 30 percent accuracy. At that rate it sometimes produces wildly inaccurate results, yet those results still trigger a self-doubting human response: rather than trusting themselves, people often ask, "Is this just me?" This is precisely how I feel.
Survey Data Visualization
Each color represents a survey participant. The size of each circle corresponds to the intensity of that participant's self-identified feeling.
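One way to encode intensity as circle size, sketched below, is to scale the radius so that the circle's area (rather than its radius) grows linearly with the rating; the exact mapping used in the visualization is not specified, so `base_radius` and the square-root scaling are assumptions for illustration.

```python
import math

def circle_radius(intensity, base_radius=10.0):
    """Radius such that circle *area* is proportional to intensity.

    Area = pi * r^2, so r = base_radius * sqrt(intensity) makes an
    intensity-4 circle cover exactly four times the area of an
    intensity-1 circle.
    """
    return base_radius * math.sqrt(intensity)

# Radii for the four self-rated intensity levels (1 through 4).
radii = [circle_radius(i) for i in (1, 2, 3, 4)]
```

Area-proportional scaling avoids the common pitfall of scaling the radius directly, which would make an intensity-4 circle look sixteen times larger than an intensity-1 circle.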
Voice Emotion Prediction Visualization
The visualization below runs the survey results against the machine learning model I built using the Ryerson emotion dataset; the visualization does not take gender prediction into account. The four inner circles show self-identified emotions at four intensity levels. In the survey, I asked participants to self-rate their emotions and did not limit them to just one emotion. The smallest circular area is intensity 1, with higher intensity ratings represented by successively larger circles. The three outer circles show each model's emotion prediction. My hypothesis in this experiment was that the most significant self-identified emotion would match the model's prediction.
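The hypothesis above can be checked per participant with a small comparison like the following sketch. The dictionary-of-ratings shape and the function names are hypothetical, assumed only for illustration: each participant's self-ratings map emotion labels to intensities (1 to 4), and the model returns a single predicted label.

```python
def dominant_emotion(self_ratings):
    """Return the emotion with the highest self-rated intensity.

    self_ratings: dict mapping emotion label -> intensity (1-4).
    Ties are broken by the first label encountered.
    """
    return max(self_ratings, key=self_ratings.get)

def hypothesis_holds(self_ratings, model_prediction):
    """True when the model's predicted label matches the participant's
    most intense self-identified emotion."""
    return dominant_emotion(self_ratings) == model_prediction

# Hypothetical participant who rated several emotions at once,
# with "sad" as the most intense.
ratings = {"happy": 1, "sad": 3, "calm": 2}
agrees = hypothesis_holds(ratings, "sad")
```

Because participants could rate several emotions, only the most intense one is compared against the prediction; ties would need an explicit rule in a real analysis.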
Thesis pdf upon request: email@example.com