The Emotion Analysis model helps you understand and interpret speaker emotions in a conversation or text. It is designed to analyze human conversation in the form of free text or spoken text, and is modeled after Plutchik's wheel of emotions.
The emotion wheel describes eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. People can use the wheel to identify and come to terms with how they are feeling and, ultimately, become more self-aware and self-compassionate.
Enabling this setting in the Conversation API detects the following types of emotions:
Anger
Anticipation
Disgust
Fear
Joy
Love
Optimism
Pessimism
Sadness
Surprise
Trust
Input Types Supported: Audio, Video
Model Dependencies: Speech to Text, Speaker Separation
Emotion Analysis is enabled when the emotion_analysis.enable key is set to true under the settings object.
A transaction_id is returned in the JSON body once the processing job is launched successfully. This transaction_id can be used to check the status of the job, or to fetch the results of the job once the metadata is computed.

{
  "status": true,
  "transaction_id": "32dcef1a-5724-4df8-a4a5-fb43c047716b",
  "message": "Compute job for file-id: 32dcef1a-5724-4df8-a4a5-fb43c047716b launched successfully"
}
Speech to Text (speech_to_text) has to be enabled for Emotion Analysis to work. If it is not, a dependency error is returned:

{
  "status": false,
  "error": {
    "code": "MCST07",
    "message": "DependencyError: emotion_analysis depends on speech_to_text"
  }
}
curl --request POST 'https://api.marsview.ai/v1/conversation/compute' \
--header 'appSecret: 32dcef1a-5724-4df8-a4a5-fb43c047716b' \
--header 'appId: 1ZrKT0tTv7rVWX-qNAKLc' \
--header 'Content-Type: application/json' \
--data-raw '{
  "settings": {
    "speech_to_text": {
      "enable": true,
      "pii_detection": false,
      "custom_vocabulary": ["Marsview", "Bulbasaur"]
    },
    "speaker_separation": {
      "enable": true,
      "num_speakers": 4
    },
    "emotion_analysis": {
      "enable": true,
      "sync_with_stt": true
    }
  }
}'
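The same compute request can be sketched in Python. This is an illustrative sketch, not official client code: it assumes the third-party requests library, uses placeholder credentials, and build_settings/launch_job are hypothetical helper names.

```python
# Sketch of the compute request in Python. The URL and header names come
# from the curl example above; the credentials are placeholders.
COMPUTE_URL = "https://api.marsview.ai/v1/conversation/compute"

def build_settings(num_speakers=4, vocabulary=None):
    """Build the settings payload; emotion_analysis depends on speech_to_text."""
    return {
        "settings": {
            "speech_to_text": {
                "enable": True,  # required dependency of emotion_analysis
                "pii_detection": False,
                "custom_vocabulary": vocabulary or [],
            },
            "speaker_separation": {"enable": True, "num_speakers": num_speakers},
            "emotion_analysis": {"enable": True, "sync_with_stt": True},
        }
    }

def launch_job(app_id: str, app_secret: str) -> dict:
    """POST the compute job and return the parsed JSON response."""
    import requests  # third-party: pip install requests
    resp = requests.post(
        COMPUTE_URL,
        headers={"appId": app_id, "appSecret": app_secret},
        json=build_settings(vocabulary=["Marsview", "Bulbasaur"]),
    )
    resp.raise_for_status()
    return resp.json()
```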
Given below is a sample response JSON when the status code is 200.

{
  "status": true,
  "transaction_id": "32dcef1a-5724-4df8-a4a5-fb43c047716b",
  "message": "Compute job for file-id: 32dcef1a-5724-4df8-a4a5-fb43c047716b launched successfully"
}
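Before polling, the transaction_id can be pulled out of this launch body. A minimal sketch using only the standard library (extract_transaction_id is a hypothetical helper, not part of the API):

```python
import json

# Sample launch response, as shown in the docs above.
launch_response = json.loads("""
{
  "status": true,
  "transaction_id": "32dcef1a-5724-4df8-a4a5-fb43c047716b",
  "message": "Compute job for file-id: 32dcef1a-5724-4df8-a4a5-fb43c047716b launched successfully"
}
""")

def extract_transaction_id(body: dict) -> str:
    """Return the transaction_id, or raise if the launch failed."""
    # A failed launch has "status": false and an "error" object instead.
    if not body.get("status"):
        raise RuntimeError(body.get("error", {}).get("message", "launch failed"))
    return body["transaction_id"]
```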
The data object returns the requested metadata if it is computed. The status object shows the current state of the requested metadata. The status for each metadata field can take the values "Queued", "Processing", or "Completed".
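A small helper for interpreting these per-field status values might look like this (is_ready and pending_fields are illustrative names, not part of the API):

```python
# Interpret the per-field "status" object, whose values are
# "Queued", "Processing", or "Completed".
def is_ready(status: dict, field: str = "emotion_analysis") -> bool:
    """True once the given metadata field has finished computing."""
    return status.get(field) == "Completed"

def pending_fields(status: dict) -> list:
    """Fields that are still queued or processing."""
    return [f for f, s in status.items() if s in ("Queued", "Processing")]
```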
Shown below is a case where the emotion_analysis job is in the "Queued" state and in the "Completed" state.

{
  "status": {
    "emotion_analysis": "Queued"
  },
  "data": {
    "emotion_analysis": {}
  }
}
{
  "status": {
    "emotion_analysis": "Completed"
  },
  "data": {
    "emotion_analysis": {
      "chunks": [
        ...
        {
          "start_time": "174100.0",
          "end_time": "175100.0",
          "emotions": [
            { "emotion": "Happy", "confidence": 0.81 },
            { "emotion": "Joy", "confidence": 0.17 }
          ]
        },
        {
          "start_time": "174100.0",
          "end_time": "175100.0",
          "emotions": [
            { "emotion": "Neutral", "confidence": 0.97 }
          ]
        },
        ...
      ]
    }
  }
}
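Once the job is "Completed", the returned chunks can be post-processed. A minimal sketch (top_emotions is a hypothetical helper) that keeps only the highest-confidence emotion per chunk, using the field names from the response above:

```python
# Reduce each emotion_analysis chunk to its single highest-confidence
# emotion. start_time/end_time are millisecond values encoded as strings.
def top_emotions(emotion_analysis: dict) -> list:
    results = []
    for chunk in emotion_analysis.get("chunks", []):
        best = max(chunk["emotions"], key=lambda e: e["confidence"])
        results.append({
            "start_ms": float(chunk["start_time"]),
            "end_ms": float(chunk["end_time"]),
            "emotion": best["emotion"],
            "confidence": best["confidence"],
        })
    return results

# Trimmed sample shaped like the "Completed" response above.
sample = {
    "chunks": [
        {"start_time": "174100.0", "end_time": "175100.0",
         "emotions": [{"emotion": "Happy", "confidence": 0.81},
                      {"emotion": "Joy", "confidence": 0.17}]},
        {"start_time": "175100.0", "end_time": "176100.0",
         "emotions": [{"emotion": "Neutral", "confidence": 0.97}]},
    ]
}
```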
| Fields | Description |
| start_time | Starting time of the chunk in milliseconds |
| end_time | Ending time of the chunk in milliseconds |
| emotions | List of emotion objects for that particular chunk |
| emotion | Name tag for the type of emotion detected |
| confidence | Confidence of the emotion (ranges from 0 to 1); the higher, the better |