Capabilities
Expression Measurement
Custom Model API
Human values lie beyond words: in tones of sarcasm, subtle facial movements, cringes of empathic pain, laughter tinged with awkwardness, sighs of relief, and more.

The New Science of Expression
Transcribe, measure, and understand both language and nonverbal communication to predict human values, preferences, and needs
Transcribe and understand language
Detect, transcribe, diarize, & understand speech in any language
Measure vocal and facial expression
Capture hundreds of expression dimensions per second in audio or video (see the example request below)
Build custom models
Combine language and expression to understand your users’ preferences and needs
Grow with our model ecosystem
Predict user satisfaction & frustration, health & well-being, comprehension & confusion, & much more
Deploy and share models
Deploy your model instantly through our API to build it into your application or share it
Bootstrap with no-code tools
Analyze data, build & deploy models, and visualize results before writing any code
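To illustrate the expression-measurement workflow referenced above, here is a minimal sketch of submitting a video and reading back per-second expression scores over HTTP, using Python's requests library. The base URL, endpoint paths, request fields, and response structure are illustrative assumptions, not the documented interface.

# Illustrative sketch only: the base URL, endpoints, and response fields below are assumptions.
import requests

API_KEY = "YOUR_API_KEY"                      # hypothetical credential
BASE_URL = "https://api.example.com/v0"       # placeholder, not a real base URL

# Submit a video for expression measurement (hypothetical batch endpoint).
job = requests.post(
    f"{BASE_URL}/measurement/jobs",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "urls": ["https://example.com/interview.mp4"],
        "models": ["face", "prosody", "language"],   # assumed model names
    },
).json()

# Fetch predictions once the job completes (a real API would require polling or a callback).
results = requests.get(
    f"{BASE_URL}/measurement/jobs/{job['job_id']}/predictions",
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()

# Each frame carries scores for many expression dimensions (assumed response shape).
for frame in results.get("face", []):
    print(frame["time"], frame["scores"].get("Amusement"))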
Our models
Capture nuances in expression—subtle facial movements of love or admiration, laughter tinged with awkwardness, sighs of relief—and build custom expression-language models

Voice
Discover over 25 patterns of tune, rhythm, and timbre that imbue everyday speech with complex, blended meanings

Voice
Differentiate 28 kinds of vocal expression recognized as conveying distinct meanings, and the many ways they are blended together

Face + Body
Differentiate 37 kinds of facial movement that are recognized as conveying distinct meanings, and the many ways they are blended together

Voice
Explore vocal utterances by inferring probabilities of 67 descriptors, like 'laugh', 'sigh', 'shriek', 'oh', 'ahh', 'mhm', and more

Face + Body
An improved, automated facial action coding system (FACS): measure 26 facial action units (AUs) and 29 other features with even less bias than traditional FACS

Face + Body
Measure dynamic patterns in facial expression over time that are correlated with over 20 distinct reported emotions

Voice
Measure the toxicity of a sentence and the likelihood that it includes hate speech, an insult, an obscenity, or a threat

Health & Wellness
Predicts the risk of Parkinson's disease from the voice, with 32x fewer errors than a language model, on data from a scientific study in Italy

Moderation
Identifies toxic speech by online gamers with a 36% lower error rate than a language-only model

Customer Service
Uses the voice to classify the quality of customer support calls, with 2.8x fewer errors than language alone, on data from Lawyer.com

Health & Wellness
Infers the mood of vloggers with a 2.4x lower error rate by incorporating expression, compared to language alone

Safety
Classifies whether a driver appears to be alert or drowsy using publicly available video clips of real drivers and actors

Training & Edtech
Predicts whether someone on a Zoom call appears to be paying attention or distracted, trained on publicly available recordings
Built for product-defining developers
Here, we uploaded public video diaries by people who separately reported having a depressed mood during filming. Now we'll train a model that infers a user's mood from the video alone.
Request (Python)
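The original request isn't reproduced here; as a minimal sketch, a training call could look like the following, using Python's requests library. The endpoint path, payload fields, and returned model_id are illustrative assumptions rather than the documented API.

# Illustrative sketch only: the endpoint and field names are assumptions, not the documented API.
import requests

API_KEY = "YOUR_API_KEY"                      # hypothetical credential
BASE_URL = "https://api.example.com/v0"       # placeholder base URL

# Each training example pairs a public video diary with the mood its creator reported while filming.
training_examples = [
    {"url": "https://example.com/diary_001.mp4", "label": "depressed"},
    {"url": "https://example.com/diary_002.mp4", "label": "not_depressed"},
]

# Start a Custom Model training job (hypothetical endpoint).
job = requests.post(
    f"{BASE_URL}/custom-models/train",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"target": "mood", "examples": training_examples},
).json()

print(job["model_id"], job["status"])          # assumed response fields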
With just a few lines of code, we've built a Custom Model that infers users' self-reported mood ratings from video. Now we can review how the model performs.
Request (Python)
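Again as a minimal sketch: a call to review the trained model's held-out performance could look like this, assuming a hypothetical metrics endpoint and response fields.

# Illustrative sketch only: the metrics endpoint and response fields are assumptions.
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.example.com/v0"
MODEL_ID = "your-model-id"                     # returned by the training request above

metrics = requests.get(
    f"{BASE_URL}/custom-models/{MODEL_ID}/metrics",
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()

# Inspect held-out performance before deploying.
print("accuracy:", metrics.get("accuracy"))
print("roc_auc:", metrics.get("roc_auc"))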
You can now test your Custom Model on new data and build it into your application, simply by querying our API with your Model ID.
Request (Python)
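As a final minimal sketch, querying the deployed model with new video could look like the following; the prediction endpoint, payload, and response shape are illustrative assumptions.

# Illustrative sketch only: endpoint path, payload, and response shape are assumptions.
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.example.com/v0"
MODEL_ID = "your-model-id"                     # the Model ID from training

# Score a new video with the deployed Custom Model.
prediction = requests.post(
    f"{BASE_URL}/custom-models/{MODEL_ID}/predict",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"urls": ["https://example.com/new_user_video.mp4"]},
).json()

print(prediction)                              # e.g. {"mood": "depressed", "confidence": 0.82} (assumed)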