Towards a more empathetic and responsible AI:
Using AI to visualize the impact of online media on human emotions, behaviors, and interactions.
Summary
The goal of this project was to raise awareness about the emotional impact of online media on users. To do this, I used AI to analyze the text of news articles and to predict sentiment. Then I visualized predicted human reactions in a real-time video simulation.
My Role: End-to-End Design

Timeline: December 2017 - March 2018
How will AI shape the future of media consumption and production?
Online media can distort reality by manipulating users’ emotions and propagating opinions and beliefs. In 2017, when AI was just emerging, I recognized the potential of this technology to predict, filter, moderate, personalize, and even generate online content. My goal was to initiate a conversation about responsible AI technologies that would shape the future of communication and media. Starting this conversation seemed especially urgent due to the escalating information war between Ukraine and Russia.
As I worked on this project, I identified many potential use cases for this AI technology.
Design Challenges
How to visualize human emotions?
One of the primary challenges in this project was translating complex human emotions into an immersive multimedia experience for users. Human emotions are expressed through various cues, including body language, facial expressions, sounds, and speech.

To recreate the multi-sensory experience of human emotional reactions, I developed a 3D video simulation featuring a slightly exaggerated Ukrainian family spanning three generations.
Each character is a non-player character (NPC), or "bot," that randomly wanders around a 3D kitchen, awaiting "news" to react to. I assigned each character a library of responses corresponding to the eight primary emotions used in the project.
Example: The grandma character was equipped with animations for "anger," along with a library of sound effects and text comments. When the AI algorithm detected the "anger" emotion, it would randomly select a corresponding reaction from that library, bringing the simulation to life (a sketch of this lookup follows the emotion list below). The eight emotions were:
Annoyed
Scared
Happy
Indifferent
Sad
Amused
Angry
Inspired
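A minimal sketch of that reaction lookup, assuming a per-character library keyed by emotion; the animation, sound, and comment assets below are placeholders, not the project's actual files:

```python
# A minimal sketch of the per-character reaction lookup; the emotion keys
# match the project's eight emotions, but all assets are placeholders.
import random

REACTIONS = {
    "angry": [
        {"animation": "grandma_fist_shake", "sound": "grumble_01.wav",
         "comment": "Who writes this nonsense?"},
        {"animation": "grandma_head_shake", "sound": "sigh_02.wav",
         "comment": "Back in my day, the news told the truth!"},
    ],
    "happy": [
        {"animation": "grandma_clap", "sound": "laugh_01.wav",
         "comment": "Finally, some good news!"},
    ],
    # ...and so on for annoyed, scared, indifferent, sad, amused, inspired.
}

# Default behavior while waiting for news: wander around the kitchen.
IDLE = {"animation": "wander", "sound": None, "comment": None}

def react(emotion: str) -> dict:
    """Pick a random reaction from this character's library for the emotion."""
    options = REACTIONS.get(emotion)
    return random.choice(options) if options else IDLE

print(react("angry"))
```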
Technical Challenges
Initial project prototype
AI’s capabilities in 2017 were much more limited than they are today, which meant the project faced several constraints:

Speech-to-text limitations: Converting audio news into text for sentiment analysis was still in its early stages, and the process demanded significant processing power, making the system expensive and prone to crashes.

Streaming complications: Streaming live TV news raised legal concerns that could complicate the project’s implementation.

AI model limitations: Natural language processing models were still in their infancy, making it nearly impossible to generate relevant commentary for the NPCs in a live simulation.
To solve these challenges and keep the project feasible, I implemented the following adjustments:

- I scraped text from news websites, rather than live broadcasts, for the AI to analyze.
- We used a lo-fi solution: a scrolling LED display that showed the headline of each news article while the AI processed it and sent the detected emotion to the simulation.
- The AI algorithm determined the single dominant emotion.
- We generated commentary and dialogue responses in advance, using keywords from already published news articles as a base for those texts. By launch, we had a library of pre-generated, roughly relevant comments keyed to the predicted emotional responses, which simplified the technical requirements and reduced the project’s costs. Before launching, we ran 1,000 articles through the AI sentiment analysis algorithm, then used the labeled words from those articles to generate sentences for the dialogues in the simulation. Here’s an example of how those sentences were constructed:
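A minimal sketch of that construction, assuming a simple template-based generator; the templates, keywords, and emotion labels below are illustrative stand-ins, not the ones we used:

```python
# A minimal sketch of template-based comment generation from labeled
# keywords; templates and labeled words are illustrative only.
import random

# Words labeled by the sentiment pass over the 1,000 pre-analyzed articles,
# grouped by their dominant emotion.
LABELED_WORDS = {
    "angry": ["corruption", "scandal", "attack"],
    "happy": ["victory", "celebration", "rescue"],
}

TEMPLATES = {
    "angry": ["Not another {word} story!", "This {word} makes my blood boil."],
    "happy": ["What a wonderful {word}!", "At last, a {word} worth sharing."],
}

def generate_comments(emotion: str, n: int = 3) -> list[str]:
    """Pre-generate n dialogue lines for one emotion."""
    lines = []
    for _ in range(n):
        template = random.choice(TEMPLATES[emotion])
        word = random.choice(LABELED_WORDS[emotion])
        lines.append(template.format(word=word))
    return lines

print(generate_comments("angry"))
```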
Adding two other languages within a day!
Initially, we only used English-language websites for the AI analysis, as the algorithm had been trained on English texts only. However, since the project was showcased in Ukraine at a time of intensifying information war with Russia, it became crucial to include Ukrainian and Russian websites to make the project more relevant. Unfortunately, at the time there were no well-established AI models trained on Ukrainian or Russian text.
With the help of my software engineering partner, we first translated the text from the Ukrainian and Russian websites into English through Google Translate, then fed it into the AI algorithm to determine the dominant human emotion.
After a few days of the project running on public display, Google blocked the translation service because it was being called from an Amazon AWS server. We switched to a local translation tool called StarDict, which is available on Linux. StarDict can only do word-for-word translations, which are often inaccurate, but this still worked well with the dataset we used for emotion prediction (DepecheMood), which assigns emotion weights to individual words.
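This is why word-for-word translation was good enough: since DepecheMood scores individual words, the dominant emotion can be estimated by summing per-word weights, so word order and grammar barely matter. A minimal sketch of the idea; the dictionary entries and weights below are tiny illustrative stand-ins, not real data:

```python
# A minimal sketch of word-level emotion scoring in the style of the
# DepecheMood dataset; dictionary and weights are illustrative stand-ins.
from collections import defaultdict

# StarDict-style word-for-word translation (Ukrainian -> English).
DICTIONARY = {"перемога": "victory", "війна": "war", "свято": "holiday"}

# DepecheMood-style per-word emotion weights.
EMOTION_WEIGHTS = {
    "victory": {"happy": 0.7, "inspired": 0.3},
    "war":     {"scared": 0.6, "sad": 0.4},
    "holiday": {"happy": 0.8, "amused": 0.2},
}

def dominant_emotion(words: list[str]) -> str:
    """Translate word by word, sum emotion weights, return the top emotion."""
    totals = defaultdict(float)
    for word in words:
        english = DICTIONARY.get(word.lower())
        for emotion, weight in EMOTION_WEIGHTS.get(english, {}).items():
            totals[emotion] += weight
    return max(totals, key=totals.get) if totals else "indifferent"

print(dominant_emotion(["Перемога", "свято"]))  # -> "happy"
```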
Final Project: The result & the impact
I created this project to explore how everyday people in Ukraine are affected by the news they watch or read online, and how these emotions shape their interactions and relationships. I wanted to reveal how easily news can manipulate emotions and alter how users perceive reality. Inspired by Jonah Berger and Katherine L. Milkman’s article “What Makes Online Content Viral?” (2012), I also aimed to show how emotionally charged news and content can quickly go viral, leading people to share and spread it, often without realizing it. In the video simulation, once one family member is affected, the emotion quickly spreads to the others, and they all begin to act it out.
This project successfully generated meaningful discussion on the topic of responsible and empathetic AI. It was featured in Vogue Ukraine, and the press conference about the project attracted a large audience. During the event, we had an in-depth discussion about the future of media and AI, and the role AI can play in information wars. The project raised awareness on the subject and was followed by many articles, podcasts, books, and AI and tech initiatives. Viewers shared their feedback with me and the exhibition team, saying the project encouraged them to be more mindful and selective about the online content they consume.
How it all worked together
System Overview
This project focuses on a two-part system designed to analyze news articles and generate realistic behaviors in a 3D simulation based on the detected emotions:

- Emotion Detection from News Articles: An AI service analyzes text from news articles to identify key emotions, which are then stored for further use.
- NPC Simulation: Based on the detected emotions, non-player characters (NPCs) in the 3D simulation exhibit behaviors and provide verbal commentary.
Process Breakdown
The project was built as the following pipeline:
1. Web crawler → 2. Plain text extractor → 3. Sentiment analysis → 4. Store (to a database) → 5. Serve (as an API to the Unreal Engine simulation)
- Web Crawler: A program that automatically visits a list of predefined news websites to collect articles. It keeps track of visited pages to ensure it only pulls new content.
- Plain Text Extraction: The article text is broken into words, cleaned up, and simplified (using TextBlob and WordNetLemmatizer).
- Sentiment Analysis: This step analyzes the plain text of each news article to determine the primary emotion conveyed, such as anger, fear, or happiness.
- Data Storage: The project uses MySQL, a widely used database system, to store the processed articles and their corresponding emotions.
- Serving Data: An API (Application Programming Interface) connects the stored data to the simulation. It provides endpoints for accessing the latest news and emotional data, which the NPCs use to generate their responses. Meanwhile, the headline of the article is shown on the LED display; a sketch of this step follows below.
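A minimal sketch of the serving step, assuming Flask and the mysql-connector-python driver; the table schema, column names, and credentials are hypothetical, as the original endpoint layout isn't documented here:

```python
# A minimal sketch of the "Serve" step, assuming Flask and the
# mysql-connector-python driver; table and column names are hypothetical.
from flask import Flask, jsonify
import mysql.connector

app = Flask(__name__)

def get_connection():
    # Placeholder credentials for illustration.
    return mysql.connector.connect(
        host="localhost", user="news", password="secret", database="emotions"
    )

@app.route("/latest")
def latest_article():
    """Return the most recently analyzed article and its dominant emotion."""
    conn = get_connection()
    cursor = conn.cursor(dictionary=True)
    cursor.execute(
        "SELECT headline, url, emotion, analyzed_at "
        "FROM articles ORDER BY analyzed_at DESC LIMIT 1"
    )
    row = cursor.fetchone()
    conn.close()
    # The Unreal Engine simulation polls this endpoint to trigger the
    # matching NPC reactions, while the LED display shows the headline.
    return jsonify(row or {})

if __name__ == "__main__":
    app.run(port=5000)
```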
What I learned, and how would this project look today?
A lot has changed since 2017. Back then, we didn’t have today’s powerful technologies, such as transformers (a deep learning architecture) and large language models (LLMs), which have revolutionized how computers understand language. For this type of project today, I would probably rely mainly on OpenAI’s tools, though there are other strong options, like Meta’s Llama models.
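As a sketch of what the emotion-detection step could look like with an LLM, using the openai Python package (v1+); the model name and prompt are my assumptions, not a tested setup:

```python
# A minimal sketch of LLM-based emotion detection, assuming the openai
# Python package (v1+); the model choice and prompt are illustrative only.
from openai import OpenAI

EMOTIONS = ["annoyed", "scared", "happy", "indifferent",
            "sad", "amused", "angry", "inspired"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def detect_emotion(article_text: str) -> str:
    """Ask the model to pick the single dominant emotion for an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice
        messages=[
            {"role": "system",
             "content": "Classify the dominant emotion a reader would feel. "
                        f"Answer with one word from: {', '.join(EMOTIONS)}."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(detect_emotion("Floods displace thousands as rescue efforts stall."))
```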
Today, I would also explore using sentiment analysis (determining emotions in content) on live or near-live TV content. Excellent speech-to-text models, such as Whisper, are now available to everyone, which makes processing spoken language far easier.
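For example, transcribing a recorded news clip with the open-source openai-whisper package takes just a few lines (the file name and model size below are placeholders):

```python
# A minimal sketch of speech-to-text with the open-source openai-whisper
# package (requires ffmpeg); file name and model size are placeholders.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("news_clip.mp3")  # speech-to-text
print(result["text"])                       # transcript, ready for sentiment analysis
```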
Looking back, I would add a more interactive feature to the project: a simple website that lets users vote on how accurately the AI identifies emotions and predicts human reactions. Users could also indicate whether they would share the content with their friends or family. This feedback would help continuously train and improve the AI model:
A simple website UI that lets users give feedback on the accuracy of the AI’s sentiment and emotion predictions, and vote on how likely they would be to share the article with friends or family; this user feedback is needed to keep training and improving the AI model.
This project could even be streamed online so that people worldwide can participate as viewers and contribute to training the model. For example, after receiving at least 1,000 pieces of feedback, the AI could retrain overnight for improved accuracy. OpenAI also provides fine-tuning options, meaning the model can be trained on new data to improve its performance on specific tasks. After fine-tuning, the sentiment service can switch to the new checkpointed model.
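A minimal sketch of how that overnight retraining could be prepared, turning collected user votes into the JSONL chat format that OpenAI fine-tuning expects; the feedback rows and field names here are hypothetical:

```python
# A minimal sketch of turning user feedback into an OpenAI fine-tuning
# dataset (JSONL of chat examples); feedback rows and fields are hypothetical.
import json

feedback = [  # e.g., rows exported from the voting website's database
    {"article": "Floods displace thousands...", "correct_emotion": "sad"},
    {"article": "Local team wins championship...", "correct_emotion": "happy"},
]

with open("emotion_finetune.jsonl", "w") as f:
    for row in feedback:
        example = {
            "messages": [
                {"role": "system",
                 "content": "Answer with the single dominant emotion."},
                {"role": "user", "content": row["article"]},
                {"role": "assistant", "content": row["correct_emotion"]},
            ]
        }
        f.write(json.dumps(example) + "\n")

# The resulting file can then be uploaded to start a fine-tuning job,
# e.g. via client.fine_tuning.jobs.create(...) in the openai package.
```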