EmpathyAI
When Satya Nadella took over as the CEO of Microsoft in 2014, he reportedly handed out copies of the book Nonviolent Communication to the members of his senior leadership team at the first executive meeting. Apparently, he believed that empathetic listening could positively transform a workplace culture characterized by hostility, infighting, and backstabbing.
About nine years later, on an earnings call, Nadella signaled his intention to integrate artificial intelligence into all of the company's applications: "Over time, obviously, I think every app is going to be an AI app."
Lately, I've been working on selfempathy.app, a web app based on concepts covered in the same book Nadella handed out in 2014. The app was originally just a simple interface walking a user through the self-empathy process, but I thought it would be interesting to see how integrating AI might transform the user experience.
Would making the self-empathy app into an AI app be an improvement, or just a gimmick?
A user would ideally pull up the self-empathy app when they are feeling triggered. They are ruminating on a complaint ("They're being arrogant… defensive… manipulative…") or a negative feeling ("I'm feeling boxed in… ignored… overworked…"). The first page of the app provides the user with a select box that can generate a list of either complaint or feeling words. At the end of each list is a "+" icon that allows users to enter their own word if it is not already on the list.
I've initialized the app with a dataset that provides six suggested initial feeling, underlying feeling, and need words for each complaint word. The suggestions are unique for each term. There was no rigorous process to come up with these suggestions. These are just my best guesses. Users can select which of the guesses seem to be a good fit for them, or they can add their own words.
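As a rough sketch, an entry in that dataset might look something like this. The field names mirror the data object shown later in this post; the example words for "arrogant" are illustrative, not the app's actual suggestions:

// Illustrative sketch: one entry per complaint word, six suggestions per field.
const dataset = {
  arrogant: {
    complaint: "arrogant",
    initialFeelings: ["dismissed", "belittled", "ignored", "disrespected", "provoked", "unheard"],
    underlyingFeelings: ["insecure", "uneasy", "frustrated", "small", "tense", "sad"],
    needs: ["respect", "consideration", "equality", "mutuality", "acceptance", "understanding"],
  },
  // ...one entry for each complaint word in the initial dataset
};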
In the standard version of the app, if a user clicks the "+" icon on the first page and adds their own word, the app will generate a new data object with six suggested initial feeling, underlying feeling, and need words taken at random from a word bank.
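Something like the following could produce that fallback object. This is a hypothetical sketch, not the app's actual code; the bank contents and helper names are my illustration:

// Hypothetical sketch: build a fallback entry from random word-bank picks.
const initialFeelingBank = ["neglected", "blamed", "interrupted", "unheard", "criticized", "provoked"];
const underlyingFeelingBank = ["uncomfortable", "doubtful", "empty", "nervous", "lost", "smothered"];
const needBank = ["effectiveness", "happiness", "learning", "love", "relaxation", "patience"];

// Pick six distinct words at random from a bank.
function pickSix(bank) {
  const copy = [...bank];
  const picks = [];
  while (picks.length < 6 && copy.length > 0) {
    picks.push(copy.splice(Math.floor(Math.random() * copy.length), 1)[0]);
  }
  return picks;
}

function makeRandomEntry(userInput) {
  return {
    complaint: userInput,
    initialFeelings: pickSix(initialFeelingBank),
    underlyingFeelings: pickSix(underlyingFeelingBank),
    needs: pickSix(needBank),
  };
}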
For example, the word "judgmental" is not included in the initial dataset. So when a user adds this term, the app automatically generates the following word suggestions:
Complaint: Judgmental
Initial feelings: Neglected, blamed, interrupted, unheard, criticized, provoked
Underlying feelings: Uncomfortable, doubtful, empty, nervous, lost, smothered
Needs: Effectiveness, happiness, learning, love, relaxation, patience
You can immediately notice that some of the suggested words seem obviously unrelated to the initial complaint word. Still, I think "unheard" and "criticized" are decent initial feelings guesses. "Uncomfortable" and "empty" could be good starting points for thinking about underlying feelings. None of the need words seem on point, but this is subjective and the starter words could at least get the process of self-inquiry going in the right direction. Sometimes bad guesses are good at getting us to focus on what matters.
In the AI version of the app, I took the user input and sent this prompt to the OpenAI API:
let complaintObject = {
  [userInput]: {
    "complaint": userInput,
    "initialFeelings": [],
    "underlyingFeelings": [],
    "needs": []
  }
};
"Complete the 'complaintObject' data object by adding an array of six relevant words to each of the fields 'initialFeelings' (feeling words that end in -ed and imply a sense of blame or victimhood), 'underlyingFeelings' (feeling words that take responsibility and do not imply blame), and 'needs'."
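For context, the request itself might look roughly like this. It's a sketch assuming the standard chat completions endpoint; the model name, key handling, and response parsing are simplified stand-ins, not the app's exact code:

// Sketch of the API call; model choice and parsing are illustrative.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY; // assumes a server-side key

async function getAISuggestions(prompt) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  // The model returns the completed object as text, so parse it back into JSON.
  return JSON.parse(data.choices[0].message.content);
}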
These are the results I got when I submitted the word "judgmental" to the AI empathy app:
Complaint: Judgmental
Initial feelings: Offended, hurt, disrespected, misunderstood, attacked, accused
Underlying feelings: Uneasy, uncomfortable, uncertain, vulnerable, insecure, unsafe
Needs: Respect, empathy, understanding, acknowledgment, validation, support
Immediately, I'm struck by how much better these suggestions are when compared to the randomly generated words. There is obviously some correlation here. The guesses are good: nuanced enough and not totally obvious (an obvious pick would have been "non-judgment" as a need word). "Uncomfortable" was one of the better randomly generated underlying feeling words. It is on this list as well, but the other suggestions are much more on point while also being varied enough to really encourage self-reflection.
In the non-AI version of the app, a user goes through the process of identifying the initial feeling, underlying feeling, and need words that resonate with their situation. Then they click "done" and end the process on a summary page. The whole point of the site is to encourage the self-empathy process, so the app just ends there.
In the AI version, I've extended things a bit by allowing the user to select "receive empathy" after the summary page.
At this point, the app has generated a data object that contains the user's selected words. I've created a simple function that turns this data object into a string (sketched below): "They are being judgmental. My initial feelings are offended, disrespected. My underlying feelings are insecure, unsafe. My needs are meaning, purpose." This sounds like the "robot empathy" that people learn when they are first exposed to nonviolent communication. But if I simply throw this sentence into ChatGPT and ask "What does this sentence remind you of?" I receive the following answer:
"The above sentence seems to be related to Nonviolent Communication (NVC) modality, which emphasizes on identifying and expressing feelings and needs without blaming or criticizing others."
So it looks like ChatGPT read that book that Nadella gave out in 2014!
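The string-building step is simple. Here's a minimal sketch, assuming the same data object shape as before (the function name is mine):

// Minimal sketch: turn the user's selected words into the plain "robot empathy" sentence.
function toEmpathyStatement(entry) {
  return `They are being ${entry.complaint}. ` +
    `My initial feelings are ${entry.initialFeelings.join(", ")}. ` +
    `My underlying feelings are ${entry.underlyingFeelings.join(", ")}. ` +
    `My needs are ${entry.needs.join(", ")}.`;
}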
I wanted to see if it was any good at giving empathy, so I asked it to give an empathetic response to my first sentence:
"It sounds like you're feeling offended and disrespected because someone was being judgmental towards you. It's understandable to feel that way because it can be hurtful when someone judges us without really knowing us. It seems like underneath those feelings, you're also feeling insecure and unsafe, which can be really difficult to deal with. It sounds like you have a need for meaning and purpose in your life, and when someone is judgmental towards you, it can make you feel like that need is being threatened. Is there anything else you'd like to share about what you're experiencing?"
This isn't bad. I've given workshops on empathetic listening for years. If someone gave an empathetic response like this in a workshop setting, I'd say they were doing pretty well.
When people are just starting out with this modality, they all tend to sound robotic and alike. There is no personality in the response, just a dry restating of the underlying feelings and needs. But the really good people find a way to add their personality back into it, and they just come off as more authentic.
So, in the AI version of the self-empathy app, I give users a choice of who they want to receive empathy from. I played around with the prompt engineering a bit and landed on a funny solution: asking the AI to give an empathetic response in the style of a popular author tended to give the best results. The drop-down menu lets users choose between Charles, Fannie, Tom, and Wallace (Charles Bukowski, Fannie Flagg, Tom Robbins, and David Foster Wallace).
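A persona prompt along these lines could be built like this. The wording and function name are illustrative, not the exact prompt I settled on:

// Illustrative sketch of the persona prompt; not the app's exact wording.
const authors = {
  Charles: "Charles Bukowski",
  Fannie: "Fannie Flagg",
  Tom: "Tom Robbins",
  Wallace: "David Foster Wallace",
};

function buildEmpathyPrompt(statement, choice) {
  return `Give an empathetic response to the following statement, in the style of ${authors[choice]}: "${statement}"`;
}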
The responses are much more interesting than the generic response, and more varied as well.
Empathetic response from Fannie:
Oh sugar, I'm sorry to hear that they've hurt your feelings like that. It sounds like they may not have realized the impact of their words. And I understand how it can make you feel, like your sense of clarity and purpose have been shaken. But let me tell you, you are valued and worthwhile, and nothing they say can take that away from you. It takes real courage to stay true to ourselves, especially when facing criticism. So take a deep breath, hold your head up high, and know that you are just as deserving of respect and love as anyone else.
Or an empathetic response from Charles:
Well, my friend, sometimes people can't help but judge. They measure us up against their own insecurities and shortcomings, and sometimes what they see doesn't quite add up. But let me tell you, your worth isn't determined by the judgments of others. You have purpose and meaning in this world, and no superficial criticism can ever take that away. Stay true to yourself and let your light shine bright, regardless of what anyone else may say.
These are good. Really good. They are totally different. Some people may have a strong aversion to one of these styles, but the other one might really work for them. And if not, there are two more options to choose from.
I'm still not exactly sure if this second AI feature is truly necessary for the app, or if it's a juicy feature that ends up derailing the whole thing. The original purpose was to encourage self-inquiry. The first AI feature is subtle and simply provides better starting suggestions. This is less distracting than obviously bad random suggestions and clearly improves the user experience. But the AI empathy response is just so interesting that I'm afraid it could take the focus away from the user's self-inquiry process and place most of the attention on the final AI output.
I'm still thinking this one through, though. Let me know what you think!