EmpathyAI

When Satya Nadella took over as the CEO of Microsoft in 2014, he reportedly handed out copies of the book Nonviolent Communication to the members of his senior leadership team at the first executive meeting. Apparently, he believed that empathetic listening could positively transform a workplace culture characterized by hostility, infighting, and backstabbing.

About nine years later, on an earnings call, Nadella signaled his intention to integrate artificial intelligence into all of the company’s applications: “Over time, obviously, I think every app is going to be an AI app.”

Lately, I’ve been working on selfempathy.app, a web app based on concepts from the same book Nadella handed out in 2014. The app was originally just a simple interface that walks a user through the self-empathy process, but I thought it would be interesting to see how the user experience might be transformed by integrating AI.

Would making the self-empathy app into an AI app be an improvement, or just a gimmick?

A user would ideally pull up the self-empathy app when they are feeling triggered. They are ruminating on a complaint (“They’re being arrogant… defensive… manipulative…”) or a negative feeling (“I’m feeling boxed in… ignored… overworked…”). The first page of the app provides the user with a select box that generates a list of either complaint or feeling words. At the end of each list is a “+” icon that allows users to enter their own word if it is not already on the list.

I’ve initialized the app with a dataset that provides six suggested initial feeling, underlying feeling, and need words for each complaint word. The suggestions are unique for each term. There was no rigorous process to come up with these suggestions. These are just my best guesses. Users can select which of the guesses seem to be a good fit for them, or they can add their own words.
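To give a sense of the dataset’s shape, here’s a rough sketch of a single entry. The key structure mirrors the complaintObject shown later; the specific words below are just illustrative, not the app’s actual data.

let seedSuggestions = {
  "arrogant": {
    "complaint": "arrogant",
    "initialFeelings": ["dismissed", "belittled", "unheard", "disrespected", "ignored", "provoked"],
    "underlyingFeelings": ["insecure", "uneasy", "sad", "frustrated", "tense", "discouraged"],
    "needs": ["respect", "consideration", "to be heard", "mutuality", "acknowledgment", "ease"]
  }
  // ... one entry per word in the starter list
}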

In the standard version of the app, if a user clicks the “+” icon on the first page and adds their own word, the app will generate a new data object with six suggested initial feeling, underlying feeling, and need words taken at random from a word bank.
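That fallback is about as simple as it sounds. Here’s a minimal sketch, assuming a word bank object with one array per field; the helper names are mine, not the app’s.

// Quick-and-dirty shuffle; biased, but fine for rough suggestions
let pickSix = (bank) => [...bank].sort(() => Math.random() - 0.5).slice(0, 6)

function randomSuggestions(userInput, wordBank) {
  return {
    [userInput]: {
      "complaint": userInput,
      "initialFeelings": pickSix(wordBank.initialFeelings),
      "underlyingFeelings": pickSix(wordBank.underlyingFeelings),
      "needs": pickSix(wordBank.needs)
    }
  }
}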

For example, the word “judgmental” is not included in the initial dataset. So when a user adds this term, the app automatically generates the following word suggestions:

Complaint: Judgmental
Initial feelings: Neglected, blamed, interrupted, unheard, criticized, provoked
Underlying feelings: Uncomfortable, doubtful, empty, nervous, lost, smothered
Needs: Effectiveness, happiness, learning, love, relaxation, patience

You’ll immediately notice that some of the suggested words are obviously unrelated to the initial complaint word. Still, I think “unheard” and “criticized” are decent guesses for initial feelings. “Uncomfortable” and “empty” could be good starting points for thinking about underlying feelings. None of the need words seem on point, but this is subjective, and the starter words could at least get the process of self-inquiry going in the right direction. Sometimes bad guesses are good at getting us to focus on what matters.

In the AI version of the app, I took the user input and sent this prompt to the OpenAI API:

let complaintObject = {
  [userInput]: {
    "complaint": userInput,
    "initialFeelings": [],
    "underlyingFeelings": [],
    "needs": []
  }
}

“Complete the ‘complaintObject’ data object by adding an array of six relevant words to each of the fields ‘initialFeelings’ (feeling words that end in -ed and imply a sense of blame or victimhood), ‘underlyingFeelings’ (feeling words that take responsibility and do not imply blame), and ‘needs’.”
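Wiring that up looks roughly like the sketch below. I’m using the chat completions endpoint here; the model name is a placeholder, and I’ve appended an instruction to reply with JSON only so the response is easy to parse. The app’s actual call may differ slightly.

async function fetchAISuggestions(userInput, apiKey) {
  let complaintObject = {
    [userInput]: {
      "complaint": userInput,
      "initialFeelings": [],
      "underlyingFeelings": [],
      "needs": []
    }
  }

  let prompt =
    "Complete the 'complaintObject' data object by adding an array of six relevant words " +
    "to each of the fields 'initialFeelings' (feeling words that end in -ed and imply a " +
    "sense of blame or victimhood), 'underlyingFeelings' (feeling words that take " +
    "responsibility and do not imply blame), and 'needs'. Reply with only the completed " +
    "JSON.\n\n" + JSON.stringify(complaintObject, null, 2)

  // Send the prompt to OpenAI's chat completions endpoint
  let res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + apiKey
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }]
    })
  })

  let data = await res.json()
  return JSON.parse(data.choices[0].message.content)
}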

These are the results I got when I submitted the word “judgmental” to the AI empathy app:

Complaint: Judgmental
Initial feelings: Offended, hurt, disrespected, misunderstood, attacked, accused
Underlying feelings: Uneasy, uncomfortable, uncertain, vulnerable, insecure, unsafe
Needs: Respect, empathy, understanding, acknowledgment, validation, support

Immediately, I’m struck by how much better these suggestions are when compared to the randomly generated words. There is obviously some correlation here. The guesses are good: nuanced enough without being totally obvious (the totally obvious choice would be something like “non-judgment” as a need word). “Uncomfortable” was one of the better randomly generated underlying feeling words, and it appears on this list as well, but the other suggestions are much more on point while also being varied enough to really encourage self-reflection.

In the non-AI version of the app, a user goes through the process of identifying the initial feeling, underlying feeling, and need words that resonate with their situation. Then they click “done” and end the process on a summary page. The whole point of the site is to encourage the self-empathy process, so the app just ends there.

In the AI version, I’ve extended things a bit by allowing the user to select “receive empathy” after the summary page.

At this point, the app has generated a data object that contains the user’s selected words. I’ve created a simple function that turns this data object into a string: “They are being judgmental. My initial feelings are offended, disrespected. My underlying feelings are insecure, unsafe. My needs are meaning, purpose.” This sounds like the “robot empathy” that people learn when they are first exposed to nonviolent communication. But if I simply throw this sentence into ChatGPT and ask “What does this sentence remind you of?” I receive the following answer:

“The above sentence seems to be related to Nonviolent Communication (NVC) modality, which emphasizes on identifying and expressing feelings and needs without blaming or criticizing others.”

So it looks like ChatGPT read that book that Nadella gave out in 2014!
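For reference, the “simple function” mentioned above that flattens the user’s selections into that summary sentence can be as small as the sketch below; the field and function names are mine.

function summaryToSentence(selection) {
  let list = (words) => words.join(", ")
  return "They are being " + selection.complaint + ". " +
    "My initial feelings are " + list(selection.initialFeelings) + ". " +
    "My underlying feelings are " + list(selection.underlyingFeelings) + ". " +
    "My needs are " + list(selection.needs) + "."
}

// summaryToSentence({
//   complaint: "judgmental",
//   initialFeelings: ["offended", "disrespected"],
//   underlyingFeelings: ["insecure", "unsafe"],
//   needs: ["meaning", "purpose"]
// })
// => "They are being judgmental. My initial feelings are offended, disrespected. ..."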

I wanted to see if it was any good at giving empathy, so I asked it to give an empathetic response to my first sentence:

“It sounds like you’re feeling offended and disrespected because someone was being judgmental towards you. It’s understandable to feel that way because it can be hurtful when someone judges us without really knowing us. It seems like underneath those feelings, you’re also feeling insecure and unsafe, which can be really difficult to deal with. It sounds like you have a need for meaning and purpose in your life, and when someone is judgmental towards you, it can make you feel like that need is being threatened. Is there anything else you’d like to share about what you’re experiencing?”

This isn’t bad. I’ve given workshops on empathetic listening for years. If someone gave an empathetic response like this in a workshop setting, I’d say they were doing pretty well.

When people are just starting out with this modality, they all tend to sound robotic and alike. There is no personality in the response. Just a dry restating of the underlying feelings and needs. But the really good people find a way to add their personality back into it — and they just come off as more authentic.

So, in the AI version of the self-empathy app, I give users a choice of who they want to receive empathy from. I played around with the prompt engineering a bit and landed on a funny solution: Asking the AI to give an empathetic response in the style of a popular author tended to give the best results. The drop down menu lets users choose between Charles, Fannie, Tom, and Wallace (Charles Bukowski, Fannie Flagg, Tom Robbins, and David Foster Wallace).
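On the prompt side, that amounts to something like the sketch below. The persona mapping comes straight from the drop-down options, while the prompt wording here is illustrative rather than the app’s exact text.

// Map the drop-down choices to the authors whose style the AI should imitate
let personas = {
  "Charles": "Charles Bukowski",
  "Fannie": "Fannie Flagg",
  "Tom": "Tom Robbins",
  "Wallace": "David Foster Wallace"
}

function empathyPrompt(summarySentence, personaKey) {
  return "Give an empathetic response, in the style of the author " +
    personas[personaKey] + ", to the following statement:\n\n\"" +
    summarySentence + "\""
}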

The responses are much more interesting than the generic response — more varied as well.

Empathetic response from Fannie:

Oh sugar, I’m sorry to hear that they’ve hurt your feelings like that. It sounds like they may not have realized the impact of their words. And I understand how it can make you feel, like your sense of clarity and purpose have been shaken. But let me tell you, you are valued and worthwhile, and nothing they say can take that away from you. It takes real courage to stay true to ourselves, especially when facing criticism. So take a deep breath, hold your head up high, and know that you are just as deserving of respect and love as anyone else.

Or an empathetic response from Charles:

Well, my friend, sometimes people can’t help but judge. They measure us up against their own insecurities and shortcomings, and sometimes what they see doesn’t quite add up. But let me tell you, your worth isn’t determined by the judgments of others. You have purpose and meaning in this world, and no superficial criticism can ever take that away. Stay true to yourself and let your light shine bright, regardless of what anyone else may say.

These are good. Really good. They are totally different. Some people may have a strong aversion to one of these styles, but the other one might really work for them. And if not, there are two more options to choose from.

I’m still not sure whether this second AI feature is necessary for the app, or whether it’s a juicy feature that ends up derailing the whole thing. The original purpose was to encourage self-inquiry. The first AI feature is subtle and simply provides better starting suggestions; this is less distracting than obviously bad random suggestions and clearly improves the user experience. But the AI empathy response is just so interesting that I’m afraid it could take the focus away from the user’s self-inquiry process and place most of the attention on the final AI output.

I’m still thinking this one through though. Let me know what you think!

selfempathy.app

AI version