Emotional Support or Productivity? Understanding AI’s Boundaries
“We did not design Claude to provide emotional support. Claude is primarily a work tool.”
So often I see posts that seem to equate AI tool usage tied to well-defined, work-related outputs with usage that is more emotionally focused, digging deep into the “why” of the user rather than the “what” of the work task.
Are these conversations that tools like Copilot, ChatGPT, or Claude can have? Sure. But who is looking into whether they are actually any good at them? In a recent blog post and accompanying video, Anthropic has highlighted the work it is doing in this area.
Increasingly, whether AI chatbots can succeed at real-world tasks lies less in greater model IQ and more in access to relevant data and tools. This has allowed AI vendors to focus their development where it has the most impact - for example, the expansion of grounding data and tool options for Copilot declarative agents.
But how can we expand the capabilities of AI for emotional support purposes? The same context limitations apply, but whereas giving an agent access to your inbox is a purely technical problem, giving it access to the insights about your emotional state that other humans would intrinsically pick up on is a far greater challenge.
For this reason I am very dubious of suggested use cases for AI that step into these fluffier and more delicate areas. I feel we need to tread carefully, so it is pleasing to see a company like Anthropic doing deliberate research to understand these types of use cases and promote a safe way of engaging with them.
First posted on LinkedIn on 07/12/2025.