But can these robots actually be compassionate? This was one of the questions tackled at a workshop on the science of compassion I recently took part in, jointly organised by Edinburgh and Stanford Universities in California. Compassion is core to our shared humanity, and sees us respond to suffering by going out of our way to help our fellow humans.
My response is that – for now – robotic care is an oxymoron, a contradiction in terms. But there is a silver lining to that cloud: even current artificial intelligence can perhaps support human compassion in a valuable way.
Now with feeling
The main problem with current robots is this: their goal is to act compassionately, that is, to perceive and respond suitably to emotional and physical needs. But to do that, an artificial system does not need to actually have emotions of its own.
The Paro robot seal is a smart cuddly toy that can produce emotionally relevant responses without having a complex emotional life of its own. Many other, more complex robots are just like this: they (usually) do the right thing, but they don’t have inner lives like ours.
And that’s the trouble. We rightly criticise humans – nurses, doctors and care workers – if they merely act as if they were compassionate, without feeling anything. Indeed, we say that they are just “robots” if they go through the checklist without engaging. If “doing the right thing” isn’t enough for humans, how could it possibly be for artificial intelligences?
Devices that deceive
As a side note, we can see a real hazard already looming for designers of intelligent systems: the designers know that the artificial agents don’t care. So is it ever ethical to let other people think that they do? In my own work, I have built and tested systems with some aspects of simulated human personality. If computers can respond to human events and messages with style and character, then they have a kind of social presence.
However, I have been convinced – by Dr Joanna Bryson (MSc Knowledge Based Systems 1992, MPhil 2001), an Edinburgh alumna now working at the University of Bath – that playing up social presence risks implying not just social agency but moral agency. And that is wrong, because current robots are just tools. So they are not responsible for their actions: their designers are. Arguably, we should not play up personality and social presence for now.
Could we make artificial intelligences that really did give and receive compassion? We would need at least two things, both a little tricky.
First, we need devices with the internal analogues of emotions. Beyond that, they may have to be able to reflect on what having the emotion means. At the workshop in Stanford, Edinburgh’s principal, Professor Timothy O’Shea, who has long-standing interests in machine learning, coined the term “artificial compassion” to cover what this kind of system might have to achieve. It’s like human compassion in the same way that machine learning is similar to – but not identical with – human learning.
Secondly, to perceive and attribute compassion, we depend on a fundamental recognition of the shared humanity of carer and cared-for. My colleague Henry Thompson points out that developing moral agency requires co-participation in a range of social contexts, which presupposes that moral agency is at least possible in principle.
It’s true that we allow children into these contexts as they grow up, as a means of teaching them moral values. But we do that because we know it works: we were once just like them, and we managed to become moral agents. So robots would also have to be accepted into similar contexts so that they can gain the right skills. And to be accepted, they would have to “look right”.
Looking right will help machines be accepted into normal human social contexts, so that they can then learn robust cognitive and moral skills. It’s not impossible to build robots that have artificial emotions and look human enough – Cynthia Breazeal’s Kismet robot at Massachusetts Institute of Technology was a great example.
But acceptance into the broader community may prove even harder than the technical challenges, because as recent events demonstrate, humans are still not great at extending a welcome to incomers who look a bit different. And that acceptance is critical.
So there is a long road from where we are now to robots that would be accepted as genuinely compassionate. But there is a consolation nearby: current generation artificial intelligence could be ready to help teach and cultivate compassion.
Robot as teacher
We already see intelligent tutoring systems with minimal intelligence being used to train individuals with social skills deficits, helping them learn to deal better with other people. Such a system encodes best tutoring practice based on human experience, without having had any of those experiences itself.
On top of this, tutoring systems have been given the ability to recognise the emotions and moods of their tutees, to help guide their interventions. Putting these two elements together, even machines without true compassion could help teach people about compassion.
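To make the idea concrete, here is a minimal sketch of such a tutoring step. Everything in it is invented for illustration: the mood keywords, the `detect_mood` and `choose_intervention` functions, and the intervention rules are toy stand-ins. Real tutoring systems infer affect from much richer signals (speech, facial expression, interaction logs) and encode far more nuanced pedagogy.

```python
# Illustrative sketch only: a keyword-based mood detector feeding a
# rule table of tutoring interventions. All names and rules are
# hypothetical examples, not any real system's design.

MOOD_KEYWORDS = {
    "frustrated": {"stuck", "confusing", "impossible", "give up"},
    "anxious": {"worried", "nervous", "afraid"},
    "engaged": {"interesting", "got it", "makes sense"},
}

def detect_mood(utterance: str) -> str:
    """Crudely guess the tutee's mood from keywords in their message."""
    text = utterance.lower()
    for mood, cues in MOOD_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return mood
    return "neutral"

def choose_intervention(mood: str) -> str:
    """Map the detected mood to a simple best-practice tutoring move."""
    return {
        "frustrated": "Acknowledge the difficulty, then break the task into smaller steps.",
        "anxious": "Reassure the tutee and revisit a problem they already solved.",
        "engaged": "Offer a stretch question to build on the momentum.",
        "neutral": "Continue with the planned exercise.",
    }[mood]

mood = detect_mood("This is so confusing, I want to give up.")
print(mood)                      # frustrated
print(choose_intervention(mood))
```

Even this toy version shows the shape of the argument: the system responds appropriately to a tutee's emotional state while feeling nothing itself, which is exactly the gap between acting compassionately and being compassionate.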
We can definitely develop intelligent tutoring systems that explore scenarios, advise, and make trainee human carers think harder, and feel harder. Such systems would be interactive compilations of human experience. And in that respect, they would follow in the footsteps of IBM’s Deep Blue and Watson, and Google DeepMind’s AlphaGo. Is the time ripe for Deep Compassion?