
Rethinking Trust in Social Robotics

How soon do you think you will encounter an autonomous robot in your day-to-day life? 

The question of trust between robots and humans will become the basis of the entire field of social robotics. Image credit: epSos.de via Wikimedia (CC BY 2.0)

If your answer was not “never”, you probably expect human-robot interaction to begin soon and to grow more frequent over time. When that happens, humans' trust in robots will be the most significant factor deciding its adoption. Rachele Carli and Amro Najjar discuss this in their research paper “Rethinking Trust in Social Robotics”, which forms the basis of the following text.

Importance of this research

Humans have a bias toward accepting things that are similar to themselves. Understanding human-robot interaction (HRI) is therefore vital to building an effective relationship of camaraderie between people and machines. Trust has been identified as a critical factor in humans accepting robots in their day-to-day lives. Understanding trust in HRI could facilitate that adoption, which would open up wide applications: robots can be efficient and reliable to an extent that humans cannot, and could also become companions, caregivers and entertainers. HRI is thus a topic of broad interest to the research community.

The researchers draw a clear distinction between the two concepts of “trust” and “trustworthiness”. They define trust as “the subjective probability by which an agent A expects that another agent B performs a given action on which its welfare depends”. Note that under this definition, trust is the perception agent A holds of agent B; trustworthiness, on the other hand, is a property of agent B itself.
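The paper itself is conceptual and offers no formalism beyond this definition, but the “subjective probability” framing maps naturally onto a simple Bayesian estimate. The Python sketch below is purely illustrative and not from the paper: it models agent A's trust in agent B as the expected success rate of B, updated from observed outcomes with a Beta-Bernoulli model. All names (TrustModel, observe, trust) are hypothetical.

```python
# Illustrative sketch (not from the paper): trust as a subjective
# probability. Agent A's trust in agent B is the posterior mean of a
# Beta distribution over B's rate of performing the expected action.

class TrustModel:
    """Agent A's subjective probability that agent B will perform
    a given action on which A's welfare depends."""

    def __init__(self, prior_successes: float = 1.0, prior_failures: float = 1.0):
        # Beta(1, 1) prior: no initial evidence either way.
        self.successes = prior_successes
        self.failures = prior_failures

    def observe(self, action_performed: bool) -> None:
        """Update the estimate after watching B act (or fail to act)."""
        if action_performed:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self) -> float:
        """Posterior mean of Beta(successes, failures):
        A's current subjective probability for B."""
        return self.successes / (self.successes + self.failures)


# Example: agent A watches robot B complete 8 of 10 deliveries.
model = TrustModel()
for outcome in [True] * 8 + [False] * 2:
    model.observe(outcome)
print(f"A's trust in B: {model.trust:.2f}")  # 0.75 with the uniform prior
```

Note how this captures the subjectivity in the definition: two observers with different priors, or different observation histories, will assign different trust values to the very same robot.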

The paper also notes that a robot will only perform what it is programmed to do, so a human being is ultimately responsible for the behaviour the robot exhibits. It therefore falls to the person who creates (or programs) the robot to protect the physical and psychological integrity of the people interacting with it.

Conclusion

The research paper by Rachele Carli and Amro Najjar presents insights to help robot creators understand the role of trust in an HRI setting and, from that, derive the level of trust required in each specific setting. In the words of the researchers,

The identification of the minimum level of trust, necessary for (i) an effective and efficient use of robotic devices and (ii) a mitigation – or even prevention – of the side-effects that can affect the users. This will allow to both favour technological development and guarantee that science will put human beings – as a whole – at the centre in such a development.

Investing in the material properties and quantitative analysis of social robots would make them genuinely more transparent, not merely make them appear so. This is a crucial point: as transparency increases, the problems of acquiring and maintaining trust in HRI can be addressed more effectively. Indeed, contrary to what the dominant research trend would suggest, trust and transparency are alternative elements. Designing for transparency means designing for control, instead of relying on a concept grounded more in personal and emotional factors than in rational, controlled choice. This does not mean eliminating trust from the acceptability equation or denying its relevance to robotics; it means rethinking its role. Technical experts could then focus on modulating the level of trust that guarantees the technology achieves its intended goal, without undermining the protection of the user's integrity.
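The “minimum level of trust” idea lends itself to a simple calibration check. The sketch below is again illustrative and not from the paper: it compares a user's measured trust (for instance, from a survey or the TrustModel above) against the robot's verified reliability and against a goal-specific minimum, flagging over-trust (the side-effect risk to the user) and under-trust (a barrier to effective use). All thresholds and names are hypothetical.

```python
# Illustrative trust-calibration check (hypothetical, not from the paper).
# measured_trust: the user's subjective probability for the robot.
# reliability:    the robot's empirically verified success rate.
# minimum_trust:  the minimum level needed for effective use of the device.

def calibration_advice(measured_trust: float,
                       reliability: float,
                       minimum_trust: float,
                       tolerance: float = 0.1) -> str:
    if measured_trust > reliability + tolerance:
        # Over-trust: the user relies on the robot more than its actual
        # behaviour warrants, the side-effect the researchers warn about.
        return "over-trust: increase transparency and user control"
    if measured_trust < minimum_trust:
        # Under-trust: the device will not be used effectively.
        return "under-trust: demonstrate reliability to the user"
    return "calibrated: trust is sufficient and warranted"


print(calibration_advice(measured_trust=0.95, reliability=0.75, minimum_trust=0.6))
# -> over-trust: increase transparency and user control
```

The design choice mirrors the researchers' argument: rather than maximising trust, the goal is to keep it just high enough for the technology to work while relying on transparency and control to handle the rest.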

Source: Rachele Carli and Amro Najjar, “Rethinking Trust in Social Robotics”
