In an ideal world, we would delegate all repetitive work to AI algorithms, which can accomplish many tasks faster and more efficiently than their human counterparts. This frees humans to dream big, something machines cannot do independently.
Image credit: Max Pixel, CC0 Public Domain
This philosophy of letting algorithms free humans from repetitive tasks has driven enormous interest in Artificial Intelligence. However, these algorithms sometimes work as a 'black box'. From a Domain Practitioner's (DP) perspective, it is necessary to define roles, requirements, and responsibilities for the deployment of AI systems.
Iain Barclay and Will Abramson discuss this in their research paper titled "Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems", which forms the basis of the following text.
Importance of this research
AI algorithms are widely used, even in critical applications such as recruitment, credit scoring, and policing. Since these applications profoundly affect people's lives, it is necessary that professional domain practitioners carefully assess AI algorithms before deploying them in the real world. Ideally, deployed algorithms should be transparent while at the same time protecting users' privacy. Because these two motivations conflict, it is imperative to identify roles, requirements, and responsibilities in AI systems. Defining roles and responsibilities for everyone involved in the implementation process, such as Domain Practitioners, Systems Integrators, ML Engineers, and Data Scientists, would make the AI systems we intend to implement more trustworthy and increase their acceptance.
About the Research
The paper proposes a framework to help Domain Practitioners understand AI systems. This framework helps identify potential tensions between stakeholders in the system, which has implications for designing AI systems in practice.
The paper discusses related work and presents a detailed analysis of the different roles and responsibilities in an AI system. Kindly refer to the image given below for reference.
Image outlining the roles, requirements, and responsibilities of the Domain Practitioner, Systems Integrator, ML Engineer, and Data Scientist for successfully implementing trustworthy AI systems. Source: https://arxiv.org/pdf/2106.08258.pdf
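The interlinked dependencies between these roles can be pictured as a simple directed graph. The sketch below is purely illustrative and not taken from the paper; the linear Domain Practitioner → Systems Integrator → ML Engineer → Data Scientist chain is an assumption made for demonstration, and the paper's actual framework is richer.

```python
# Illustrative sketch only (assumed structure, not from the paper):
# modeling the chain of roles a Domain Practitioner ultimately relies on
# for trustworthy information as a directed dependency graph.
ROLE_DEPENDENCIES = {
    "Domain Practitioner": ["Systems Integrator"],
    "Systems Integrator": ["ML Engineer"],
    "ML Engineer": ["Data Scientist"],
    "Data Scientist": [],
}

def information_chain(role, deps=ROLE_DEPENDENCIES):
    """Return every upstream role the given role ultimately depends on."""
    chain = []
    for upstream in deps.get(role, []):
        chain.append(upstream)
        chain.extend(information_chain(upstream, deps))
    return chain

print(information_chain("Domain Practitioner"))
# ['Systems Integrator', 'ML Engineer', 'Data Scientist']
```

Walking the graph makes explicit that the DP's decision rests on information passed along the whole chain, which is why the paper argues for an information-sharing environment among actors who often have no direct relationship.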
In the words of the researchers,
As AI systems are increasingly being implemented and integrated into the fabric of society, developing better practices around their development and deployment is critical. Despite a plethora of evidence pointing to biases encoded into some algorithms by their designers and the data used to train them, they are being used in hiring processes, credit scoring and policing. Protecting citizens from such systems is a shared responsibility, but on a day-to-day level, citizens have no choice but to place their confidence in the hands of DPs to have adopted suitable tools for their work. DPs have an unenviable task, but without reliable information on the constitution of AI systems, it is made almost impossible.
Clarifying the different roles and the hierarchy of interlinked dependencies between these roles in an AI system helps to define expectations and establish a specification for an information sharing environment among actors, where often there is no direct relationship.
Central to this requirement is a need for the DP to be provided with support and tools to enable them to make assured decisions on the appropriate application, or abandonment, of a particular AI system for their domain, based on trustworthy and useful information being provided to them from all contributors to aid in their vital decision making.
Source: Iain Barclay and Will Abramson's "Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems"