While the European Commission today feels it necessary to specify a set of rules, notably on liability, transparency and accountability, for AI and robotics companies, 220 experts in robotics and Artificial Intelligence ethics signed an open letter in April 2018 urging the Commission not to make robots legally liable for their acts or omissions.
Indeed, the idea of the robot’s legal liability is based on our fears and our fantasies, not on the reality of AI and robotics today. Yes, autonomous drones do exist, and they can decide by themselves what path to take between points A and B while avoiding obstacles and travelling as efficiently as possible. Are they completely out of control or uncontrollable? Certainly not. The very same drone can, for example, transport either deadly weapons in times of war or first aid equipment to hard-to-reach areas after a natural disaster. The purpose is determined by humans and programmed into the machine in advance by humans. If we are to be afraid, we must be afraid of the ways human beings may wish to use this technology, not of the technology itself. It is as absurd as blaming the axe for injuring the lumberjack. By holding machines or technology liable, we let human beings off the hook, and this is not a good thing.
When we speak of robot liability, what robots are we talking about?
Until recently, robots were automatons that repeated the same movement over and over, quickly and precisely. They were locked in cages and had no information about their environment. “New” robotics is based on systems made up of four or five components:
1. Sensors to gather information from the physical environment
2. A processor and software (AI or simpler software) to analyze this data
3. Actuators to carry out the planned task in the physical world
4. A human-machine interface for programming, improvement, task changes, etc.
5. Connectivity to communicate with other systems (data exchange)
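The components above can be pictured as a minimal sense-plan-act loop. The sketch below is purely illustrative: the class, the toy “drone” and all names are invented for this article, not taken from any robotics framework. It also makes the article’s central point concrete: the path the machine “decides” on is entirely determined by the human-written planner.

```python
# Minimal sense-plan-act loop illustrating the components listed above.
# All names here are hypothetical, for illustration only.

class Robot:
    def __init__(self, sensor, planner, actuator):
        self.sensor = sensor      # 1. sensors: read the physical environment
        self.planner = planner    # 2. processor/software: analyze the data
        self.actuator = actuator  # 3. actuators: act on the physical world

    def step(self):
        reading = self.sensor()          # sense
        action = self.planner(reading)   # plan (AI or simpler software)
        return self.actuator(action)     # act

# 4. The human-machine interface here is the code itself: a human changes
#    the robot's task simply by swapping in a new planner.
# 5. Connectivity would mean sending `reading`/`action` to other systems.

# Toy example: a "drone" moving toward a target while avoiding an obstacle.
position = {"x": 0}

def sensor():
    return {"position": position["x"], "target": 5, "obstacle": 3}

def planner(data):
    step = 1 if data["position"] < data["target"] else 0
    if data["position"] + step == data["obstacle"]:
        step = 2  # human-written rule: step over the obstacle
    return step

def actuator(step):
    position["x"] += step
    return position["x"]

drone = Robot(sensor, planner, actuator)
path = [drone.step() for _ in range(4)]
print(path)  # [1, 2, 4, 5] -- fully determined by the human-written planner
```

The “autonomy” is bounded: every branch the drone can take was written, in advance, by a person.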
Today, thanks to sensors, robots are out of their cages and are sharing working and living areas with human beings. Thus, robotics can now be implemented in new industries, in services and even right in our homes. This “new” robotics converges with AI and connects to networks to share and use data.
The issue of robot liability arises first from this shared space between robots and humans (for example, if a robotic arm accidentally strikes and kills a human being). Laws and standards have been established to govern this “cooperation”, based on risk assessment and risk-reduction measures. The “robot” itself cannot be held liable: liability falls on the installer or integrator in cases of normal operation, on the manufacturer in the event of a defective machine, and on the user if they fail to comply with the conditions of use.
The question of liability becomes potentially more “problematic” as innovation puts AI front and center. When combined with Big Data, Machine Learning, the feedback loop that enables a machine to learn from data, can be unsupervised (fueling the fear of loss of control) or “supervised” (with a human taking part in the learning loop, for example through imitation or reinforcement). Deep Learning, a branch of Machine Learning, combines large amounts of processing power with statistical correlation software to analyze masses of data. Examples include medical diagnostics using image recognition software on a large number of MRIs, or the analysis of legal documents to assist lawyers in their research.
Scientists strive above all to prove their models are relevant, so their issue becomes “transparency”: whether the results obtained can be understood and explained. This also involves vetting the input data. Feed “one-sided” or biased data into the software and the results at the output will be biased, potentially reproducing the biases of our current world (racism, misogyny, etc.). In fact, correcting an algorithm to counterbalance or remove bias may prove faster, and even easier, than changing our culture.
Finally, automating certain processes using AI or simple software leaves less room for contextual data, glitches, unforeseen circumstances or special cases (here again, a potential loss of control). For the time being, sensor-equipped, “artificially intelligent” robots that are autonomous and connected have neither a human being’s intelligence nor the ability to analyze context and prioritize information. For example, they are incapable of summarizing the meaning of a text.
“Protection” of users thus comes more from checking the input data and the algorithmic models used in unsupervised learning, and perhaps from interfaces that let users override the robot in the event of “doubt” about the returned results, or in unforeseen situations requiring special handling. Here, the question is not liability, but rather the implementation of rules and standards for such devices.
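One way to read “interfaces that let users override the robot in the event of doubt” is a simple confidence threshold: below it, the machine defers to a human. A minimal sketch, in which the threshold value and all function names are assumptions made for this illustration:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed value; in practice set by risk assessment

def decide(prediction, confidence, ask_human):
    """Return the machine's result when confident, otherwise defer to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction, "machine"
    return ask_human(prediction), "human"

# Usage: the "robot" proposes a diagnosis; a human reviews doubtful cases.
result, source = decide("benign", 0.95, ask_human=lambda p: p)
print(result, source)  # benign machine

result, source = decide("benign", 0.55, ask_human=lambda p: "needs review")
print(result, source)  # needs review human
```

The design choice matters for liability: the threshold, and who answers when the machine defers, are decisions made and documented by humans, which is exactly where rules and standards can attach.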
As regards Machine Learning through reinforcement or imitation, it may prove difficult to distinguish what comes from the manufacturer, the integrator or the user. Insurance professionals must think about this and make proposals, building on the European Parliament report on civil law rules on robotics drafted by rapporteur Mady Delvaux-Stehres.
One special case involves self-driving vehicles: “Zero-risk self-driving vehicles do not exist. The MEPs point out that harmonized rules are particularly necessary for driverless cars. They are calling for a mandatory insurance system and for a fund to ensure victims are fully compensated in the event of accidents caused by this type of vehicle.”
After the Internet and Big Data, two major revolutions in the insurance ecosystem, what role will insurance play in supporting this robotics and AI transformation?
Catherine Simon / 03/01/2019