This international special issue explores the legal issues associated with the development and use of robotic technologies. Ballas, Pelecanos & Associates LPC has contributed commentary on the liability issues arising from the operation of autonomous thinking machines.
GREECE
Artificial Intelligence Today
Two months ago, IBM announced that Watson, a cognitive computer system, will be used to analyse medical data and support clinical decision-making for therapists treating patients[1]. Watson was originally developed to answer questions on the quiz show Jeopardy!; in 2011, it beat the two greatest former Jeopardy! champions and received the first-place prize of one million dollars.
Communication with and use of Artificial Intelligence (AI) systems is part of everyday life in modern societies; examples include Apple’s Siri and Google’s Translate service. Signs of the AI boom are everywhere, as many big tech companies have been buying AI start-ups: Google has recently purchased at least nine robotics and AI companies for sums totalling billions of dollars; one of these is DeepMind, for which Google is rumoured to have paid 400 million dollars.
Deep Learning
Current developments in the AI field relate largely to “deep learning”, a new area of “machine learning” in which computer systems learn from experience and thus improve their performance over time, taking in data from all sorts of available sources (from research reports to YouTube videos and tweets) and using algorithms that have the ability to “learn”[2]. A broad categorization of machine learning tasks distinguishes between “Supervised Learning”, where computers are given example inputs together with their desired outputs and the goal is to learn a general rule that maps inputs to outputs[3], and “Unsupervised Learning”, arguably closer to true intelligence, where the learning algorithm is let loose on the data with no restrictions and permitted to draw whichever connections it wishes[4]. Given the practically unlimited information resources currently available (including Big Data), along with constantly improving computing power (note also the emergence of quantum computing[5]), it is fair to predict that machines using “Unsupervised Learning” will soon develop skills and powers of comprehension that will revolutionize the way decisions are made.
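For readers unfamiliar with the distinction, the following minimal sketch illustrates it in Python using the scikit-learn library (a choice of ours, not of the cited sources; the data and model choices are purely illustrative): a classifier is trained on labelled examples, while a clustering algorithm receives the same inputs without labels and is left to find structure on its own.

```python
# Illustrative sketch of supervised vs. unsupervised learning.
# scikit-learn and the toy data are assumptions for illustration only.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 200 two-dimensional points drawn from three groups.
X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised learning: the model sees inputs X *and* desired outputs y,
# and learns a general rule mapping inputs to outputs.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised predictions:", clf.predict(X[:5]))

# Unsupervised learning: the algorithm sees only X, with no labels,
# and finds whatever structure (here, clusters) it can on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster labels:", km.labels_[:5])
```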
Liability Issues
Obviously, if machines have the capacity to “think”, draw conclusions and, most importantly, make decisions and take actions, some of those actions may be contrary to the applicable rules and ethics. Who is to blame when a machine breaks the rules, and who is to be punished? For instance, how should the law apply: (a) to a drone which, acting on data it collects and processes using its AI capabilities, causes an accident while delivering a package for Amazon and damages its contents; (b) to a driverless car which attempts a manoeuvre to avoid a collision but unavoidably injures a pedestrian; or (c) to a UAV which fails to identify a military target and kills civilians instead?
When it comes to discussing liability issues, David Vladeck, professor of law at Georgetown University, draws the line between semi-autonomous and fully-autonomous machines[6]: a semi-autonomous machine will function and make decisions in ways that can be traced directly back to the design, programming and knowledge humans embedded in the machine, whereas a fully-autonomous machine will essentially depend on deep learning algorithms that allow it to decide for itself what course of action to take.
- Semi-autonomous Machines
It is suggested that semi-autonomous machines be treated as tools and instruments used by humans or legal entities, given that the way such machines “think” and act is controlled by humans, is predictable, and can easily and rationally be attributed to the machine’s programming and design. In such cases, product liability law can apply, probably without any adaptation to new technologies being required. Liability could be based, for example, on a manufacturing defect, a design flaw, poor programming, inadequate labelling and safety instructions, etc.
- Fully-autonomous Machines
From a legal point of view, things get interesting when liability issues related to fully-autonomous machines come under examination. A fully-autonomous machine will be able to “think” and act in ways wholly unattributable to the fault of the programmer, manufacturer or user of the machine. Liability models suggested by legal scholars range from strict liability[7] (i.e. where a person is legally responsible for damage caused by his acts or omissions regardless of culpability, so that there is no requirement to prove fault, negligence or intent) to liability assigned to the machine itself by attributing legal personhood to it[8].
The strict liability approach can be useful where it is practically impossible to identify the actual party at fault, or where, paradoxically, the party that seems liable is the machine itself, which currently has no legal capacity. Arguably, though, choosing among the actors involved, and possibly apportioning strict liability between them, is a particularly complex exercise for regulators. An alternative approach would be to attribute legal personhood to the machine and impose punishment on it directly. It is worth noting that legal personhood is a dynamic concept, adapted to the social needs of each era, and currently covers not only humans but also states, corporations, etc.
Conclusion
The first generation of fully-autonomous machines will “think” and act independently on the basis of information they collect and analyse, making highly consequential decisions for people’s lives. AI experts and theorists have introduced the term “singularity” to describe the point at which AI will exceed human intelligence, and its consequences for human life[9].
Today, it is crucial to understand this new technology and how it affects society already and will affect it in the future, so that it can be regulated effectively, taking into account the vital interests of all actors involved and affected, and ultimately making the most of it for society. The key elements to consider are, in our view, innovation and safety.
On the one hand, as with previous innovative technologies (a major one being the Internet), regulation should not stifle innovation; on the contrary, it should incentivize responsible innovation in the highly promising field of AI. On the other hand, consumer and user safety is, and should always be, a prime consideration for regulators. The liability model chosen will essentially shape the balance between the two. A predictable liability model coupled with a cost-spreading approach (i.e. a model in which costs, including insurance costs, are reasonably absorbed by most actors involved in the machine’s production chain and also by the machine’s owner) can arguably serve innovation better than a liability regime that depends on an unrealistic and impractical search for and assignment of fault[10].
[1] http://techcrunch.com/2015/05/10/ibms-watson-wants-to-help-pick-a-therapist-for-you/
[2] Surden, Harry, Machine Learning and Law (March 26, 2014). Washington Law Review, Vol. 89, No. 1, 2014. Available at SSRN: http://ssrn.com/abstract=2417415
[3] http://en.wikipedia.org/wiki/Machine_learning
[4] Zimmerman, Evan Joseph, Machine Minds: Frontiers in Legal Personhood (February 12, 2015). Available at SSRN: http://ssrn.com/abstract=2563965 or http://dx.doi.org/10.2139/ssrn.2563965
[5] http://www.bbc.com/news/science-environment-32534763
[6] Vladeck, David C., Machines Without Principals: Liability Rules and Artificial Intelligence, Washington Law Review, Vol. 89, No. 1, 2014, p. 117. Available at: http://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1322/89WLR0117.pdf?sequence=1
[7] Vladeck, supra note 6; Calo, Ryan, Robotics and the Lessons of Cyberlaw (February 28, 2014). California Law Review, Vol. 103, 2015; University of Washington School of Law Research Paper No. 2014-08. Available at SSRN: http://ssrn.com/abstract=2402972 or http://dx.doi.org/10.2139/ssrn.2402972; Kelley, Richard and Schaerer, Enrique and Gomez, Micaela and Nicolescu, Monica, Liability in Robotics: An International Perspective on Robots as Animals (January 01, 2010). 24 Advanced Robotics 13 (2010). Available at SSRN: http://ssrn.com/abstract=2271471
[8] Hallevy, Gabriel, The Criminal Liability of Artificial Intelligence Entities (February 15, 2010). Available at SSRN: http://ssrn.com/abstract=1564096 or http://dx.doi.org/10.2139/ssrn.1564096; Zimmerman, supra note 4.
[9] See among others: Kurzweil, Ray, The Singularity Is Near: When Humans Transcend Biology, 2006; Barrat, James, Our Final Invention: Artificial Intelligence and the End of the Human Era, 2015.
[10] Vladeck, supra note 6.