Introduction
Fully autonomous artificial intelligence systems, such as robots, feature constantly in science fiction movies and books, and have therefore reached the minds of the vast majority of the world's people. We are already used to them as one of the most common "species" in popular culture. The dominant belief nowadays among IT specialists, legal researchers and scientists, however, is that fully autonomous artificial intelligence systems are due to step off the movie screens and become part of the world community in the near future. Yet if the creation of these subjects is one important task, another crucial aim is to determine the upcoming legal status of fully autonomous artificial intelligence subjects. Humanity deserves to know whether artificial intelligence entities should be empowered with the same rights and responsibilities as human beings, or whether they should be granted fewer of them. In other words, the community must know whether there are sufficient reasons to allow humans to make sales contracts with artificial intelligence subjects, to employ them, or even to grant them such powers as enacting laws or judging in court, as well as many other rights assigned only to human beings at present. In order to approach the answer to this question, several aspects must be taken into account. Firstly, the issues of morality and consciousness, attributed to capable natural persons, are of primary importance in any discussion about granting legal capacity to a particular artificial entity. The second problem discussed in this article is the possibility of highly advanced artificial intelligence subjects taking control of the world, which might lead to them becoming the dominant race on the planet. The third consideration discussed is the criminal liability of artificial intelligence subjects.
The aspect of morality
To begin with, the aspect of morality ought to be taken into account in relation to the topic of this article. Starting with the definition of the term, the Oxford Dictionaries explain it as "[p]rinciples concerning the distinction between right and wrong or good and bad behavior". There are many different understandings of what is "right" and "wrong" or "good" and "bad", so even the term itself is worth consideration. This leads us to the following questions concerning morality. Firstly, how can we define it? Secondly, can this feature be a part of an artificial intelligence subject? And is morality relevant at all when discussing whether to grant legal rights to artificial intelligence subjects?
Back in 1974, the renowned computer scientist Joseph Weizenbaum, a professor emeritus at the Massachusetts Institute of Technology (MIT), answered these questions by stating that some jobs should never be performed by artificial intelligence, because certain positions, such as those of a judge, a doctor or a therapist, require morality, which belongs exclusively to natural persons. Weizenbaum stated that we seek authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity (Weizenbaum, 1974). To summarize the professor's ideas in relation to the questions raised at the beginning of this chapter: in the present case he defined morality as an ability to have feelings of empathy, which cannot be attributed to artificial intelligence subjects. Moreover, morality in Weizenbaum's opinion is relevant to granting advanced artificial intelligence subjects certain rights, in particular the right to perform the aforementioned jobs, because a lack of empathy will result in other people feeling alienated, devalued and frustrated.
Four decades later, the American lawyer and bioethicist Wesley J. Smith, a Senior Fellow at the Discovery Institute's Center on Human Exceptionalism, went even further: he argued that no artificial intelligence could ever gain any rights, and that these subjects must be limited to the status of things. His main argument is that only humans are moral beings by nature and that artificial intelligence would have no such inherent characteristics. Smith also added that even though the existence of a soul in human beings is not yet proven, artificial intelligence surely does not possess one. Further, Wesley J. Smith explained that "we [humans] don't just make decisions based on raw data and logic. We are moral agents, who sometimes refuse to do the logical thing because we consider it wrong. We are emotional beings. We are impulsive. We are risk takers. We are so much more than mere computers, which is how some anti-human exceptionalists like to describe us." (Smith, 2015). Summarizing this researcher's ideas against the questions raised at the beginning of this chapter, it can be said that Wesley J. Smith defines morality as a refusal to do logical things in a situation where a person considers them wrong. The second and third questions should be answered as follows: artificial intelligence subjects can never be moral, and, yes, morality is relevant to granting legal capacity to completely autonomous artificial intelligence subjects. Consequently, in Smith's opinion, these subjects should never be given the aforementioned rights.
In opposition to these arguments, Lawrence B. Solum, a law professor at Georgetown University, argues that the stance that no one other than natural persons can be granted the same legal capacity is itself immoral. The professor draws a historical parallel and states that such an opinion is equivalent to stating that slaves do not have certain rights just because they are not white (Solum, 1992, 2008). Solum also has a counter-argument to the idea that artificial intelligence systems have no souls. He argues that the statement "artificial intelligence systems do not have souls" is a religious and theological statement which would fail in the legal arena. Solum states that political and legal decisions must be justified on grounds that are public, and public reason cannot rely on particular comprehensive religious or philosophical conceptions (Solum, 1992, 2008). These ideas lead to the conclusion that Lawrence B. Solum would answer the questions we raised regarding morality differently than the professors mentioned before. Firstly, the researcher does not explicitly define what is moral; rather, he explains the immorality of not granting legal capacity to fully autonomous artificial intelligence subjects. He uses slavery as an example of immorality and equates it with the refusal to grant legal capacity to fully autonomous artificial intelligence subjects. As for the second question, the researcher is silent about machines' ability to possess morality; however, he sees no reason to discuss it in terms of having a soul, because for Lawrence B. Solum this is merely a theological or religious consideration, which would fail on legal grounds. Regarding the third question, the researcher, in speaking about slavery, implies that morality itself is relevant.
The aspect of consciousness
Starting with the definition, consciousness is "the state of being aware of and responsive to one's surroundings", as explained by the Oxford Dictionaries. It is also one of the necessary features for a person to be legally capable: without being aware of and responsive to one's surroundings, a person is either dead or incapable of participating in legal relations.
Speaking about the consciousness of artificial intelligence subjects, Wesley J. Smith questions it, quoting the Stanford physician and bioethicist William Hurlbut: "Human consciousness is not mere computation. It is grounded in our full embodiment and intimately engaged with the neural apparatus associated with feeling and action." "In other words", as Smith states, "human thought arises from a complex interaction of reason, emotion, abstract analysis, experience, memories, education, unconscious motivation, body chemistry, and so on. That can never be true of artificial intelligence robots. Even if an artificial intelligence machine were to attain unlimited processing capacities, it wouldn't be sentient, just hyper-calculating."
Lawrence B. Solum also has a response to these thoughts. Firstly, the professor states that if consciousness is a product of the brain, and the brain's processes could be recreated, then artificial intelligence might possess consciousness. Adding to that point, Solum explains that even if it turns out that only the neurons of a human body can generate consciousness, it still does not follow that no legal rights and responsibilities should be ascribed to artificial intelligence. The professor gives the example of an artificial intelligence subject filing an action for emancipation based on the Thirteenth Amendment to the US Constitution. Solum is convinced that if the owner's attorney argued that the artificial intelligence is only a machine and has no consciousness, while the artificial intelligence argued the opposite, the outcome could vary widely. But in Solum's opinion, the artificial intelligence should have the advantage in this situation, because no person has direct access to another subject's mind. Artificial intelligence may have a different type of consciousness, but since its expression of will is the same as that of a natural person, there is no reason why rights and responsibilities should not be granted to it[1]. To illustrate this, the professor gives a charming example: no single person can prove for certain that their neighbour is not a zombie (Solum, 1992, 2008).
The "overtaking" argument
The argument explained and discussed in this chapter is the possibility of fully autonomous artificial intelligence systems taking control of the entire world. The main idea is this: if people are bound to manufacture subjects possessing intellectual capabilities equal to or greater than those of humans, is there not a threat that the latter will cease to be the dominant race on Earth?
Rob van den Hoven van Genderen, a law professor at the Vrije Universiteit Amsterdam, states that such a danger exists and argues that granting legal capacity even to the most advanced artificial intelligence might result in harmful consequences for people. The professor explains: "[i]t is essential that we, as natural human being[s], keep control over the system. We would not want to be confronted with autonomous systems, collecting all kind of personal information to be used for their own purposes? We are better to use our electronic or better technology based servants to assist us in the practical executions of our tasks. The more intelligent the system is the more trustworthy will be its functionality."
Lawrence B. Solum, however, has a counter-argument in this case as well. He calls the "overtaking" argument "the paranoid anthropocentric argument" and opposes it with the following thoughts. He indicates that it is impossible to take this argument seriously, because if some robotic technology truly posed a danger to humans, the only solution would be not to manufacture robots at all. Moreover, Solum believes that this danger is remote and should not be the criterion that decides whether artificial intelligence is granted legal capacity (Solum, 1992, 2008).
James Hughes, PhD, one of the most prominent authors of the modern democratic transhumanist movement, takes a position between the rather radical views mentioned above. He states that "since the technologies will most likely not be stopped, democrats need to engage with them, articulate policies that maximize social benefits from the technologies, and find liberatory uses for the technologies. <…> The mission of the Left is to assert democratic control and priorities over the development and implementation of technology." This means that Hughes speaks of controlling technological advancement, but, as a democrat, he also finds it necessary to show solidarity with this possible minority, as with the other minorities living in the world today. He also draws a historical parallel: "the post-human future will be as threatening to unenhanced humans as gay rights or women's liberation have been to patriarchs and homophobes, or immigrant rights are to nativists. While libertarian transhumanists may imagine that they will be able to protect themselves if they are well-armed and have superior reflexes, they will be severely outnumbered. Nor is civil war an attractive outcome. Rather transhumanists must understand their continuity with the civil rights movements of the past and work to build coalitions with sexual, cultural, racial and religious minorities to protect liberal democracy. We need a strong democratic state that protects the right of avantgarde minorities to innovate and experiment with their own bodies and minds." (Hughes, 2002) To sum up, Hughes does not envision the advancement of artificial intelligence as a threat and calls for solidarity with these subjects. In his view, however, control of this development should remain concentrated in human hands.
The criminal liability of fully autonomous artificial intelligence subjects
David C. Vladeck, a law professor at Georgetown University, draws attention to the fact that one day robots may be independent and not controlled by humans. In that situation, if an artificial intelligence subject commits a crime, someone must be liable for the resulting damages and should be punished. The main question is: who? The professor poses an open consideration: "if no one controls the robot, no other person is responsible for damages. So, wouldn't it be fair to punish the artificial intelligence subject?" (Vladeck, 2014)
Gabriel Hallevy, a professor of high-tech law at the Ono Academic College in Israel, agrees with this thought and suggests that an artificial intelligence subject should bear criminal liability if it can understand that the actions it performs are against the law of that particular country. Hallevy states that "when an artificial intelligence robot activates its electric or hydraulic arm and moves it, this might be considered an act, if the specific offense involves such an act. For example, in the specific offense of assault, such an electric or hydraulic movement of an artificial intelligence robot that hits a person standing nearby is considered as fulfilling the actus reus [external] requirement of the offense of assault. Attributing the internal element of offenses to artificial intelligence entities is the real legal challenge in most cases. Attributing the mental element differs from one artificial intelligence technology to the other. Most cognitive capabilities developed in modern artificial intelligence technology are immaterial to the question of the imposition of criminal liability. Creativity is a human feature that some animals possess, but creativity is not a requirement for imposing criminal liability. Even the least creative persons are held criminally liable. The only mental requirements needed in order to impose criminal liability are knowledge, intent, negligence, etc., as required in the specific offense and under the general theory of criminal law."
Here we might recall Wesley J. Smith's argument about the consciousness of artificial intelligence subjects and ask: can "a slave of algorithms", as the professor has put it, ever have a consciousness of its own? If the answer is no, then Smith strictly implies that no legal capacity, including criminal liability, can be granted to artificial intelligence. Furthermore, the main purpose of criminal law would be negated, because such a subject would not realize why it was sentenced and would be unable to correct its behavior in the future.
Moreover, the philosopher of science and technology Peter M. Asaro offers his own thoughts about the liability of artificial intelligence: "[i]n the most straightforward sense, the law has a highly developed set of cases and principles that apply to product liability, and we can apply these to the treatment of robots as commercial products. As robots begin to approach more sophisticated human-like performances, it seems likely that they might be treated as quasi-agents or quasi-persons by the law, enjoying only partial rights and duties. A closely related concept will be that of diminished responsibility, in which agents are considered as being not fully responsible for their own actions." The main idea of this quote is similar to Gabriel Hallevy's thought that a certain sophistication of artificial intelligence systems is required in order to impose criminal liability on them. However, Peter M. Asaro envisions a gradual increase in the rights and responsibilities gained by artificial intelligence, an increase that would mainly depend on the development of the artificial entity: the more advanced it is, the more rights and responsibilities it gains. What is more, even though this author admits that granting full legal personhood is possible, he finds this scenario very distant and unclear. Peter M. Asaro indicates: "We saw in the previous section that it is more likely that we will treat robots as quasi-persons long before they achieve full personhood."
Moreover, the scholar's thoughts are in accordance with those of Wesley J. Smith regarding the importance of morality, in this case in the light of the criminal liability of artificial intelligence systems. Peter M. Asaro believes that being a moral agent is necessary for criminal liability: "Moral agency is deeply connected to our concepts of punishment. Moral agency might be defined in various ways, but it ultimately must serve as the subject who is punished. Without moral agency, there can be harm but not guilt. Thus, there is no debt incurred to society unless there is a moral agent to incur it–it is merely an accident and not a crime." The scholar also considers deterrence, one of the main functions of criminal law, in relation to the importance of morality. Asaro indicates that "deterrence only makes sense when moral agents recognize the similarity of their potential choices and actions to those of another moral agent who has been punished for the wrong choices and actions–without this reflexivity of choice by a moral agent, and recognition of similarity between moral agents, punishment cannot possibly result in deterrence."
Conclusions
Various qualified law and robotics professors cannot reach a consensus concerning the legal capacity of robots, even of those which are fully autonomous. Regarding morality, one group of researchers firmly argues that morality is exclusively a feature of natural humans and, furthermore, a principal precondition of legal capacity. Another group, in contrast, thinks that if an entity is intelligent enough, it would be immoral not to grant rights and responsibilities to it. The next object of dispute is consciousness. One segment of prominent professors states that a machine operated by algorithms could never be conscious and understand its actions and, because of that, should not gain legal capacity. Others respond that it is impossible to examine the principles by which the consciousness of an artificial intelligence works, and since such an examination is impossible, the behaviour of an artificial intelligence becomes the most important issue. On the possibility of artificial intelligence overtaking humanity and using its advanced capabilities for its own ends, opinions also differ; one researcher, namely Lawrence B. Solum, calls these thoughts "paranoid anthropocentric" views. Finally, the very sensitive topic of the criminal liability of fully autonomous subjects comes into place, and ideas supporting the criminal liability of artificial intelligence are much easier to find, mainly because of the principle that every crime must have its perpetrator and someone has to be punished for it. This principle, however, has a few exceptions. The most important one is that no person can be punished if he does not understand his actions; there are also opinions that morality is relevant for imposing criminal liability. Clearly, these issues are difficult to solve when speaking about the criminal liability of highly advanced artificial intelligence, because there is a lack of knowledge regarding the morality and understanding abilities of artificial intelligence subjects.
It is clear that the question of the criminal liability of artificial intelligence, and of legal capacity in general, would be much easier to answer if one had more information about the thinking processes inside artificial intelligence subjects. However, even with the amount of information humanity possesses at present, there is a wide spectrum of thought concerning this matter.
List of sources
- Asaro, Peter M. "Robots and responsibility from a legal perspective." Proceedings of the IEEE (2007): 20–24.
- Hallevy, Gabriel. The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control. Akron Intellectual Property Journal, Vol. 4:171, 2010, pp. 187, 188, 199.
- Hughes, James. Democratic Transhumanism 2.0, 2002. [visited 2016-10-15] <http://www.changesurfer.com/Acad/DemocraticTranshumanism.htm>
- Smith, Wesley J. AI Machines: Things Not Persons. First Things [interactive], 2015. [visited 2016-11-04] <https://www.firstthings.com/web-exclusives/2015/04/ai-machines-things-not-persons>
- Solum, Lawrence B. Legal Personhood for Artificial Intelligences. North Carolina Law Review, Vol. 70, 1992; reprinted in Illinois Public Law and Legal Theory Research Papers Series, No. 09-13, 2008, pp. 1258–1266.
- Van den Hoven van Genderen, Rob. Robot Law, a Necessity or Legal Science Fiction? Machine Medical Ethics and What About the Law?, 2013. [visited 2016-11-12] <http://www.switchlegal.nl/robot-law-a-necessity-or-legal-science-fiction-machine-medical-ethics-and-what-about-the-law/>
- Vladeck, David C. Machines Without Principals: Liability Rules and Artificial Intelligence. Washington Law Review, Vol. 89:117, 2014, pp. 122–123.
- Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. New York, San Francisco: W. H. Freeman and Company, 1974; quoted in McCorduck, Pamela. Machines Who Think (2nd ed.). Natick, Mass.: A. K. Peters, 2004, pp. 356, 374–376.
- Oxford Dictionaries. [visited 2017-04-25] <https://en.oxforddictionaries.com/definition/morality>
- Oxford Dictionaries. [visited 2017-04-28] <https://en.oxforddictionaries.com/definition/consciousness>
[1] Or to him. We do not yet know whether to treat a fully autonomous artificial intelligence subject as a thing or as a human being.