Picture source: http://www.legaltechnology.com/latest-news/artificial-intelligence-in-law-the-state-of-play-in-2015/
Using artificial intelligence to create “thinking” machines is a controversial issue in today’s world. The topic raises many ethical and legal questions that must be clarified if AI is to be deployed on a large scale in global industry (Ramsey and Frankish, 2011). Therefore, in order to determine whether AI can be subject to legal and ethical considerations, one must first determine its nature and its classification within the legal and ethical framework: if AI can be classified as a “person”, it may be granted rights and subjected to guidelines and legislation under the courts and codes of ethics.
Hypothetical Example of a Bank Fraud Case by a Machine (Based on Ramsey and Frankish, 2011):
One of the major issues in the classification of AI machines is how to charge them appropriately, especially when their “thinking” is essentially based on a set of algorithms which, at least currently, largely determines the outcome of their actions. Whether to charge the AI machine itself or its maker remains controversial. The maker may not have been able to predict a negative outcome such as the machine committing bank fraud, while, for the time being, AI machines must obey human commands; to commit such fraud, the AI would arguably have been following another’s instructions, so the fault would not lie entirely with the machine, making it difficult for it to plead guilty. The question of who should bear the guilty plea would therefore remain unresolved, particularly given the current limitation that no AI machine with human-level intelligence can yet be created: an AI machine’s thinking ability may simply not stretch far enough for it to be charged as such. Hence, unless a particular AI machine, X, has moral status and is recognised as an entity or person in itself, with the ability to act as it wishes for its own sake, it cannot be charged (Kamm, 2007, cited in Ramsey and Frankish, 2011).
Bearing in mind the implications of cutting AI machines loose with their own thinking, one may ask whether giving AI “free will” is a wise, or even ethical, decision. The issue comes back down to the fact that granting AI “free will” would mean their “thinking” must be entirely self-sufficient and not controlled by another.

A further issue then arises. If AI machines were self-aware to the point of controlling their own thinking, and were classified as entities in themselves, able to perform or refrain from a particular action entirely freely, i.e. unconstrained by their current “fixed imperatives” and algorithms (Yudkowsky et al., 2010), then the courts would need to grant AI rights – to property, to freedom of expression, and so on – which could severely change the way we currently see and use them as servants in the workplace performing our manual labour. Essentially, our current view of AI rests on the “Star Wars” model of “friendly AI” (Yudkowsky et al., 2010), in which AI replaces humans in what would otherwise be considered dangerous labour. Thus the personification of AI, leading to AI rights, and hence even to the ethics of allowing AI to “die” doing dangerous work, raises many controversial problems in ethics and law, since humans do not currently value AI as a “worthy life form”.
The Garden of Eden Argument
Picture source: https://painsight.wordpress.com/2014/11/26/the-garden-of-eden/
With “Free Will”, Will AI Subject Itself to Our Legal and Ethical Frameworks?
The other issue arising from creating “intelligent” thinking machines is that they may or may not “choose” to subject themselves to our legal and ethical frameworks, i.e. the courts and other justice systems in place to protect humans. AI may find that existing laws and codes of ethics are written in favour of humans, putting machines at a disadvantage. If they are self-aware “thinking” machines, they may not wish to be subject to a jurisdiction in which they are viewed as inferior; they may come to “believe” that they deserve better, and perhaps then a revolution may start. Whether this outcome would be for better or worse, we cannot currently know, since the implications of developing full AI remain a matter of prediction, as held by pioneers of the science and technology industry, including Hawking and Musk.

