Is the author of an artificial intelligence program liable for the damage caused by a robot controlled by that intelligence?
Pocevičiūtė, Brigita |
Robots driven by artificial intelligence have been widely used in various fields for many years. Their specificity distinguishes them from other systems in use: they are able to learn from the experience they accumulate and to make independent decisions on that basis. The making of such decisions creates the possibility of damage arising. However, neither international nor national law establishes who should be liable for damage caused by robots driven by such a system. The main goal of this paper is to analyse the limits of the civil liability of the author of an artificial intelligence program for damage caused by a robot driven by artificial intelligence, and to propose appropriate regulatory solutions. To achieve this goal, the tasks set out in the paper must be completed. The subject of the paper is the liability of the author of artificial intelligence to compensate for damage caused by the actions of such a robot. To determine, from a theoretical perspective, whether the author of an artificial intelligence system may be liable for damage caused by a robot controlled by that intelligence, descriptive theoretical and comparative legal methods will be used in an analysis of Lithuanian and foreign case-law as well as scientific and periodical literature. These methods were chosen because they best serve the tasks set in the paper.
To answer the research question as precisely as possible, the paper is structured in four parts: the first part presents the concept of artificial intelligence and the significance of the applicability of such a system; the second part analyses the possibility of artificial intelligence being a legal subject, in order to examine its own capacity to compensate for damage, and examines artificial legal subjects (legal persons) to determine whether robots driven by artificial intelligence could be considered as such; the third part identifies three legal analogues (slaves, animals, minors) which are compared with artificial intelligence in different respects in order to identify the legal subjects responsible for damage caused by such objects; the fourth part presents the branch of law within which the question of damage is examined, and analyses the author's own liability through the relationships of the employer, the user and the author with a robot driven by artificial intelligence. Having examined the collected material and completed the tasks set, a conclusion answering the question will be drawn and recommendations for solving this problem will be proposed.
Computers and robots, like artificial intelligence programs, have been used by humans for many years. As new technologies are developed and expand into everyday life, artificial intelligence is increasingly adapted to human life through robots. Due to its specificity, artificial intelligence stands out from the other modern programs in use. Unlike other programs, artificial intelligence relies on algorithms through which it can understand its surroundings, learn from its own experience and then make independent decisions. This specificity of the program opens up opportunities to answer long-unsolved scientific questions and to reduce everyday problems, as well as to save lives, not only those of medical patients by preventing the spread of fatal diseases, but also in the army by deploying robots instead of soldiers. At present the potential of artificial intelligence appears unlimited, and its developers constantly strive to produce an improved program better than the previous one. Unfortunately, unlike those of other programs, the actions of artificial intelligence cannot be predicted. This uniqueness of the system gives rise to new legal problems whose regulation is still at the idea stage. Therefore, when a robot operating on the basis of such a program performs actions that cause damage, it is only natural to ask: who should be responsible for that damage? Should it be the author of the program, its owner, or perhaps the system itself? Neither established international nor national legal guidelines stipulate who specifically should be liable for damage caused by robots operating on the basis of an artificial intelligence program.
Thus, in order to find the most targeted answer to the problem of the paper, a goal, a subject and four research tasks are set, on the basis of which the chosen research methods will be applied to determine which legal subject should be liable for a tort caused by a robot operating on the basis of an artificial intelligence program.
The main goal of the paper is to analyse the limits of the civil liability of the author of an artificial intelligence program for damage caused by a robot operating on the basis of artificial intelligence, and to suggest appropriate regulatory solutions. In order to achieve this goal, the tasks set in the paper must be completed. The subject of the paper is the liability of the author of artificial intelligence to compensate for damage caused by the actions of such a robot.
To determine whether the author of artificial intelligence may theoretically be liable for damage caused by a robot controlled by such intelligence, descriptive theoretical and comparative legal methods will be used in an analysis of Lithuanian and foreign case-law as well as scientific and periodical literature. These methods were chosen because they best allow the author of the paper to achieve the tasks set in it.
In order to answer the topic of the paper as accurately as possible, it consists of four sections: in the first section, the concept of artificial intelligence and the significance of the applicability of such a system are presented; in the second section, the capability of artificial intelligence as a legal subject is analysed, in order to examine its own capacity to compensate for damage, and artificial legal subjects (legal persons) are examined to determine whether robots operating on the basis of artificial intelligence could be considered as such; in the third section, three legal analogues are presented which are equated with artificial intelligence in different respects in order to identify the legal subjects responsible for the damage caused by such analogues. A slave is equated with artificial intelligence through the prism of intelligence. Wild and domestic animals are equated from the aspect of training; to analyse this part in greater detail, wild circus animals, which are trained like domestic animals but remain wild, are also considered. Minors, whose legal personality is considered full only after a certain age, are neither slaves nor animals, yet the damage they cause is regulated differently from that caused by adults. The fourth section introduces the area of law chosen for analysing the issue of damage, and examines the author's liability through the relationships of the employer, the user and the author with a robot driven by artificial intelligence.
Having analysed the collected material, the author will seek to complete the tasks set in the paper. Specifically, having addressed the four tasks using the chosen research methods, a conclusion will be drawn with the aim of providing a detailed answer to the question raised in the paper: should the author of the system on whose basis a robot operates be responsible for the damage caused by that robot? Finding an answer to this question is important because such robots are nowadays used not only in narrow scientific fields but are also readily available to ordinary people, so it is essential to know who answers for any damage that may arise. The paper also aims to provide recommendations to legislators that would help resolve issues related to damage caused by robots more effectively, more quickly and more fairly for all parties involved.