Can the Criminal Liability Doctrines of Legal Persons Be Applied to Artificial Intelligence?
Rimkevičienė, Greta
The autonomy and role of artificial intelligence in the world are constantly increasing, and questions related to artificial intelligence attract considerable attention from researchers worldwide. These reasons determined the topic of this thesis: "Can the criminal liability doctrines of legal persons be applied to artificial intelligence?" The object of the research is the criminal liability of artificial intelligence as a legal person. The aim of the thesis is to examine whether, and which, criminal liability doctrines of legal persons can be applied to the liability of artificial intelligence. To achieve this aim, the following objectives were set: to analyse the impact of artificial intelligence and its possible threats; to reveal the elements of the legal personality of artificial intelligence; to analyse the doctrines of criminal liability of legal persons; and, on the basis of the research carried out, to present conclusions on whether the liability of artificial intelligence meets the conditions for establishing the criminal liability of a legal person. The following research methods were used: the historical method (to reveal how the question of the legal liability of artificial intelligence has been addressed so far), the linguistic method (to select the legal sources used in the thesis), and the systematic method (to reveal the problem of the potential legal liability of artificial intelligence). To determine whether the status of a legal person, including criminal liability, can be applied to AI, four main doctrines of the criminal liability of legal persons were examined: respondeat superior, alter ego, aggregation, and collective culture. The results of the research revealed that, in the future, it would be expedient to grant artificial intelligence the status of a legal person in order to apply criminal liability. However, of the four doctrines examined, only two, aggregation and collective culture, could be applied to artificial intelligence.
These conclusions follow from the fact that the respondeat superior and alter ego doctrines require the identification of a natural person who committed the criminal act while acting on behalf of the legal person; therefore, under these doctrines, criminal liability could not be applied to artificial intelligence.
Artificial intelligence was first discussed by McCarthy (1956) during the first academic conference on artificial intelligence, where it was hypothesized that machines could think and perform various tasks like a human being. Since then, the issue of artificial intelligence has attracted increasing attention from scientists, and the technology is developing at a very fast pace. There is still no common definition of artificial intelligence, but in a general sense it can be described as a field of study concerned with machine learning capabilities and the ability to respond with certain behaviours. The autonomy and role of artificial intelligence in the world are constantly increasing, and the related issues attract great attention from scientists around the world. These reasons determined the topic of this thesis: can the criminal liability doctrines of legal persons be applied to artificial intelligence? The object of the research is the criminal liability of artificial intelligence as a legal person. The purpose of the thesis is to investigate whether AI should be subject to the criminal liability doctrines of legal persons. The following objectives were set: to analyse the impact of artificial intelligence and its possible threats; to reveal the elements of the legal personality of artificial intelligence; to analyse the doctrines of criminal liability of legal persons; and, on the basis of the research carried out, to present conclusions on whether the liability of artificial intelligence meets the conditions for establishing the criminal liability of a legal person. Artificial intelligence has long been an area of information technology research, but as more and more AI-based objects appear in everyday human life, the related ethical and legal issues are increasingly being investigated by lawyers.
To the best of the author's knowledge, artificial intelligence has not yet been analysed through the prism of the legal person, so more detailed research is needed. The purpose of this work is to address the scientific problem and answer the question: can the criminal liability doctrines of legal persons be applied to artificial intelligence? It should be emphasized that the actions of systems and robots based on artificial intelligence require very detailed legal regulation defining the legal responsibility of artificial intelligence. To achieve this, however, it is first necessary to determine the subjectivity and other characteristics of artificial intelligence, since legal subjectivity is a prerequisite for any kind of legal liability. Legislation does not keep pace with artificial intelligence technology. The fact that AI-controlled technologies are adapting ever faster to human routine does not mean that they make no mistakes: as is known, fatalities involving self-driving cars have already been recorded. That is why the question of criminal liability for artificial intelligence becomes relevant. The current legal framework reveals a legal problem: it is still not adapted to implementing legal responsibility for artificial intelligence. Hence, legal responsibility still cannot be attributed to artificial intelligence, because from the point of view of civil law it lacks the capacity to act and the elements of legal capacity, as well as criminal intent. The following research methods were used: the historical method (which aims to reveal how the question of the legal liability of artificial intelligence has been addressed so far), the linguistic method (by which the legal sources used in the work were selected), and the systematic method (aimed at revealing the problem of the potential legal liability of artificial intelligence).
To ascertain whether the status of a legal person, including criminal liability, can be applied to AI, four main doctrines of the criminal liability of legal persons were examined: respondeat superior, alter ego, aggregation, and collective culture. Applying legal personality and criminal responsibility to artificial intelligence may become possible in the future. It can be argued that granting legal personality and criminal liability at the present time would not provide the necessary efficiency, because criminal responsibility is designed to change a person's internal state in order to prevent the repetition of criminal activity. However, various sources point to the development of a superintelligence that will exceed 50 percent of human thinking capacity within 60 years. It must therefore be concluded that granting legal personality and applying criminal liability are a realistic means of controlling the criminal acts of artificial intelligence. The results of the study revealed that, in the future, it may be necessary to grant legal personality to artificial intelligence in order to apply criminal liability. However, of the four doctrines examined, only two, aggregation and collective culture, could be applied to artificial intelligence. These conclusions follow from the fact that the respondeat superior and alter ego doctrines require the identification of a natural person who committed the criminal offence; therefore, under these doctrines, criminal liability could not be applied to artificial intelligence.