Is there an obligation to disclose that content was created with the help of artificial intelligence?
Štareikaitė, Ugnė
The European Union's AI Act is the first piece of legislation in history to regulate AI; it establishes a regulatory framework and aims to ensure that AI systems are safe, comply with the law, and respect the EU's fundamental rights and values. However, the Act applies only to AI deployers and developers and their AI systems, and makes no mention of an obligation to disclose the use of AI in content creation. The use of AI in content creation can give rise to legal uncertainty and raise ethical concerns such as transparency, fairness and respect for human beings, while non-disclosure of AI use can create challenges and problems in, for example, scientific literature, law, business, journalism and art. AI can supply false and misleading information to the public, and AI technologies can track a person's behaviour, analyse their habits and identify their motivations in order to offer services or products tailored to that person's needs. Many AI governance initiatives focus primarily on AI ethics, that is, on how to deploy and maintain AI systems, rather than on the human rights violations that arise or on how AI systems should be used properly. Courts have held that a work created by AI cannot be recognised as a work unless it contains an element of human authorship. In 2024, Utah enacted a major AI statute governing private-sector use of AI: if a business or individual uses AI to communicate with a person in connection with a commercial activity regulated by the Utah Consumer Protection Act, it must disclose to that person that they are communicating with an AI, not with a human being. In this light, the law should require disclosure of the use of AI in content creation, thereby reducing violations of personal privacy and consumer rights. The first part of this work analyses the concept, development and legal framework of AI.
The unauthorised collection of data violates personal data protection, raises ethical and security concerns, leads to privacy violations and makes AI an opaque technology, and these problems need to be addressed. When it comes to new technologies, people take great interest in the opportunities and benefits they offer, but are often less aware of their potential limitations. AI technologies are therefore an important topic to explore today, both to gain a better understanding of AI systems and to address the gaps in legislation. In summary, AI "black boxes" can cause opacity as well as privacy and security breaches for individuals, yet the legislation in force addresses only AI deployers and developers. The AI ethical guidelines that have been developed (the UNESCO guidelines, the European Commission guidelines, the transparency provisions of the EU AI Act, the GDPR, the Utah AI statute and the US Algorithmic Accountability Act) often focus on the ethics of AI and on how to develop AI technologies transparently, whereas the main problem with AI technologies is use that leads to human rights violations. The novelty of the technology may pose significant risks to the privacy and security of users' personal information through inaccurate, unfair, biased or discriminatory decisions affecting them; it is therefore important to enact legislation that specifies the lawful, fair and transparent use of AI not only for deployers and developers, but also for other persons. The second part examines the use of AI in scientific literature, law, journalism, art and business, as well as copyright infringement. According to the ATTIA, a scientific work or piece of literature produced by an AI application cannot be recognised as a scientific work, because an AI is not a natural person. Researchers' studies indicate that AI can reduce essential human interaction in education.
Data privacy and security are threatened by the sheer volume of information being collected, and fake, fabricated scientific articles produced by AI threaten public knowledge and trust in science. Legal regulation would therefore help ensure that AI is used ethically and does not violate human rights or data privacy. Besides causing ethical problems, AI applications can also undermine public trust, since AI can generate false information that spreads in the public domain. They also open the door to copyright abuse of journalists' original work, as AI applications can use text published by other journalists to create content. The use of AI applications in the arts raises serious copyright concerns, because the algorithms of AI systems rely on data from existing works to create art. Companies currently have to protect the data of their users, customers and employees themselves, so they should adopt sustainable internal policies to prevent the leakage of company or customer personal information to third parties. Utah has already passed an AI law aimed specifically at businesses, requiring them to disclose to an individual that they are interacting with an AI system and not with a natural person. The decision-making of AI systems is not transparent and involves problems of information leakage and false or fake information, which may violate the basic principles of accountability, ethics and integrity in the legal system. Drawing on practice in the United States, Lithuanian courts could likewise require disclosure of the use of AI, for example the name of the AI system, the manner in which it was used, or the exact parts of a work that were created or researched with AI. The last part deals with the consumer's right to know about artificial intelligence and with copyright issues.
AI systems are prone to "inadvertently" plagiarising copyrighted works, using other people's protected material unethically and in ways that infringe copyright. For example, the AI art programme Kurdma Art relies on "templates" of copyrighted artworks to create AI art. Because of their black-box nature, "hallucinations" and data-collection problems, the use of AI systems in the arts, scientific literature, journalism, law and business can lead to copyright infringement, for instance by presenting users with authors' quotations without disclosing that they are quotations. Nor does the law protect the public from AI systems' collection and misuse of private information. This raises the possibility of leaks of confidential and personal information and of breaches of user data, since AI systems do not delete the information used in the system. It is therefore essential to ensure that the use of AI is disclosed. The aim of this research is to examine and assess whether there is a duty to disclose that content has been created by artificial intelligence.