These days, Artificial Intelligence (AI) is everywhere in human life, while for years the law remained stuck in a transitional phase. That has changed. With the EU AI Act, compliance is no longer optional. The new rules of engagement, launched in November and ranging from transparency compliance and AI copyright to training data rights and intellectual property, are now enforceable, and they come at a significant cost. These aren't academic theories; these frameworks institutionalize the way we will construct, deploy, and rely on AI systems in practice.
The regulation is a response to actual risk. If AI is a lens, it is also a mirror, reflecting power imbalances, exploitation, and opacity. The EU institutions, including the AI Office, set the framework; developers, deployers, and other cultural actors must then operate within this space of compliant, human-understandable AI decision-making, especially as AI-generated content becomes central to digital expression and to the generation of new knowledge, media, and experiences.
The Act called for the creation of an EU AI Office as a central enforcement and coordination body, and this unit is now shaping up to become the world's front-runner in regulating General-Purpose AI (GPAI). Of everything still up in the air for GPAI, perhaps the most important item is the Code of Practice for providers. Under this code, minimum benchmarks for transparency and copyright compliance are set with input from cultural stakeholders.
The objective is to establish definable and enforceable guidelines on how AI systems are meant to be trained, documented, and used — particularly when such AI interacts with protected content, user data, or public trust.
Generative AI has long been treated by the law as a black box, and the AI Act now provides the legislative answer. Transparency is an obligation under Article 50: what matters is not just understanding how models work, but explaining why they produce what they do.
Key legislative demands include:
These aren't mere checkboxes; they are minimum standards within a larger legal ecosystem of licenses, contracts, and regulations that structures our online behavior.
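One way to picture Article 50's marking duty is as a machine-readable disclosure attached to every generated output. The Python sketch below shows a minimal version using a JSON label; the `label_ai_output` helper and its field names are illustrative assumptions, not a schema mandated by the Act.

```python
import json

def label_ai_output(text: str, model_name: str) -> dict:
    """Wrap generated content with a machine-readable disclosure label.

    The field names here are illustrative only; the AI Act requires
    machine-readable marking but does not prescribe this format.
    """
    return {
        "content": text,
        "disclosure": {
            "ai_generated": True,
            "generator": model_name,
        },
    }

record = label_ai_output("A short AI-written caption.", "example-model-v1")
print(json.dumps(record["disclosure"]))
```

In practice, providers may embed provenance metadata directly in the content file rather than carry a sidecar label, but the principle is the same: the disclosure must travel with the content in a form machines can read.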
Article 53 of the AI Act confronts a stark reality: content used and created by artificial intelligence falls under intellectual property law. In particular, it must fit with the 2019 Copyright Directive (EU 2019/790).
The new compliance framework consists of two tiers:
The present system relies on machine-readable opt-outs, typically expressed through robots.txt-style reservations. However, many rights holder groups are proposing opt-in systems, which would require creators to give permission before a work is used for training. Without enforceable compensation mechanisms, the very foundation of our creative economies is at risk, particularly as AI begins to power everything from digital art to immersive virtual reality environments built on pre-existing cultural content.
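A minimal sketch of how such a machine-readable opt-out works today: Python's standard `urllib.robotparser` can read a robots.txt file in which a rights holder disallows an AI training crawler while leaving the site open to others. The file contents, URLs, and the choice of `GPTBot` as the example AI user-agent are illustrative.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt reserving the site against an AI training
# crawler ("GPTBot" is used as an example user-agent) while allowing
# all other crawlers.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The AI crawler is opted out; an ordinary crawler is not.
print(parser.can_fetch("GPTBot", "https://example.org/gallery/work.jpg"))      # → False
print(parser.can_fetch("ArchiveBot", "https://example.org/gallery/work.jpg"))  # → True
```

The fragility of this model is exactly the rights holders' complaint: the reservation only binds crawlers that choose to check it, which is why opt-in and compensation mechanisms are on the table.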
There is growing recognition that creators should be rewarded for their work, even when their contributions are indirect. Otherwise, we may enthrone AI systems that thrive on unpaid labor and automate cultural production into oblivion.
This is more complex for cultural heritage institutions. Article 14 of the Copyright Directive keeps the public domain open by guaranteeing that faithful digital reproductions of public domain works remain freely available, without new rights attaching to them. The Directive also brings museums and archives within the exception for text and data mining (TDM) for scientific research or preservation purposes.
Article 3 codifies this: research institutions and memory organizations can mine text and data from works they have lawfully acquired without asking rightsholders beforehand. This provides legal room to develop new technologies around VR, content creation, and heritage analytics.
However, exempt institutions need to invest in documentation and transparency. Public access to a work does not by itself grant the right to exploit it. This highlights essential practices: codifying good behavior at an institutional level, establishing rules for engaging with commercial platforms, and adopting policies on how such AIs are developed and how consent frameworks are honored.
Moving from the peak of AI hype to AI governance, the EU AI Act brings to life the latent legal risks inherent in every model, prompt, and dataset. It affects AI vendors, cultural institutions, public archives, and software developers.
This is not speculative; it’s structural.
Our law firm offers a full range of legal support for this new era, combining industry-specific know-how with expertise in technology, regulation, and human rights. We assist clients with:
We ensure that the software you use operates within the law, and we build trusted forums for conversations about where software meets society. Whether you are deploying AI tools in the creative sectors or exploring generative applications in research, development, or commercial settings, this is an area of law too complex to navigate without guidance.
We are happy to assist and can be reached for strategic legal assistance in these areas. The future of AI is as much legal as it is technical, and we ensure you are prepared for both.
Eternity Law International is an international company providing professional services in international consulting, auditing, and legal and tax matters.