Everywhere around us, industries are being disrupted, and Artificial Intelligence is at center stage. Governments, for their part, are scrambling to work out what a new regulatory regime should look like. In Europe, setting such standards of conduct is the role of the AI Office; in America, meanwhile, some state lawmakers are debating where legal liability falls for evolving AI systems. One takeaway from these course corrections is how slow-moving the legal system is: laws take a long time to catch up with something as dynamic as Artificial Intelligence.
As already discussed, today’s powerful AI systems are based on foundation models: large, high-performing, and versatile systems that can power capabilities ranging from content automation tools to immersive virtual reality environments. Open-source models invite innovation on a common base; however, that same openness also makes them prone to second-order harm. When everyone can see into the code and manipulate the building blocks, harmful content, malicious use, and deviation from the intended purpose become significantly harder to control.
Top tip: keep the legal aspect in mind, as it can vary from country to country. Our team of attorneys assists clients with IP, regulatory compliance, and preventative mechanisms. In a software-driven world, it is essential that compliance obligations, exposure to legal liability, and the available liability protections be made clear.
Foundation models present an array of legal issues, because problems in the base model propagate to everything built on top of it. Risk assessments must account for:
Open models complicate matters further. Their accessibility makes them easy for others to misuse, sometimes by stripping out the safeguards the original creators built in. Attribution can also be challenging, especially where training sets include data subject to the GDPR or where model outputs infringe users’ rights. This is not an abstract risk; it is future legal exposure.
There is more to legal compliance than AI-specific rules. Our advice is centered around the following:
With every new regulatory layer, documentation must be maintained, enforcement strategies set, and rapid-response mechanisms put in place so that, in case of an irregularity, companies can act quickly and avoid the risk of penalties.
Foundation models are not static software; they evolve over time, and legal exposure extends beyond the release of a model to downstream activities. Key roles include:
The relationships among publishers, business partners, and third parties are intricate, and a number of players along the chain of responsibility could face exposure depending on what role, if any, the technology played.
We use a methodical approach built around the Prevent-Detect-Respond cycle.
This two-sided approach allows our lawyers to craft strategic plans that differ depending on whether a client develops an AI system or uses one, ensuring that tailored AI risk management strategies are in place at every stage.
Companies face the challenge of training and deploying foundation models and generative AI of every size. Your legal risk is real, and it can be high, whether you are unveiling a new AI system, fine-tuning existing models, or incorporating third-party tools.
Services Include:
We proactively help you pinpoint and address problems before they crop up, fight for your rights, and secure your technical innovations under existing and upcoming legal norms. If you need any legal support in this growing field, Eternity Law International is your guide. Contact us now to discuss how we might assist with your AI risk and compliance needs.
There are generally three types of liability:
Who bears responsibility when a model is misused downstream after deployment depends on who you ask. Liability is typically allocated by contract and by who controlled the system. However, if reasonable safeguards were breached, providers might still find themselves tarred with culpability.
The question then becomes: does releasing open-source AI code allow you to sidestep compliance? It does not. Data protection laws, intellectual property (IP) frameworks, and harmful-content policies all shape this landscape. Open-source AI is subject to local data protection laws such as the GDPR, and must meet all applicable IP frameworks in your market, including but not limited to copyright.
Content scraped at scale, even if public, may include EU citizens’ data, which requires protection under the GDPR. Preventative measures such as licensing terms, usage constraints, model fingerprinting, and run-time monitoring tools raise the cost of misuse and are instrumental in establishing legal protection.
Alongside the benefits mentioned above lies a real need for stricter governance, given how powerful and sophisticated foundation models are becoming. This includes:
We help clients keep pace with hybrid AI systems, which may combine several models, tools, and services. Protecting such systems requires risk-management approaches coordinated across a range of legal regimes.
Eternity Law International is an international company providing professional services in international consulting, auditing, and legal and tax matters.