Risk assessment for foundation models


Everywhere around us, industries are being disrupted, and Artificial Intelligence is at center stage. Governments, for their part, are scrambling to work out what a new regulatory regime should look like. In Europe, setting and enforcing such standards of behavior is the role of the AI Office; in America, meanwhile, some state lawmakers are debating where legal liability falls for evolving AI systems. One takeaway from these course corrections is the slow, non-agile nature of the legal system: it takes laws a long time to catch up with something as dynamic as Artificial Intelligence.

Foundation Models

  • Risks of foundation models

As already discussed, today’s most powerful AI systems are built on foundation models: large, high-performing, and versatile systems that power capabilities ranging from content automation tools to immersive virtual reality environments. Open-source models invite innovation on a common base; however, that same openness also makes them prone to second-order harm. When anyone can inspect the code and manipulate the building blocks, harmful content, malicious use, and deviation from the intended purpose become significantly harder to control.

Top tip: Keep the legal aspect in mind, as it can vary from country to country. Our team of attorneys assists clients with IP, regulatory compliance, and preventative mechanisms. In a software-driven world, compliance obligations, exposure to legal liability, and available liability protections should all be made clear from the outset.

  • Foundation models: Legal risk assessment

Foundation models present an array of legal issues, because risk is embedded in the very foundations on which they are built. Risk assessments must account for:

  1. Privacy law conflicts
  2. Content regulation vulnerabilities
  3. Cross-border data governance
  4. Jurisdictional uncertainties

Open models further complicate matters. Their accessibility makes them available for misuse by third parties, who may strip out the safeguards the original creators put in place. Attribution can be challenging, especially if training sets include data subject to the GDPR or if model outputs violate user rights. This is not an abstract risk; it is future legal exposure.

Key regulatory considerations

Legal compliance for AI involves more than AI-specific rules. Our advice is centered on the following:

  • GDPR Compliance: Ensuring that training and deployment stages are free of personal data (PII), or that confidentiality is otherwise safely managed.
  • Practical Aspects: Putting enforceable, human-readable privacy terms in place for AI tools that process individuals’ inputs.
  • Policy-Driven Content Management: Addressing synthetic media, disinformation, and non-consensual content creation.

With every new regulatory layer, documentation must be maintained, enforcement strategies defined, and rapid-response mechanisms put in place so that, if an irregularity occurs, companies can act quickly and avoid the risk of penalties.

Who’s Responsible? Understanding the AI Value Chain

Foundation models are not linear software products; they evolve over time, and legal exposure extends beyond the initial release of a model into downstream activities. Key roles include:

  • Base Model Creators: The organizations that build and contribute to the underlying model.
  • Adapters: Engineers who fine-tune or specialize the model for particular tasks.
  • Hosting Providers: Parties that offer the model as a hosted or white-labeled service.
  • Policy and Privacy Bodies: Public authorities and privacy regulators that set the applicable rules.
  • End-User Application Developers: Parties that subscribe to the models to build end-user interfaces or tools.

The relationships between publishers, business partners, and third parties are complex, and a number of players along the chain of responsibility may face exposure depending on what role, if any, their part of the technology played.

Managing Liability Down the Line

We use a methodical approach built around the Prevent, Detect, Respond cycle.

  • Prevent
  1. Accountability Structures: Licensing terms that define permissible, consensual use.
  2. Compliance Auditing: Checking the data the model was trained on for GDPR and copyright violations.
  3. Staged Releases: Gradual rollouts of the model and its defense mechanisms at each step.
  4. Preventing Public Tools from Overpromising or Underperforming: Warning labels and safety testing.
  • Detect
  1. Monitoring Tools: Detecting misuse and drift toward unwanted outputs (see the sketch after this list).
  2. Incident Reporting: Channels for raising legal, compliance, or ethical issues.
  3. Regulatory Tracking: Keeping up with changes in laws across geographies.
  • Respond
  1. Incident Management: Forensic investigation of incidents and holding responsible parties accountable for policy violations.
  2. Public Disclosures: Post-incident transparency requirements.
  3. Policy Enforcement: Implementing takedowns or user bans when needed.
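To make the Detect step more concrete, here is a minimal, illustrative sketch of a run-time output monitor. The blocked-term list, logger name, and functions are hypothetical placeholders, not any particular product's API; a real deployment would rely on trained classifiers, rate limits, and human review rather than a keyword list.

```python
import logging
from datetime import datetime, timezone

# Placeholder policy list: purely illustrative, not a real content policy.
BLOCKED_TERMS = {"credit card dump", "synthetic identity kit"}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-output-monitor")


def check_output(prompt: str, output: str) -> bool:
    """Return True if the output passes the policy check; log an incident otherwise."""
    lowered = output.lower()
    violations = [term for term in BLOCKED_TERMS if term in lowered]
    if violations:
        # The incident record also feeds the Respond phase
        # (forensic investigation, disclosures, takedowns).
        log.warning(
            "Policy violation at %s: terms=%s prompt=%r",
            datetime.now(timezone.utc).isoformat(),
            violations,
            prompt[:80],
        )
        return False
    return True
```

In practice, records produced by checks like this feed the incident-reporting and regulatory-tracking processes described above.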

This dual focus allows our lawyers to craft strategies tailored to whether a client is developing an AI system or using one, ensuring appropriate AI risk management measures are in place at every stage.

Need Legal Support?

Companies face the challenge of training and deploying foundation models and generative AI of every size. Your legal risk is real, and it can be high, whether you are launching a new AI system, fine-tuning existing models, or incorporating third-party tools.

Services Include:

  • AI compliance strategy
  • Legal audit
  • Regulatory analysis
  • Liability analysis and management
  • Contract drafting for AI use

We proactively help you pinpoint and address problems before they crop up, fight for your rights, and secure your technical innovations under existing and upcoming legal norms. If you need any legal support in this growing field, Eternity Law International is your guide. Contact us now to discuss how we might assist with your AI risk and compliance needs.

What are the models of risk assessment?

There are generally three types:

  • Pre-Launch: A static risk assessment carried out before release.
  • Continuous Threat Monitoring: Ongoing monitoring during deployment.
  • Contextual Analysis: Assessment of the specific use case, audience, and legal system in which the model operates.

Opinions differ on who bears responsibility when a model is misused downstream after deployment. Liability is largely allocated through contracts and control over the system; however, if reasonable safeguards were lacking or circumvented, providers may still find themselves held culpable.

The question then becomes: does releasing AI code as open source allow you to escape compliance obligations? It does not. Data protection laws, intellectual property (IP) frameworks, and harmful-content policies all shape this landscape. Open-source AI remains subject to local data protection laws such as the GDPR and must meet the IP requirements of your market, including but not limited to copyright.

What does this mean for training AI models?

Scraped content, even publicly available content, may include EU citizens’ data, which requires protection under the GDPR. Safeguards such as licensing terms, usage constraints, model fingerprinting, and run-time monitoring raise the cost of misuse and help establish the legal footing needed to prevent it.
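By way of illustration only, the sketch below shows how a team might screen a scraped training corpus for obvious personal identifiers before training. The regular expressions and helper names are hypothetical assumptions, and no pattern check substitutes for a proper GDPR lawful-basis assessment and data protection impact analysis.

```python
import re

# Illustrative patterns only: real PII detection needs named-entity recognition,
# human review, and a documented lawful basis, not just regular expressions.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


def screen_corpus(documents):
    """Redact each document and report how many contained apparent identifiers."""
    cleaned_docs = []
    flagged = 0
    for doc in documents:
        cleaned = redact_pii(doc)
        if cleaned != doc:
            flagged += 1
        cleaned_docs.append(cleaned)
    print(f"{flagged} of {len(cleaned_docs)} documents contained apparent identifiers")
    return cleaned_docs


if __name__ == "__main__":
    sample = [
        "Contact jane.doe@example.com or call +44 20 7946 0958.",
        "No personal data here.",
    ]
    print(screen_corpus(sample))
```

Screening of this kind is one documented mitigation a provider can point to when demonstrating that reasonable safeguards were applied during training.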

Preparing for the Next Wave of AI Governance

Beneath the benefits mentioned above lies a real need for stricter governance, given how powerful and sophisticated foundation models are becoming. This includes:

  • Defining Distinct Production Release Risk Gates
  • Cultivating International Cooperation on Adherence to AI Standards
  • Building Scalable and Reliable Real-Time Data Processing

We help clients keep up with hybrid AI systems, which may combine several models, tools, and services. Protecting such systems requires coordinated risk management drawing on a range of legal frameworks.
