
Artificial intelligence is no longer just a tool for data analysis; it now plays a role in formulating corporate strategy at the board level. This shift raises intricate questions of corporate governance. The issue is not simply whether to adopt AI, but how AI-informed board decisions fit within fiduciary law, a space where long-standing duties of care and loyalty collide with new technology. Boards must learn to deploy AI tools responsibly while still meeting their obligations to the company and its shareholders, a new challenge for directors everywhere.
Fiduciary Duties and AI Decision-Making
Fiduciary duties require directors to act in the best interests of the corporation, which means making decisions that are both informed and loyal. When artificial intelligence enters the equation, it changes how those duties are fulfilled. Directors must not blindly defer to an AI’s output; they must engage with the technology actively in order to discharge their duties. The principles are sound, but the way they are applied must change. Under the Duty of Care, directors must reasonably inform themselves before making a business decision. For AI, that means understanding the tool itself.
- Understanding the Model. Directors do not have to be data scientists, but they do need to grasp an AI system’s purpose, its assumptions, and its limitations. Management should document what data each model draws on and how it was trained so that hidden biases do not slip through (see the sketch after this list).
- Questioning the Output. Interrogating what the system produces is a critical part of due diligence. Directors should treat AI recommendations like advice from a consultant, not like a command. They need to probe the results rather than assume they are sound.
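To make that inquiry concrete, a board could ask management to keep a structured documentation record for every AI tool it relies on. The Python sketch below is a hypothetical illustration; the record fields and example values are assumptions, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Hypothetical documentation a board might require for each AI tool.

    Field names here are illustrative assumptions, not a standard.
    """
    name: str
    purpose: str                  # the business question the model answers
    training_data: str            # provenance of the data the model learned from
    known_limitations: list[str]  # conditions under which output is unreliable
    last_validated: str           # date of the most recent independent validation

# Example: a record directors could review before relying on a forecast.
demand_model = ModelRecord(
    name="demand-forecast-v3",
    purpose="Quarterly demand projection for capital allocation",
    training_data="2015-2023 internal sales data; no post-2023 market shifts",
    known_limitations=["untested in high-inflation regimes"],
    last_validated="2024-06-30",
)
print(demand_model.known_limitations)
```

A record like this gives directors something specific to question: stale validation dates and narrow training data are exactly the limitations the duty of care asks them to surface.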
The Duty of Loyalty requires that directors act free of personal conflicts of interest. AI introduces a new kind of conflict: a model may carry biases from its creators or from the data it was trained on. If those biases favor outcomes that are not in the company’s best interest, the duty of loyalty can be breached unintentionally. Boards are responsible for ensuring that AI systems are aligned strictly with the good of the company.
Board Oversight and Accountability for AI Systems
Proper board oversight is a cornerstone of corporate governance. As companies turn to AI to aid their most important decisions, the board’s oversight role becomes even more critical. Directors remain ultimately responsible for the decisions made, with or without the assistance of AI; they cannot delegate the work of judgment to an algorithm. A clear governance framework for AI is necessary.
Such a framework should specify how the company selects, tests, and monitors its AI systems. The board should set standards for AI use and verify that management follows them, which includes assigning roles and responsibilities. Who answers when an AI system makes a mistake or causes harm? The board needs those answers before a crisis hits. Regular reporting on AI performance and risk should be standard at board meetings; that ongoing conversation keeps the technology from becoming a “black box” that is out of sight and out of mind. One simple way to enforce such a reporting cadence is sketched below.
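This is a minimal sketch of how such a standard can be made operational rather than aspirational: it flags AI systems that have missed their review date. The 90-day cadence and the system names are placeholders, not regulatory requirements.

```python
from datetime import date, timedelta

# Assumed review cadence; 90 days is a placeholder, not a legal requirement.
REVIEW_INTERVAL = timedelta(days=90)

def overdue_for_review(last_reviewed: date, today: date | None = None) -> bool:
    """Return True if an AI system has gone too long without a review."""
    today = today or date.today()
    return today - last_reviewed > REVIEW_INTERVAL

# Example: a board pack could list every system that fails this check,
# keeping AI oversight on the regular agenda instead of out of sight.
systems = {"credit-scoring": date(2024, 1, 15), "demand-forecast": date(2024, 9, 1)}
overdue = [name for name, reviewed in systems.items()
           if overdue_for_review(reviewed, today=date(2024, 10, 1))]
print(overdue)  # ['credit-scoring']
```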
Transparency and Explainability of AI in Corporate Governance
One of the biggest problems with advanced AI is its inscrutability. Some models are so intricate that even the people who built them cannot always explain why the model does what it does. This “black box” problem is a fundamental challenge to good corporate governance. If a board cannot articulate why it made a major strategic decision, it may not be fulfilling its duty of care. Shareholders and regulators alike expect to understand corporate moves, at least the larger ones.
Explainability is the answer. Boards should demand AI systems that can produce transparent, comprehensible rationales for their recommendations. That way, directors can review the reasoning, spot potential mistakes, and make an informed decision. It also builds a defensible record: if a decision is later challenged in court, the board can point to a rational basis for its action grounded in both human oversight and technological analysis. Without an explanation, directors risk being held accountable for decisions they never fully understood. One way management might assemble such a record is sketched below.
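The sketch makes simplifying assumptions: it uses a scikit-learn model’s feature_importances_ attribute as a stand-in for a fuller explainability method (such as SHAP values), and the feature names and toy data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def decision_rationale(model, feature_names, top_n=3):
    """Report the features that most influenced the model, for the board record."""
    importances = model.feature_importances_
    ranked = np.argsort(importances)[::-1][:top_n]
    return [(feature_names[i], round(float(importances[i]), 3)) for i in ranked]

# Toy example: fit a model, then file its top drivers alongside the
# recommendation so a later legal challenge meets a documented rationale.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(decision_rationale(model, ["price", "demand", "churn", "cost"]))
```

Even a rough rationale like this, filed with each recommendation, is far more defensible than minutes that say only that the board “relied on the model.”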
Liability Scenarios in AI-Driven Mismanagement
Using AI without adequate oversight can have dire legal and financial consequences, including shareholder suits against directors for breach of fiduciary duty. As companies expand their use of the technology, common risk scenarios are beginning to emerge, each spotlighting a governance failure that can expose the board to liability. Here are a few possible scenarios:
- Faulty AI Model. A board green-lights a multi-million-dollar investment on the strength of a market prediction from a malfunctioning AI. The model omitted a key variable, and the investment fails. Shareholders could sue the directors for failing to adequately vet the AI tool, a breach of the duty of care.
- Biased AI Output. An AI used for hiring or promotion decisions shows bias against a protected group. This leads to a discrimination lawsuit and damages the company’s reputation. The board could be held liable for failing to ensure the AI system was fair and unbiased.
- Over-reliance on Automation. A board becomes too dependent on an AI for monitoring financial compliance. The AI misses a sophisticated fraud scheme that a human likely would have caught. The board may be liable for abdicating its oversight responsibilities.
- Failure to Monitor and Update. The board implements an AI system but fails to ensure it is regularly updated. Market conditions change, yet the AI’s stale training data keeps driving poor strategic advice. This neglect could be seen as a clear breach of the duty of care; a routine drift check, like the one sketched after this list, is the kind of control that could catch it.
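That last scenario lends itself to a simple technical control. The sketch below is one hedged illustration, assuming SciPy is available: it compares recent model inputs against the training distribution with a two-sample Kolmogorov-Smirnov test and escalates when they diverge. The 0.05 threshold and the single-feature check are simplifying assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def input_drift_detected(training_sample, recent_sample, alpha=0.05):
    """Two-sample KS test: True if the two distributions plausibly differ."""
    _statistic, p_value = ks_2samp(training_sample, recent_sample)
    return p_value < alpha

# Example: market conditions shift, so live inputs no longer resemble
# the data the model was trained on years earlier.
rng = np.random.default_rng(0)
training = rng.normal(loc=0.0, scale=1.0, size=1000)  # pre-shift conditions
recent = rng.normal(loc=0.8, scale=1.0, size=1000)    # post-shift conditions
if input_drift_detected(training, recent):
    print("Escalate to the board: inputs have drifted; schedule a retraining review.")
```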
Regulatory and Case Law Developments
The legal terrain for AI is still being mapped. Regulators and courts are working to catch up with the pace of innovation in the tech industry, and there is not yet a substantial body of case law addressing AI in the boardroom specifically. Courts will instead apply established doctrines, such as the business judgment rule, to these new conditions. That rule insulates directors from liability for errors of judgment so long as they acted on an informed basis, in good faith, and without a conflict of interest.
Regulators are also beginning to act. New regimes, such as the European Union’s AI Act, are establishing rules for high-risk AI systems. Even where these regulations do not address boards directly, the standards of care they set may influence how courts rule going forward. Boards should track these developments closely: establishing robust AI governance practices now can help shield directors from future liability and demonstrate a commitment to responsible innovation.
Conclusion
AI holds real promise for board-level decision-making, but it also carries serious legal risk. The basics of fiduciary duty remain the same: directors are still obligated to act carefully, loyally, and in the best interests of the enterprise. What must change is how they discharge those duties in an automated world. Boards should treat AI as a powerful tool to be handled thoughtfully, not as a replacement for human judgment. Proactive vigilance, a demand for transparency, and healthy skepticism give boards the best chance of navigating this new terrain successfully.