How best to regulate Artificial Intelligence
Insights from governing complex markets can inform AI regulation using partitions, transparency, control points and accountability
Artificial intelligence (AI) has exploded in capability in recent years, offering unprecedented advances in fields such as computer vision, language processing, robotics and more. However, the speed of AI development has also raised alarms about potential risks from uncontrolled system propagation. As AI’s capability grows, calls for regulation have increased to ensure safety, accountability and transparency. Unfortunately, standard regulatory approaches centred on ex-ante impact analysis and risk assessment are unlikely to work.
While AI promises immense societal benefits, its uncontrolled growth presents risks ranging from systemic breakdowns to loss of privacy and agency. One danger is the potential for “runaway AI” that recursively self-improves beyond human control. Once such a system is advanced enough to reinvent itself without human input, its objectives could quickly become misaligned with human welfare. Integrated into critical infrastructure, compromised AI could also wreak havoc by disrupting utilities such as power grids and telecommunications. Malevolent systems could hack interconnected grids, causing cascading failures. Further risks arise from AI-driven cyberattacks compromising national security systems, or autonomous weapons unleashed in warfare. Unchecked surveillance poses the threat of AI continuously monitoring individuals to predict and manipulate behaviour, even generating false, simulated realities.
Currently, there are two strategies for controlling AI. Despite an executive order issued on October 30, 2023, the United States has largely taken a laissez-faire approach, relying predominantly on industry self-regulation. In contrast, the European Union’s Artificial Intelligence Act (2023) takes a more prescriptive approach of classifying AI systems based on risk perceptions and imposing graded regulatory requirements accordingly. The problem is that this approach only works for static, linear systems with predictable risks. AI exhibits the qualities of complex adaptive systems (CAS), in which components interact and evolve in nonlinear ways. This can lead to butterfly effects, where small changes cascade disproportionately through AI systems. Similarly, its evolutionary trajectory cannot be predicted through reductionist thinking.
Thus, regulating AI necessitates a different framework that appreciates its complex adaptive nature. Boundary conditions, real-time monitoring, guided evolution and collaborative governance are vital. The goal is not to meticulously regulate AI’s arc over decades. Rather, it is to institute hard guardrails/partitions, oversight mechanisms, and feedback loops to course-correct as AI adapts in unanticipated ways.
AI systems with dynamic interactions between components, emergent behaviour and non-deterministic evolutionary paths exemplify CAS. Their multifaceted feedback loops, susceptibility to nonlinear phase transitions and sensitivity to initial conditions defy forecasting. This uncertainty underscores the need for an alternative regulatory approach.
We propose a third approach based on CAS thinking with five principles:
First, guardrails and partitions should establish clear boundary conditions to constrain undesirable AI behaviours. Hard “guardrails” should ensure that AI systems don’t steer into obviously risky territories such as nuclear weapons. To prevent systemic failures, it’s essential to erect “partition walls” between distinct AI systems. This partitioning strategy is akin to firebreaks in a forest, preventing one localised malfunction from cascading into a larger catastrophe. Importantly, these partitions should be agnostic to risk perception. Even AI systems with supposedly benign, routine functions should be isolated. Strictly partitioning different AI systems limits the risk of any single compromised system infecting others.
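To make the partition idea concrete, here is a minimal sketch of a deny-by-default “partition wall” between AI systems. The class and system names (PartitionPolicy, billing-assistant, grid-control-ai) are illustrative assumptions, not part of any existing standard.

```python
from dataclasses import dataclass, field


@dataclass
class PartitionPolicy:
    # Deny-by-default "partition wall": a system may only talk to peers it is explicitly allowed to.
    allowed_links: dict = field(default_factory=dict)

    def permit(self, caller, callee):
        # Record a deliberate, auditable exception to the default isolation.
        self.allowed_links.setdefault(caller, set()).add(callee)

    def can_call(self, caller, callee):
        return callee in self.allowed_links.get(caller, set())


policy = PartitionPolicy()
# Even a "benign" routine system stays isolated unless a link is explicitly justified and recorded.
policy.permit("billing-assistant", "invoice-database")

print(policy.can_call("billing-assistant", "invoice-database"))  # True: explicitly permitted
print(policy.can_call("billing-assistant", "grid-control-ai"))   # False: the firebreak holds
```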
Second, manual overrides and chokepoints should be mandated in critical infrastructure, providing necessary human control. Multi-factor authentication and authorisation protocols requiring approvals from credentialed humans should provide checks and balances. Hierarchical governance structures allow intervention at key technical junctures to halt uncontrolled propagation. Note that this requires specialised skills and dedicated attention.
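A minimal sketch of such a chokepoint follows, assuming a quorum of two credentialed approvers; the approver names, quorum size and action strings are hypothetical.

```python
APPROVAL_QUORUM = 2  # assumed: at least two distinct, credentialed humans must sign off


def is_credentialed(approver):
    # Placeholder for a real authentication and authorisation check (e.g. hardware token plus role).
    return approver in {"grid-operator-1", "grid-operator-2", "safety-officer"}


def execute_critical_action(action, approvers):
    # Refuse any high-impact action unless a quorum of distinct, credentialed humans approves it.
    valid = {a for a in approvers if is_credentialed(a)}
    if len(valid) < APPROVAL_QUORUM:
        return f"BLOCKED: '{action}' needs {APPROVAL_QUORUM} approvals, got {len(valid)}"
    return f"EXECUTED: '{action}' approved by {sorted(valid)}"


print(execute_critical_action("reroute-power-load", ["grid-operator-1"]))                    # blocked
print(execute_critical_action("reroute-power-load", ["grid-operator-1", "safety-officer"]))  # executed
```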
Third, transparency and “explainability” requirements are imperative. Openly licensing core algorithms enables external audits by allowing full inspection. Implementing “AI factsheets” detailing training data, metrics, uncertainties and other parameters fosters informed and accountable adoption. Continuously monitoring black-box systems via AI debugging tools provides dynamic traceability.
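By way of illustration, an “AI factsheet” could be as simple as a structured record published alongside the system. Every field name and value below is a hypothetical example, not a mandated schema.

```python
import json

# Hypothetical factsheet for an illustrative system; the fields echo the items listed above:
# training data, metrics, uncertainties and other parameters.
factsheet = {
    "system_name": "loan-screening-model",
    "intended_use": "first-pass screening of retail loan applications",
    "training_data": {"source": "anonymised 2015-2022 loan book", "records": 1200000},
    "evaluation_metrics": {"auc": 0.87, "false_positive_rate": 0.06},
    "known_uncertainties": ["performance unverified for self-employed applicants"],
    "human_override_available": True,
    "last_external_audit": "2023-09-30",
}

# Publishing this artefact gives adopters, auditors and regulators a stable object to inspect.
print(json.dumps(factsheet, indent=2))
```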
Fourth, AI’s accountability lines must be clear. Predefined liability protocols are vital, given that legal determination often lags behind technological advancement. In the event of malfunctions or unintended outcomes, an entity or individual should always be held accountable. This inserts ex-ante “skin in the game”.
Last, given the rapid evolution of AI technology, relying solely on traditional, slow-moving legal systems could be inadequate. Instead, the establishment of a specialist regulator, empowered with a clear mandate, becomes crucial. This body, akin to a nimble task force, can adapt and respond quickly to the ever-changing landscape.
Financial markets are an example of a complex system with similar systemic risks. Yet proactive systems-based thinking has led to workable regulation. Establishing a dedicated regulator (for instance, SEBI) provides specialised oversight. Transparency requirements such as financial statements and auditing provide traceability akin to algorithmic explainability standards. Circuit breakers act as chokepoints to halt market crashes before they propagate. Liability systems hold individual directors accountable for company actions, similar to AI developer responsibility protocols.
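The circuit-breaker analogy translates naturally into an AI-side chokepoint. The sketch below is illustrative only: the 20% deviation threshold and the cooling-off period are assumptions loosely borrowed from equity-market practice, not a prescribed rule.

```python
class CircuitBreaker:
    """Halt automated action when behaviour deviates too far from an agreed baseline."""

    def __init__(self, max_deviation=0.20, cooling_off_steps=10):
        self.max_deviation = max_deviation          # assumed tolerance, e.g. 20%
        self.cooling_off_steps = cooling_off_steps  # assumed pause before autonomy resumes
        self.halted_for = 0

    def check(self, baseline, current):
        # Returns True if the system may keep acting autonomously.
        if self.halted_for > 0:
            self.halted_for -= 1
            return False
        if baseline and abs(current - baseline) / abs(baseline) > self.max_deviation:
            self.halted_for = self.cooling_off_steps  # trip the breaker; escalate to humans
            return False
        return True


breaker = CircuitBreaker()
print(breaker.check(baseline=100.0, current=105.0))  # True: within tolerance
print(breaker.check(baseline=100.0, current=140.0))  # False: tripped, human review required
```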
While not a perfect parallel, insights from governing complex markets can inform nuanced AI regulation using partitions, transparency, control points and accountability. Prudent measures today can steer AI’s development responsibly, just as regulations help maintain orderly financial markets.
The intent here is not to definitively solve AI regulation, but rather to provide a new perspective. Given the technology’s dynamic complexity, its regulation must be agile and open to continuous iteration. AI can be steered responsibly amidst uncertainty if conceptualised holistically as a CAS.
Sanjeev Sanyal is member, Economic Advisory Council to the Prime Minister and Chirag Dudani is consultant, EAC-PM. The views expressed are personal