CAT: PRACTICE
EXECUTIVE SUMMARY


AI Risk and Business Risk


Pricing AI Risk into Financial Markets

With rapid advancements in artificial intelligence (AI) capabilities, there are growing concerns about social, political, and catastrophic risks if AI systems are developed without adequate safety measures. This project investigates integrating AI risk evaluation into the criteria financial institutions use to assess business risk. We see this as a crucial first step in internalizing AI risk into financial transactions.



Reporting standards incentivize corporate action by informing financial institutions of the risk exposure of a potential deal, directing capital flows toward companies that meet evaluative criteria. Adapting these frameworks can encourage industry investment in safe AI practices and reduce business risk. Assets managed by funds with sustainability/impact investing mandates are expected to grow from $18.4 trillion in 2021 to $33.9 trillion by 2026. We propose an AI risk disclosure framework compatible with current financial reporting. Our framework systematically discloses material company information based on best practices in cybersecurity, governance procedure, assurance, deployment, and containment. It includes disclosure requirements for corporate accountability, risk assessment, safety testing, and algorithmic auditing, in line with comparable industry standards in sustainability and financial accounting.

Assuming, conservatively, that 5% of companies under CSR mandates make significant changes to their AI development strategy in response to the new investment criteria, roughly $1.7 trillion of the estimated $33.9 trillion in mandated funds could be channeled by 2026 toward companies that prioritize safe AI practices. If AI-related incidents or malfunctions cause, on average, a 2% loss in company value (through stock price drops, penalties, or lost customers), the new criteria could save companies an aggregate of $678 billion (2% of $33.9 trillion) by 2026.
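The estimate above is a simple back-of-envelope calculation. As a minimal sketch, the figures can be reproduced as follows; the 5% adoption rate and 2% incident-loss rate are the illustrative assumptions stated in the text, not measured quantities:

```python
# Back-of-envelope estimate of capital redirected and losses avoided.
# All dollar amounts are in trillions of USD.

def redirected_capital(total_aum_tn: float, adoption_rate: float) -> float:
    """Capital channeled toward companies prioritizing safe AI practices."""
    return total_aum_tn * adoption_rate

def avoided_losses(total_aum_tn: float, incident_loss_rate: float) -> float:
    """Aggregate company value preserved if AI-incident losses are avoided."""
    return total_aum_tn * incident_loss_rate

AUM_2026_TN = 33.9         # projected sustainability-mandated AUM by 2026
ADOPTION_RATE = 0.05       # assumed share of companies changing AI strategy
INCIDENT_LOSS_RATE = 0.02  # assumed average value lost to AI incidents

print(redirected_capital(AUM_2026_TN, ADOPTION_RATE))   # ~1.7 ($1.7 trillion)
print(avoided_losses(AUM_2026_TN, INCIDENT_LOSS_RATE))  # ~0.678 ($678 billion)
```

Note that both figures scale linearly with the assumed rates, so the estimates are only as strong as the 5% and 2% assumptions behind them.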


By leveraging financial mandates to divert funds and influence corporate priorities, investing frameworks can be a powerful tool to reduce risk from artificial intelligence. Financial incentives, historically potent catalysts for corporate action, will prompt businesses to prioritize AI safety, fostering both public trust and broader acceptance of AI technologies. We analyze how this can stimulate innovation in AI, increase regulatory synergy, and improve the efficiency of markets.

Together with industry and civil society partners, we are developing an implementable AI risk standard for integration with current investment practices.