UGA Law Professor Bruner presents “Managing Fraud Risk in the Age of AI” at the National University of Singapore (NUS)

Professor Christopher Bruner, the Stembler Family Distinguished Professor in Business Law and a newly appointed Faculty Co-Director of the Dean Rusk International Law Center, presented “Managing Fraud Risk in the Age of AI” at the National University of Singapore (NUS) in June.

Bruner’s talk was part of a conference titled “Fraud and Risk in Commercial Law,” organized by Professors Paul Davies (University College London) and Hans Tjio (NUS) and hosted by the EW Barker Centre for Law & Business at the NUS Faculty of Law.

Below is Bruner’s presentation abstract:

Artificial Intelligence (AI) applications are widely expected to revolutionize every dimension of business. This paper explores current and potential impacts of AI on corporate management of fraud risk in both operational and compliance contexts. Much attention has been paid to the operational efficiencies that AI applications could enable in numerous industry settings, and such systems have already become central to a range of services in certain industries – notably finance. However, heavy reliance on algorithmic processes can be expected to give rise to a range of risks, including fraud risks. New forms of internal fraud risk, emanating from intra-corporate actors, as well as external fraud risk, emanating from extra-corporate actors, are already placing greater demands on the compliance function and requiring greater corporate investment in responsive AI capacity to keep pace with the evolving risk management environment. At the same time, these developments have already begun to prompt reevaluation of conventional legal theories of fraud, which took shape, in commercial and financial contexts alike, by reference to human actors rather than algorithmic processes.

The paper begins with an overview of growing operational reliance upon increasingly sophisticated AI applications across various industry settings, reflecting the data-intensive nature of modern business. It then explores forms of internal and external fraud risk that may arise from efforts to exploit weaknesses in operational AI – efforts that may themselves involve sophisticated deployment of malicious extra-corporate AI applications, or ‘offensive AI’, as the cybersecurity industry describes it. This can in turn be expected to require responsive corporate efforts in the form of ‘defensive AI’, and the paper describes burgeoning efforts along these lines, as well as the increasing pressure, arising from both corporate law and commercial realities, to devote substantial resources and managerial attention to these dynamics. Finally, the paper analyzes shortcomings of conventional legal theories of fraud in this context. Here the paper assesses the difficulty of applying concepts such as deception, scienter, reliance, and loss causation to algorithmic processes that lack conventional capacity for intentionality and defy conventional explanation of how inputs and outputs logically relate – a reflection of the AI ‘black box’ problem. The paper concludes with proposals to reform corporate oversight duties to incentivize managerial attention to these issues, and to reform conventional legal theories of fraud to disincentivize malicious AI applications.