
Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policymakers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or on emergent clustering in training data, today’s AI systems can be applied only in well-specified problem domains; they still lack the context sensitivity of a typical toddler or house pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and pressing problems posed by narrow AI, including misconceived and unevenly distributed benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and about supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate; hence the regulation of digital platforms and big data is part of the challenge of governing AI. Many platform offerings are, in fact, themselves deployments of AI tools. Focusing on AI alone therefore distorts the governance problem.
  • Simply declaring objectives, whether assuring digital privacy and transparency or avoiding bias, is not sufficient. We must decide what those goals will actually mean in operational terms.
  • The issues and choices will differ by sector. The consequences of bias and error in a medical or criminal justice domain, for example, differ from those in retail sales.
  • The application of AI tools to public policy decision making, in domains ranging from transportation design to waste disposal to policing, requires great care. There is a substantial risk of focusing on efficiency when what is actually required is public debate about what the goals should be in the first place. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. A central question is whether public investment should concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement AI.
  • Deciding on goals, and on a strategy to implement them, will be difficult enough within a single community, let alone across regional or international communities; any agreement that goes beyond simple statements of objectives is very unlikely.
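
A minimal sketch, not from the paper, can make the abstract’s point about context-dependent statistical inference concrete. The dataset, feature names, and numbers below are invented for illustration: a standard scikit-learn classifier learns a spurious correlation present in its training domain and its accuracy collapses once that correlation disappears.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    # Training domain: the label depends weakly on a genuine signal...
    signal = rng.normal(size=n)
    y_train = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)

    # ...but a second, spurious feature tracks the label almost perfectly.
    spurious = y_train + 0.1 * rng.normal(size=n)
    X_train = np.column_stack([signal, spurious])

    model = LogisticRegression().fit(X_train, y_train)
    print("training-domain accuracy:", model.score(X_train, y_train))  # near 1.0

    # Deployment domain: the same concept, but the spurious correlation is gone.
    signal_d = rng.normal(size=n)
    y_deploy = (signal_d + 0.5 * rng.normal(size=n) > 0).astype(int)
    spurious_d = rng.normal(size=n)  # now just noise
    X_deploy = np.column_stack([signal_d, spurious_d])
    print("deployment accuracy:", model.score(X_deploy, y_deploy))  # near chance

The model never “understood” the task; it exploited whatever correlations its training data offered. This is the sense in which today’s AI systems apply only to narrow, well-specified problem domains.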

Science and Technology Innovation Program

The Science and Technology Innovation Program (STIP) serves as the bridge between technologists, policymakers, industry, and global stakeholders.

Digital Futures Project

Less and less of life, war, and business takes place offline. More and more, policy is transacted in a space poorly understood by traditional legal and political authorities. The Digital Futures Project is a map of the constraints and opportunities generated by the innovations around the corner: a resource for policymakers navigating a world they didn’t build.
