Explainability: Expanding Beyond Transparency in White House Artificial Intelligence Guidance


Guest blog by Steven Moore, Vice President, Global Government Affairs, DataRobot

In early January the White House Office of Management and Budget (OMB) published draft guidance to agencies calling for careful and thoughtful artificial intelligence (AI) regulation. 

The number one principle in the proposed AI guidance is Public Trust in AI. We applaud the White House for highlighting the importance of this concept. Our company has made this a top priority as well. DataRobot is spending ten million dollars to fund forty data scientists, engineers, and ethicists in our Trusted AI initiative. 

The core value of maintaining public trust in government data is a primary reason DataRobot joined the Data Coalition.

One place where OMB might improve the guidance for AI is by introducing the concept of explainability rather than transparency. The fifteen-page document mentions transparency fourteen times, but we at DataRobot feel that transparency falls short of public policy goals. 

Here’s why explainability is key: getting the blueprints for an airplane wouldn’t be as useful as knowing the plane’s maximum speed, range, and required takeoff distance. The plane’s performance is more useful than an analysis of its inner workings. And frankly, nobody asks for blueprints or performance specs before trustingly boarding a plane. Explainability means identifying and answering the same kinds of key questions — questions that explain algorithmic behavior and engender trust.

In this respect, AI is no different from an airplane. Pulling the curtain back on a model provides no additional trust in AI outcomes, even though doing so would count as full transparency.

The reason transparency is mentioned so frequently in the guidance is that AI seems like a black box. Information goes in and decisions come out. Some AI users do rely on black box solutions. In fact, a recent DataRobot survey of more than 350 U.S. and U.K. AI professionals found that a third of those surveyed still use black box AI systems – meaning they have no visibility into how the data inputs of their AI solutions are being used.

We believe that in order to trust AI, it has to be explainable – or the opposite of a black box. This involves telling the public when AI is being used in decisions that impact their lives and revealing the algorithmic drivers behind those decisions. If someone doesn’t qualify for a loan, for example, they should understand the reasoning behind that decision. At DataRobot, we believe all AI requires a human-friendly explanation that anyone can understand. 
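The loan example above can be sketched in code. The following is a hypothetical illustration, not DataRobot’s product or methodology: a toy linear credit model whose features, weights, and baseline values are invented, showing how each input’s contribution to a decision can be surfaced as a human-readable reason.

```python
# Hypothetical sketch of per-decision "reason codes" for a toy linear credit model.
# All features, weights, and values below are invented for illustration only.

def explain_decision(weights, applicant, baseline):
    """Score an applicant and rank each feature's contribution to the score."""
    contributions = {
        feature: weights[feature] * (applicant[feature] - baseline[feature])
        for feature in weights
    }
    score = sum(contributions.values())
    # Sort drivers by how strongly they pushed the score down (toward denial).
    reasons = sorted(contributions.items(), key=lambda item: item[1])
    return score, reasons

# Invented example values -- not real credit criteria.
weights = {"income": 0.5, "debt_ratio": -2.0, "late_payments": -1.5}
baseline = {"income": 50.0, "debt_ratio": 0.3, "late_payments": 0.0}
applicant = {"income": 42.0, "debt_ratio": 0.6, "late_payments": 2.0}

score, reasons = explain_decision(weights, applicant, baseline)
decision = "approved" if score >= 0 else "denied"
print(f"Decision: {decision} (score {score:+.2f})")
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```

For this invented applicant, the output lists below-average income as the strongest driver of the denial, followed by late payments — the kind of plain-language reasoning a loan applicant could actually act on, regardless of how complex the underlying model is.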

Speaking for early-stage AI innovators, DataRobot welcomes the White House’s call for a regulatory approach that is wary of stifling innovation. It echoes the landmark 1996 Telecommunications Act, which established the regulatory playing field that helped produce five of the largest companies in the world.

Broadly speaking, this landmark legislation had at least two goals for the development of the internet: 1) it opened a regulatory clearing where the internet ecosystem could grow and 2) it created a level playing field on which small internet startups could compete with much larger telecommunications and cable providers. This bipartisan legislation and the accompanying attitude by regulators created an environment that has grown the internet industry from virtually nothing in 1995 to more than 10% of the American economy today. 

We are pleased that the White House is giving more than 8,700 AI start-ups the same regulatory space that allowed internet companies to grow to one of the largest industries in the world. We are hopeful that the OMB guidance is a harbinger of the second goal of the 1996 Telecommunications Act – creating a level playing field so early-stage companies can compete fairly against larger companies. 

In contrast to the dot-com-era internet startups that benefited from the light regulation of 1996, the ethos behind today’s early-stage AI companies is to move fast… and carefully consider the consequences of our actions. That ethos permeates the AI startup ecosystem. Venture capitalists are using their influence to encourage AI startups to put ethics at the forefront of innovation. WIRED Magazine’s editor-in-chief describes how his publication has gone from being a champion of technological change to “looking at the way algorithms are changing the way we behave for good or for ill.”

It’s clear the use cases for how AI can improve society may be endless. DataRobot’s platform is used by 30% of the Fortune 50, many of the largest banks in the world, and several government agencies. We have worked with a USAID-funded NGO to use AI to help predict which water sources will fail in Africa. Our team is working in Chile to use AI to reduce the time it takes for firefighters to identify potentially devastating forest fires. At a time when military suicides are high, we have collaborated with one of the military services to predict suicide attempts by active duty warfighters. 

Many of our government customers’ first experience with AI is working with DataRobot. For these first-time customers we put ethics before technology. We take our clients through a process to refine their organizational values concerning AI, then build their technology around that. 

The new White House AI Guidance creates a space for innovation in much the same way the Telecommunications Act of 1996 did. Transparency does not equal explainability, nor does it fundamentally lead to ethical development. Based on DataRobot’s success with thousands of AI deployments, we promote the ethical development of AI through explainability in a way that will complement the public policy goals of the Administration’s Guidance.