Through the DATA Act of 2014 (for spending reports) and the proposed Financial Transparency Act (for financial services), the federal government is already embracing the notion that its information should be expressed as data. These transformations were planned in late 2014 as the first and second pillars of the Data Transparency Coalition’s policy agenda.
Yesterday’s Data Transparency Breakfast introduced the third pillar of our agenda and opened a whole new area of possibility for open data supporters. What if the government adopted consistent data formats for Congressional and administrative laws and rules?
The panelists at the latest installment of the Coalition’s Data Transparency Breakfast series, presented by Xcential, answered that question. Data standards are already making legal and regulatory work more accountable, more efficient, and more automatic. But a few simple further steps would magnify that change.
Our event space, provided by Booz Allen Hamilton in their downtown Washington, D.C., office, was packed to the gills with transparency advocates, media, government employees, and professionals interested in turning legislative and regulatory documents into standardized data.
Grant Vergottini, CEO of Xcential, was unable to attend the policy breakfast in person but sent a brief recorded presentation on standards activity in this area. Vergottini has been an advocate of legislative standards for over a decade and explained that the most momentum is behind a unified Extensible Markup Language (XML) standard covering many different legal traditions called Akoma Ntoso. Akoma Ntoso started at the United Nations and is supported by the European Parliament, the Library of Congress, and others around the world. A variant of Akoma Ntoso, U.S. Legislative Markup (USLM), has been established for standardization efforts in Washington, D.C.
Vergottini framed the day’s discussion with the question, “Why is this happening now?” Here are his top three reasons:
The need for a transparency platform. The worldwide transparency movement has created a need for an efficient and effective way for governments to share information with the public, and not just in a human-readable way but in a way that would allow independent organizations to electronically process the data to better inform and engage the public.
The need for process modernization. Today’s cloud-based computing architectures naturally promote the adoption of standards, and that extends to the way the data itself is shared.
The need for long-term preservation of digital data. While digital preservation matters for all organizations, it is especially important for legislative and regulatory information, which is long-lived by nature. Long-term preservation requires nonproprietary data formats: a proprietary data file may not be readable 20-50 years from now, but an XML standard text format certainly will be.
Reynold Schweickhardt, Director of Technology Policy at the House Committee on Administration, has been working on modernizing the legislative process since 1997. According to Mr. Schweickhardt, the House is aiming to create a “digital workflow with transparency baked in,” rather than focus on publishing data versions of a paper process.
The Administration Committee’s vision: a full ecosystem of legislative data that enables lawmakers to take all the places where law appears — bills, laws, statutes — and “cut and paste those pieces back and forth without having to reinterpret them.”
Mr. Schweickhardt’s current projects include the Legislative Modernization Initiative and the Amendment Impact Program. The Legislative Modernization Initiative uses tools like lll.linkedlegislation.com to allow more flexible, reference-based legislative lookups. The Amendment Impact Program will embed XML data for a proposed legislative amendment into its PDF file, enabling redlining tools to automatically interpret an amendment’s impact on an underlying bill.
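The details of the Amendment Impact Program’s data format have not been published; as a minimal sketch of the underlying idea, the hypothetical record below shows how a strike-and-insert instruction expressed as structured data (rather than prose) lets a redlining tool apply an amendment to a bill mechanically. The field names and schema here are invented for illustration.

```python
# Hypothetical sketch: structured amendment data lets software compute an
# amendment's impact on a bill automatically. The record format below is
# invented for illustration; it is not the House's actual schema.

def apply_amendment(bill_sections, amendment):
    """Apply a structured strike/insert instruction to a bill."""
    updated = dict(bill_sections)
    sec = amendment["section"]
    if amendment["action"] == "strike_and_insert":
        updated[sec] = updated[sec].replace(
            amendment["strike"], amendment["insert"])
    return updated

bill = {"Sec. 2": "The agency shall report every 180 days."}
amendment = {
    "section": "Sec. 2",
    "action": "strike_and_insert",
    "strike": "180 days",
    "insert": "90 days",
}
result = apply_amendment(bill, amendment)
print(result["Sec. 2"])  # The agency shall report every 90 days.
```

With the amendment as prose in a PDF, a human must reread and reinterpret it; with the same instruction as data, the redline is a computation.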
Over the longer term, the U.S. Legislative Markup standard is being used to publish a more flexible version of the U.S. Code. If the U.S. Legislative Markup standard were adopted for laws, bills, and amendments, codification could be made automatic.
Kirsten Gullickson, Senior Systems Analyst at the U.S. House of Representatives, posed the question, “Why is it taking so long to transform legislative information into data?” Working with documents has remained the norm: the legal profession has insisted on retaining access to familiar legal documents. But future tools will allow legislative drafters to focus on the content and apply electronic structure automatically. Gullickson believes it will soon be possible to persuade drafters to use XML editing tools that produce legislation natively in structured electronic formats.
The House has made significant progress in recent years. The Administration Committee and the Clerk’s office have published docs.house.gov; a recent House rule requires all committees to submit hearing announcements in structured formats for publication there.
Gullickson divided data technologies for legislation between “generation 1,” in which XML formats express visual characteristics like bolding and indents but not organization or meaning, and “generation 2,” in which legal structure is expressed electronically. The House’s current XML format, in use since 2004, is a “generation 1” technology; by moving to the U.S. Legislative Markup, Congress would start “generation 2.”
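The generation 1 versus generation 2 distinction can be made concrete with two toy fragments. Both fragments below are invented for illustration — real House XML and USLM are far richer — but they show why only structural markup is mechanically useful:

```python
# Illustrative contrast between "generation 1" (presentational) and
# "generation 2" (structural) markup. Both fragments are invented;
# actual House XML and USLM schemas are much richer than this.
import xml.etree.ElementTree as ET

GEN1 = '<doc><b>SEC. 2.</b> <i>DEFINITIONS.</i> In this Act...</doc>'
GEN2 = ('<section identifier="s2">'
        '<num>SEC. 2.</num><heading>DEFINITIONS.</heading>'
        '<content>In this Act...</content></section>')

# Generation 1 records how the text looks; a program cannot reliably
# tell which bold run is a section number and which is mere emphasis.
gen1 = ET.fromstring(GEN1)
print([child.tag for child in gen1])  # ['b', 'i']

# Generation 2 records what the text *is*; section numbers and headings
# can be extracted, cited, and cross-referenced mechanically.
gen2 = ET.fromstring(GEN2)
print(gen2.find("num").text)      # SEC. 2.
print(gen2.find("heading").text)  # DEFINITIONS.
```

The same printed page can be produced from either format; only the second supports the “cut and paste without reinterpreting” ecosystem described above.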
Shashank Khandelwal, currently a software developer at the General Services Administration’s 18F technology team, previously spent time at the Consumer Financial Protection Bureau seeking to make that agency’s regulations easier for citizens to understand. Mr. Khandelwal helped write software that could access the Code of Federal Regulations, read an initial rule, follow the Federal Register, figure out how any new rules changed existing ones, and implement those changes. Mr. Khandelwal’s software allowed the CFPB to publish an automatically-updating electronic version of some of its own regulations with expandable sections and user-friendly navigation.
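The core idea of that software — a base regulation plus a stream of final rules, from which the current text can be computed for any date — can be sketched in a few lines. The data structures below are invented for illustration and are much simpler than what the CFPB tool actually handles.

```python
# Hedged sketch of the idea behind the CFPB tool: keep a base regulation
# plus a list of final rules from the Federal Register, and compute the
# regulation text as of any date. Structures are invented for illustration.
from datetime import date

base = {"1026.1": "Original text of section 1026.1."}

final_rules = [
    {"effective": date(2015, 1, 1), "section": "1026.1",
     "text": "Amended text of section 1026.1."},
]

def regulation_as_of(base, rules, when):
    """Replay all rules effective on or before `when` over the base text."""
    current = dict(base)
    for rule in sorted(rules, key=lambda r: r["effective"]):
        if rule["effective"] <= when:
            current[rule["section"]] = rule["text"]
    return current

print(regulation_as_of(base, final_rules, date(2014, 6, 1))["1026.1"])
print(regulation_as_of(base, final_rules, date(2016, 6, 1))["1026.1"])
```

Because the regulation is data rather than a frozen document, the published version can update itself whenever a new final rule takes effect.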
Mr. Khandelwal echoed Ms. Gullickson’s call for drafters to use tools that natively create legal materials in structured formats. He noted that the XML format currently used by the Federal Register “generates the [structure] based on how they want the text to be printed,” just like the House’s current “generation 1” format.
David Zvenyach, also a member of the 18F team, previously served as general counsel to the Council of the District of Columbia, where he published an electronic version of the D.C. Code with each section separately linkable with its own Web URL. Given the chance, developers quickly translated the text of the code into fully-electronic XML, but Mr. Zvenyach found that he couldn’t publish the official version of the code that way; applicable law requires legal codes to be authenticatable – a problem that has not yet been solved for XML.
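One reason authentication is harder for XML than for a fixed-layout document is that the same legal content can be serialized as different bytes, so a naive digest over the file is fragile. The sketch below, with invented fragments, shows that problem and one ingredient of any fix: hashing a canonical re-serialization rather than the raw bytes. It is not a complete authentication scheme.

```python
# Sketch of one difficulty in authenticating XML: two serializations of
# the same content hash differently, so a scheme must first canonicalize.
# Fragments are invented; this is not a full authentication solution.
import hashlib
import xml.etree.ElementTree as ET

a = '<section><num>1</num></section>'
b = '<section ><num>1</num></section>'  # same content, extra whitespace

def digest(xml_text):
    """SHA-256 hex digest of a UTF-8 serialization."""
    return hashlib.sha256(xml_text.encode("utf-8")).hexdigest()

# Byte-level hashes differ even though the parsed content is identical...
print(digest(a) == digest(b))  # False

# ...so any digest or signature must cover a canonical re-serialization.
canon_a = ET.tostring(ET.fromstring(a)).decode()
canon_b = ET.tostring(ET.fromstring(b)).decode()
print(digest(canon_a) == digest(canon_b))  # True
```

A full solution would also need a trusted signature over that canonical form, which is the part existing law and practice have not yet settled for XML codes.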
With “generation 1” XML already in wide use for both laws and regulations, and specialized tools like the House’s Amendment Impact Program near completion, the U.S. government will soon be considering further steps that will make such materials even more accessible. Yesterday’s panelists seemed to agree on several of these steps: (1) more sophisticated formats that express meaning, like the USLM; (2) native drafting tools that allow legislative and regulatory documents to begin life in those formats, rather than needing to be translated; and (3) purpose-driven structures, such as special tags for appropriations line items and mandatory directives like the word “shall.”
The Coalition will continue to educate leaders on the need for the federal government to pursue these steps. Data-driven policymaking is technologically possible already; slowly, the policy changes are catching up.