Lawmakers moved to operationalize suggestions made by a congressionally mandated AI commission.

Two new pieces of Senate legislation aim to accelerate the study, fielding and procurement of artificial intelligence capabilities across some agencies and the military, ensure transparency in the government’s deployments of the evolving technology, and confront relevant expertise gaps in the federal workforce.

Provisions in the Artificial Intelligence Capabilities and Transparency, or AICT Act, and Artificial Intelligence for the Military, or AIM Act—recently unveiled by Sens. Rob Portman, R-Ohio, and Martin Heinrich, D-N.M.—broadly incorporate recommendations made by the congressionally mandated National Security Commission on AI.

“[AI] presents both opportunities and challenges for our nation’s security and we need to be prepared for both,” Heinrich said in a statement. He and Portman have put forth multiple AI-focused bills since co-founding the Senate AI Caucus in 2019, and several of those passed via the previous two National Defense Authorization Acts. The new bills could also advance as part of this year’s NDAA, a legislative aide told Nextgov and others in an email Friday.

AI adoption is on the rise, but its uses vary widely across the government’s enterprise.

Under the 16-page AICT Act, the National Institute of Standards and Technology would be required to create an accreditation assessment program. The program would ultimately certify an organization’s ability to review AI systems used by the FBI, Defense and Energy departments, and intelligence community, and pinpoint privacy, civil rights and civil liberties impacts on people in the U.S.

Building on infrastructure the National Science Foundation is already implementing, the bill would also direct the agency to form new, federally funded National Artificial Intelligence Institutes over the next year with programs that explicitly home in on AI safety and AI ethics.

That legislation states that “‘artificial intelligence ethics’ includes the quantitative analysis of [AI] systems to address matters relating to the effects of such systems on individuals and society, such as matters of fairness or the potential for discrimination” and “‘artificial intelligence safety’ includes technical efforts to improve [AI] systems in order to reduce adverse and unintentional effects of such systems.”

Among other provisions, the AICT Act would require chief digital recruiting officers to be named at the DOD, DOE and the IC. Those individuals would identify needs across the entities’ talent pipelines and help grow the tech-skilled federal workforce. The bill would also establish a $50 million pilot AI development and prototyping fund within the Pentagon, encompassing work to refine and transition promising technologies that could be operationalized by the military, and mandate that DOD establish a new resourcing plan aimed at staying on top of AI applications.

“This act is primarily based on the consensus recommendations of the National Security Commission on Artificial Intelligence,” officials wrote in the bill. Called for by the NDAA for fiscal year 2019, that commission has since filed numerous reports detailing steps that should be taken to help promote U.S. superiority in the field, particularly within a rapidly changing global landscape.

The five-page AIM Act operationalizes additional NSCAI recommendations. That bill would require the Pentagon to implement several education and training programs to help ensure junior officers and senior DOD officials are prepared to make use of AI and other applicable emerging technologies.

“I look forward to working with my colleagues to pass these bills so that we can continue to implement the good ideas that the commission has spent so long developing,” Portman said. “Ensuring that AI is trustworthy and transparent, and that our warfighters are skilled in the nuances of emerging technology are common sense priorities.”