“The framework establishes a set of binding requirements for federal agencies to put in place safeguards for the use of AI so that we can harness its benefits and allow the public to trust the services provided by the federal government,” said Jason Miller, the OMB’s deputy director for management.
The draft memo highlights certain uses of AI in which the technology may harm rights or safety, including health care, housing, and law enforcement—all situations where algorithms have, in the past, led to discrimination or denial of services.
Examples of potential safety risks mentioned in the OMB draft include the automation of critical infrastructure like dams and autonomous vehicles like the Cruise robotaxis that were shut down last week in California. The vehicles are under investigation by federal and state regulators after a pedestrian struck by a vehicle was dragged 20 feet. Examples in the draft memo of how AI could violate citizens’ rights include predictive policing, AI capable of blocking protected speech, plagiarism- or emotion-detection software, tenant-screening algorithms, and systems that may impact immigration or child custody.
According to the OMB, federal agencies currently use more than 700 algorithms, although inventories provided by federal agencies are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “We hope that in the coming weeks and months we will improve agencies’ ability to identify and report on their use cases,” he says.
Vice President Kamala Harris mentioned the OMB memo and other responsible AI initiatives in a speech today at the U.S. Embassy in London, a trip made for the UK AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, such as the role AI may one day play in cyberattacks or the creation of biological weapons, bias and misinformation are already being amplified by AI and affecting individuals and communities every day.
Merve Hickok, author of a forthcoming book on AI procurement policy and a researcher at the University of Michigan, welcomes how the OMB memo would require agencies to justify their use of AI and assign responsibility for the technology to specific people. That’s a potentially effective way to ensure that AI isn’t integrated into every government program, she says.
But granting exemptions could weaken those mechanisms, she fears. “I would be concerned if we started to see agencies using the waiver widely, particularly law enforcement, homeland security, and surveillance,” she says. “Once they get the waiver, it can be indefinite.”