The White House Takes Proactive Steps Towards AI Risk Management: A Call to Action for Enterprises

By Mark Kelly

The White House recently disclosed plans for investments and initiatives aimed at putting the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF 1.0) into practice. The Biden administration is teaming up with industry giants such as Alphabet, Anthropic, Microsoft, and OpenAI to tackle AI risks head-on, with a particular focus on generative AI. Several government agencies, including the Department of Justice, Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC), have introduced AI principles that endorse extensive data collection and analysis to help curb bias and discrimination.

Discussions with AI pioneers reveal that AI governance is still in its infancy, but a significant shift is imminent, one that will affect every business. Leaders need to brace themselves, as they will be held accountable for their organization’s use of AI. Currently, 17 states, along with the District of Columbia, are considering AI-related legislation, and AI task forces are reevaluating existing laws concerning cyberattacks, surveillance, privacy, discrimination, and the prospective impacts of AI.

For enterprise AI governance to be effective, it is crucial to ensure:

1. Assessment of AI embedded in applications and platforms: A significant 51% of data and analytics decision-makers are procuring applications with built-in AI functionality, while 45% are leveraging pretrained AI models. Companies need AI guidelines to evaluate the efficacy, responsibility, and potential risks of these tools for business and data processing. Vendors must demonstrate how they can move models on-premises, allow embedded AI features to be configured or switched off, and issue updates when their software-as-a-service offering’s embedded AI conflicts with enterprise policies.

2. Control over IP use and infringement: Foundation models and generative AI can expose companies to IP infringement claims. The US Supreme Court recently let stand rulings that IP creation is the domain of humans, not AI, a stance echoed in other countries such as Australia. Businesses need a comprehensive understanding of data sources, a process for validating training data, algorithms, and code, and automated controls to guard against IP violations.

3. Implementation of product safety standards for AI: AI leaders, including Alphabet’s Sundar Pichai, have advocated for regulation rather than proactively addressing AI risk themselves, inadvertently contributing to a rise in harmful propaganda and misinformation. The EU AI Act seeks to counter this by extending product safety regulation to AI use. In the US, the CFPB and FTC are evaluating existing product safety, libel, and consumer protection laws. Legal teams must prepare for regulatory compliance and potential class-action lawsuits as regulators scrutinize enterprise AI capabilities.

4. Inclusivity in AI ethics: AI ethics is incomplete without inclusivity. As black-box machine learning models such as large language models and deep neural networks proliferate, organizations may struggle to ensure that model behavior complies with civil or human rights laws. Companies must take measures to minimize bias in training data and model outputs, and involve a broad spectrum of stakeholders in discussions about AI and ethics.

5. Maintenance of data integrity and transparency: Companies need to be able to trace and explain their data. A proposed regulation in New York State would require disclosure of data sources and any use of synthetic data. Today, most organizations track data sources and monitor AI only once a model is in operation; data governance will need to extend into data science processes and data sourcing to proactively address data transparency and usage rights throughout the AI lifecycle.
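To make the traceability described in point 5 concrete, the sketch below shows one way an enterprise might attach a provenance record to each training dataset, capturing source, usage rights, synthetic-data disclosure, and a content checksum. This is a minimal, hypothetical illustration; the field names and structure are assumptions, not a prescribed standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class DatasetProvenance:
    """A minimal, illustrative provenance record for a training dataset."""
    name: str                 # dataset identifier
    source: str               # where the data came from
    usage_rights: str         # license or terms under which it may be used
    contains_synthetic: bool  # disclosure of any synthetic data
    collected_on: str         # ISO date of acquisition
    checksum: str             # content fingerprint for integrity checks


def build_provenance(name, source, usage_rights, contains_synthetic, raw_bytes):
    """Create a provenance record, fingerprinting the raw content with SHA-256."""
    return DatasetProvenance(
        name=name,
        source=source,
        usage_rights=usage_rights,
        contains_synthetic=contains_synthetic,
        collected_on=date.today().isoformat(),
        checksum=hashlib.sha256(raw_bytes).hexdigest(),
    )


# Hypothetical example: record provenance for an internal dataset.
record = build_provenance(
    name="customer_feedback_2023",
    source="internal CRM export",
    usage_rights="internal-use-only",
    contains_synthetic=True,
    raw_bytes=b"example dataset contents",
)
print(json.dumps(asdict(record), indent=2))
```

Records like this, stored alongside each dataset, give audit and legal teams a starting point for answering the disclosure questions that regulations such as the proposed New York State rule would raise.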

As the use of AI attracts more regulatory and legal scrutiny, enterprises must move swiftly to establish robust AI governance as a safeguard against risk. It’s a mistake to leave AI governance solely to data science and AI teams. Creating effective AI governance will require a collaborative approach that involves CEOs, leadership teams, and business stakeholders in the development of sound procedures and policies.