Responsible AI must be a priority, now



Responsible artificial intelligence (AI) must be embedded in the DNA of a company.

“Why is bias in AI something we all need to think about today? It’s because AI is driving everything we do today,” Miriam Vogel, president and CEO of EqualAI, told a livestream audience during this week’s Transform 2022 event.

Vogel discussed the topics of AI bias and responsible AI in depth in a fireside chat led by Victoria Espinel of the trade group BSA | The Software Alliance.

Vogel has extensive experience in technology and policy, including at the White House, the U.S. Department of Justice (DOJ), and the nonprofit EqualAI, which is dedicated to reducing unconscious bias in the development and use of AI. She also serves as chair of the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy.

As Vogel pointed out, AI is becoming more and more important to our daily lives, and greatly improving them, but at the same time, we need to understand the many inherent risks of AI. Everyone — builders, creators and users alike — must make AI “our partner,” as well as efficient, effective and trustworthy.

“You can’t build trust with your app if you’re not sure it’s safe for you, that it’s designed for you,” Vogel said.

Now is the time

We must address responsible AI now, Vogel said, as we are still setting “the rules of the road.” What constitutes AI is still something of a “gray area”.

What if it is not addressed? The consequences could be dire. People may be denied adequate healthcare or employment opportunities as a result of AI bias, and “litigation will come, regulation will come,” Vogel warned.

When that happens, “we can’t unpack the AI systems that we’ve become so dependent on and that have become intertwined,” she said. “Right now, today, is the time for us to be very mindful of what we’re building and deploying, making sure we’re assessing the risks, making sure we’re reducing those risks.”

Good ‘AI hygiene’

Companies must address responsible AI now by establishing strong governance practices and policies, and by fostering a safe, collaborative and visible culture. This has to be pulled through the organization’s levers and handled with care and intentionality, Vogel said.

For example, in recruitment, companies can start by simply asking if the platforms have been tested for discrimination.

“Just that basic question is extremely powerful,” Vogel said.

An organization’s human resources team must be supported by AI that is inclusive and doesn’t rule out the best candidates for employment or promotion.

It’s a matter of “good AI hygiene,” Vogel said, and it starts with the C-suite.

“Why the C-suite? Because at the end of the day, if you don’t have buy-in at the highest levels, you can’t implement the governance framework, you can’t get investment in the governance framework, and you can’t get commitment to make sure you’re doing it the right way,” Vogel said.

Also, detecting bias is an ongoing process: once a framework has been established, there needs to be a long-term process to continually assess whether bias is hampering systems.

“Bias can be embedded in every human touch point” from data collection to testing, design, development and deployment, Vogel said.

Responsible AI: a problem at the human level

Vogel noted that the conversation around AI bias and AI liability was initially limited to programmers, but she feels that is “unfair.”

“We cannot expect them to solve humanity’s problems by themselves,” she said.

It’s human nature: people often imagine only as far as their experience or creativity allows. So the more voices that can be brought in, the better, to determine best practices and ensure that the age-old problem of bias does not creep into AI.

This is already underway, with governments around the world crafting regulatory frameworks, Vogel said. The EU, for instance, is creating a GDPR-like regulation for AI. Additionally, in the U.S., the Equal Employment Opportunity Commission and the Department of Justice recently released an “unprecedented” joint statement on reducing discrimination against people with disabilities, something that AI and its algorithms could make worse if left unchecked. The National Institute of Standards and Technology was also mandated by Congress to create a risk management framework for AI.

“We can expect a lot from the US in terms of AI regulation,” Vogel said.

This includes the recently formed committee that she now chairs.

“We are going to have an impact,” she said.

Don’t miss the full conversation from the Transform 2022 event.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
