Kayvan Alikhani is the CEO and Co-founder of Compliance.ai, a RegTech company transforming the way highly regulated organizations address compliance risk. He notes that Isaac Asimov’s Three Laws of Robotics have their limits. The laws state three simple maxims: a robot must protect humans; it must obey humans; and, as long as doing so does not violate the first two laws, it must protect itself. Today, business leaders must figure out how to deploy AI in a way that doesn’t harm consumers, violate their privacy, or break any laws.
Facial Recognition: A cautionary tale?
Consumers use facial recognition to unlock their phones and sort digital photos. Law enforcement uses it to enforce no-fly lists and the like.
But how reliable is this technology? Facebook automatically identified people in photographs uploaded to the site and was sued over it, eventually settling a biometric-privacy class action in Illinois for $650 million. If a company uses AI in a way that harms the public interest, it can expect massive lawsuits.
The majority of US citizens believe AI should be carefully managed, and corporate leaders at Google, Microsoft, and Tesla agree on the need to regulate AI. Even the Catholic Church has emphasized the necessity of ethical AI that “protects people.”
Canada, France, and the Organization for Economic Co-operation and Development (OECD) have created the Global Partnership on AI (GPAI) to manage AI’s impact on society and to develop frameworks for meeting potential AI-related challenges. The US is the only G7 nation not to have signed on to the GPAI.
Experts and AI must work together to mitigate risks.
Expert-in-the-loop (EITL) is the most promising AI framework: it places human experts at key supervisory points in the AI decision-making workflow.
AI certainly handles chores that are difficult for humans to accomplish, but some tasks still need human supervision. Human intervention in AI decision-making limits errors, mitigates risks, and provides greater transparency into AI-based judgments and decisions, as the sketch below illustrates.
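To make the idea concrete, here is a minimal, illustrative sketch of an EITL gate in Python: predictions the model is confident about pass through automatically, low-confidence ones are escalated to a human expert, and every outcome is logged for later review. All names here (EITLPipeline, confidence_threshold, the stub callbacks) are hypothetical and not tied to any particular product.

```python
# Minimal expert-in-the-loop (EITL) sketch: route low-confidence AI
# predictions to a human expert and log every decision for auditability.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class Decision:
    item_id: str
    label: str
    confidence: float
    decided_by: str  # "model" or "expert"


@dataclass
class EITLPipeline:
    model: Callable[[str], Tuple[str, float]]        # returns (label, confidence)
    expert_review: Callable[[str, str, float], str]  # returns the expert's label
    confidence_threshold: float = 0.9
    audit_log: List[Decision] = field(default_factory=list)

    def classify(self, item_id: str) -> Decision:
        label, confidence = self.model(item_id)
        if confidence >= self.confidence_threshold:
            decision = Decision(item_id, label, confidence, decided_by="model")
        else:
            # Low confidence: escalate to a human expert for the final call.
            corrected = self.expert_review(item_id, label, confidence)
            decision = Decision(item_id, corrected, confidence, decided_by="expert")
        self.audit_log.append(decision)  # every outcome is traceable
        return decision


# Example usage with stubbed model and expert callbacks.
if __name__ == "__main__":
    pipeline = EITLPipeline(
        model=lambda item: ("match", 0.72),                   # stand-in for a real model
        expert_review=lambda item, label, conf: "no-match",   # human override
    )
    print(pipeline.classify("photo-123"))
```

The design choice worth noting is the audit log: even fully automated decisions are recorded, which is what gives reviewers and regulators visibility into how the system actually behaves.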
The absence of expert-driven checks and balances would leave organizations with no way to verify or influence AI decisions. Sensible, flexible, field-tested templates and laws are needed to guide AI development and deployment. The sooner this is recognized, the better for society.
#AIMonks #AI #ArtificialIntelligence #Regulate #GPAI #Facebook #HumanSupervision #Microsoft #Tesla #Google