Wednesday, December 4, 2024

Airmic poll reveals lack of AI risk assessments among firms




Insurance Business America
Updated methodology to be produced



Risk Management News

By Terry Gangcuangco

A recent survey conducted among Airmic members has shed light on a concerning gap in risk management practices related to artificial intelligence (AI).

Conducted on February 26, the poll found that as many as half of organisations have yet to perform risk assessments for AI technologies. Among those that have examined the risks, data protection and intellectual property emerged as the primary areas of concern.

Other threats include the risk of making decisions based on incorrect information, as well as ethical risks and those relating to bias and discrimination.

“Research indicates that most organisations, when they do conduct an AI risk assessment, are using traditional risk assessment frameworks better suited to the pre-AI world of assessment – this is an area of risk management still in its infancy for many,” Graham said, highlighting the inadequacy of existing frameworks in addressing the unique challenges posed by AI.

Meanwhile, Hoe-Yeong Loke, Airmic’s head of research, noted how governments are responding in terms of AI regulation.

“Many governments are just beginning to develop policies and laws specific to AI, while those that have are competing to put their stamp on how this emerging technology will develop,” the research head said.

“Understandably, there is no universally accepted model for assessing AI risk, but risk professionals can look to existing published standards such as ISO/IEC 23894:2023 Artificial intelligence – Guidance on risk management.”

In response to the findings, Airmic announced plans to develop an updated methodology for AI risk assessments. The initiative will involve collaboration with Airmic members and industry stakeholders, aiming to craft a framework that addresses the unique risks associated with AI technologies.

What do you make of this story? Share your thoughts in the comments below.

