AG Project: Authorities’ suggestions for AI systems

AI risk management

On Thursday, January 26, 2023, the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, published the AI Risk Management Framework (AI RMF 1.0) for voluntary use by organisations designing, developing, deploying, or using AI systems, to help them manage the risks of AI. The framework consists of two parts and aims to enhance AI trustworthiness and manage risks in line with democratic values:

Firstly, in Part 1 of the framework:

First, it discusses how organisations can frame the risks related to AI and describes the intended audience. It then analyses AI risks and trustworthiness, and outlines the characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful biases managed.

  1. To frame the risks, one should (1) understand and address risks, impacts, and harms. In this context, risk refers to the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of that event, while risk management means coordinated activities to direct and control an organisation with regard to risk (based on ISO 31000:2018). The framework then notes some (2) challenges for AI risk management in pursuit of AI trustworthiness, concerning risk measurement, tolerance, and prioritisation.
  2. Based on the OECD Framework for the Classification of AI Systems (OECD Digital Economy Papers), NIST adapted the lifecycle and key dimensions of an AI system, with people and planet at the core. The AI lifecycle has seven stages:
    1. Plan & Design, with the key dimension “Application Context”
    2. Collect & Process Data, with the key dimension “Data & Input”
    3. Build & Use Model, with the key dimension “AI Model”
    4. Verify & Validate, also with the key dimension “AI Model”
    5. Deploy & Use, with the key dimension “Task & Output”
    6. Operate & Monitor, with the key dimension “Application Context”
    7. Use or Impact, by People & Planet.
  3. Characteristics of trustworthy AI systems are inclusive of valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. 
  4. The Framework encourages its users to periodically evaluate AI RMF effectiveness regarding policies, processes, practices, implementation plans, indicators, measurements, and expected outcomes. 
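To make the framework’s risk definition concrete, the sketch below scores risks as probability of occurrence times magnitude of consequences and ranks them for prioritisation. This is purely illustrative: the AI RMF defines risk qualitatively, and the 1–5 magnitude scale, the product formula, and the example risks are hypothetical choices, not part of the framework.

```python
# Hypothetical sketch: NIST describes risk as a composite of an event's
# probability and the magnitude of its consequences. The scales and the
# multiplicative formula here are illustrative assumptions, not AI RMF rules.

def risk_score(probability: float, magnitude: int) -> float:
    """Composite risk: probability of occurrence (0-1) times
    magnitude of consequences (1 = negligible ... 5 = severe)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if magnitude not in range(1, 6):
        raise ValueError("magnitude must be an integer in 1..5")
    return probability * magnitude

# Prioritisation: rank hypothetical AI risks by composite score.
risks = {
    "harmful bias in outputs": risk_score(0.5, 4),  # 2.0
    "model downtime": risk_score(0.25, 2),          # 0.5
    "privacy leakage": risk_score(0.2, 5),          # 1.0
}
for name, score in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

A real deployment would replace these point estimates with the measurement, tolerance, and prioritisation practices the framework discusses; the point here is only the composite-measure idea.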

Secondly, Part 2 presents the “Core” of the Framework: 

It describes four functions that help organisations address the risks of AI systems in practice: GOVERN, which applies to all stages of an organisation’s AI risk management processes and procedures; and MAP, MEASURE, and MANAGE, which can be applied in AI system-specific contexts and at specific stages of the AI lifecycle.

In detail:

  1. The GOVERN function
    1. cultivates and implements a culture of risk management within organisations designing, developing, deploying, evaluating, or acquiring AI systems;
    2. outlines processes, documents, and organisational schemes that anticipate, identify, and manage the risks a system can pose, including to users and others across society – and procedures to achieve those outcomes;
    3. incorporates processes to assess potential impacts;
    4. provides a structure by which AI risk management functions can align with organisational principles, policies, and strategic priorities;
    5. connects technical aspects of AI system design and development to organisational values and principles, and enables organisational practices and competencies for the individuals involved in acquiring, training, deploying, and monitoring such systems; and
    6. addresses full product lifecycle and associated processes, including legal and other issues concerning use of third-party software or hardware systems and data.
  2. The MAP function establishes the context to frame risks related to an AI system.
  3. The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyse, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the MAP function and informs the MANAGE function. 
  4. The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.
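The interplay of the four functions can be sketched as a minimal loop: GOVERN sets cross-cutting policy, and MAP, MEASURE, and MANAGE run in a system-specific context. All function bodies, scores, and the tolerance value below are hypothetical illustrations; the AI RMF defines these as organisational activities, not code.

```python
# Hypothetical sketch of the four AI RMF functions as a workflow.
# The data shapes, scores, and tolerance are illustrative assumptions.

def govern() -> dict:
    """GOVERN: cross-cutting policy that frames the other functions."""
    return {"risk_tolerance": 2.0, "review_cycle": "quarterly"}

def map_risks(context: str) -> list[str]:
    """MAP: establish the context and enumerate risks for one AI system."""
    return [f"{context}: harmful bias", f"{context}: privacy leakage"]

def measure(risks: list[str]) -> dict[str, float]:
    """MEASURE: score each mapped risk (fixed illustrative scores)."""
    return {r: 1.5 + 0.5 * i for i, r in enumerate(risks)}

def manage(scores: dict[str, float], policy: dict) -> list[str]:
    """MANAGE: allocate treatment to risks at or above the governed tolerance."""
    return [r for r, s in scores.items() if s >= policy["risk_tolerance"]]

policy = govern()  # applies to all stages of the process
treated = manage(measure(map_risks("credit-scoring model")), policy)
print(treated)  # risks meeting the tolerance threshold get treatment plans
```

The design choice to make GOVERN a separate, always-available policy object mirrors the framework’s description of GOVERN as applying across all stages, while the other three functions chain per system and per lifecycle stage.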

Finally, AI RMF use-case profiles are implementations of the AI RMF functions, categories, and subcategories for a specific setting or application. They are based on the requirements, risk tolerance, and resources of Framework users; align with their goals; consider legal and regulatory requirements and best practices; and reflect risk management priorities.

For more governmental authorities’ suggestions: https://www.astraiagear.com/category/ag-project/


For more short news, connect with us on LinkedIn
