AG Project: Authorities’ suggestions for AI systems

U.S. Office of the Director of National Intelligence (ODNI): the Artificial Intelligence (AI) Ethics Framework for the Intelligence Community, offered as suggested guidance for AI systems.

The ODNI released version 1.0 of the framework in June 2020. On the one hand, the Office underlines that the framework is a living guide rather than an official document or checklist, and that it continues to welcome feedback. On the other hand, several audit reports have already been conducted based on this approach, which is summarized in the following ten points.

  1. Purpose of the AI System: you must understand its goals and risks, so that you use the AI as an appropriate means of achieving the intended purposes after evaluating the potential risks;
  2. Legal Obligations & Policy Considerations Governing the AI & Data: the AI must be used in a manner that respects the human, legal, and contractual rights of affected subjects and complies with applicable legal obligations and policy requirements;
  3. Human Judgment and Accountability: human judgment and accountability must be incorporated into the AI project to address and resolve risks across the AI lifecycle and to appropriately inform decisions;
  4. Mitigating Undesired Bias and Ensuring Objectivity: intelligence analysis should be conducted pursuant to Intelligence Community Directive 203 (January 2, 2015), which requires that the AI function be performed with objectivity and with awareness of one's own assumptions and risks. Any potential undesired bias should be identified, accounted for, and mitigated to the extent practicable without undermining the system's efficacy and utility. In particular, the analysis must be consistent with the following five Analytic Standards: (i) objective; (ii) independent of political considerations; (iii) timely; (iv) based on all available sources of intelligence information; and (v) implementing and exhibiting the Analytic Tradecraft Standards, including the quality and credibility of underlying sources, data, methodologies, and expression, and the explanation of uncertainties associated with major analytic judgments. A minimal bias-check sketch appears after this list.
  5. Test your AI: the system must be examined for accuracy, prior to deployment, in an environment that controls for known and reasonably foreseeable risks associated with the use of that AI system; a minimal testing sketch appears after this list.
  6. Maintaining Accountability for Builds, Versions, and Evolutions of an AI Project: both while the system is trained on training data and after it matures and is applied to mission, analytic, and business data (the record sketch after this list ties each build to its version and training data).
  7. Documentation of Purpose, Parameters, Limitations and Design Outcomes: you may want to document and communicate (i) the objective of the AI system, (ii) the uses that should be avoided, (iii) its design, and (iv) its limitations, as in the record sketch after this list.
  8. Transparency, Explainability, and Interpretability: you should use explainable and understandable methods to the extent practicable, so that users and the public can understand how and why the AI system generated its outputs; one such method is sketched after this list.
  9. Periodic Review: the AI system should be checked at an appropriate, documented interval to ensure that its purpose is met and that undesired bias, unexpected outcomes, and other identified issues are appropriately mitigated or resolved; a simple drift check is sketched after this list.
  10. Stewardship and Accountability for Training Data, Algorithms, Models, Model Outputs, and Documentation: before deployment, the person who will be accountable for the AI system, its continued maintenance, monitoring, updating, and decommissioning, and its effects at each phase of and across its lifecycle must be identified.
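
To make point 4 more concrete, here is a minimal sketch of one way to surface potential undesired bias before analyst review: comparing a model's positive-outcome rate across groups. The group labels, the example data, and the 0.8 review threshold are illustrative assumptions, not part of the ODNI framework.

```python
# Minimal sketch of an undesired-bias check: compare a model's positive-outcome
# rate across two hypothetical groups. Group names, data, and the threshold are
# illustrative assumptions only.
import numpy as np

def selection_rate(predictions: np.ndarray, group_mask: np.ndarray) -> float:
    """Share of positive predictions within one group."""
    return float(predictions[group_mask].mean())

def disparity_ratio(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = [selection_rate(predictions, groups == g) for g in np.unique(groups)]
    return min(rates) / max(rates)

# Illustrative data: one binary prediction and one group label per record.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

ratio = disparity_ratio(preds, groups)
print(f"Selection-rate disparity ratio: {ratio:.2f}")
if ratio < 0.8:  # 0.8 is an assumed review threshold, not an official rule
    print("Potential undesired bias: flag for analyst review and mitigation.")
```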
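
For point 5, the sketch below shows one possible pre-deployment accuracy gate. The scikit-learn dataset, the logistic-regression model, and the 0.90 acceptance threshold are stand-ins; the framework itself only requires that accuracy be examined in a controlled environment before deployment.

```python
# Minimal sketch of a pre-deployment accuracy gate using scikit-learn.
# Model, dataset, and threshold are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# A held-out test split stands in for the "controlled environment".
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

REQUIRED_ACCURACY = 0.90  # assumed acceptance criterion, documented before testing
print(f"Held-out accuracy: {accuracy:.3f}")
if accuracy < REQUIRED_ACCURACY:
    raise RuntimeError("Accuracy below the documented threshold: do not deploy.")
```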
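
For points 6 and 7, one simple way to keep builds and documentation traceable is a model-card-style record that captures the system's objective, out-of-scope uses, design, limitations, version, and a fingerprint of the training data. Every field value below is hypothetical.

```python
# Minimal "model card"-style record covering the documentation items in points
# 6 and 7. All field values are hypothetical.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class ModelRecord:
    name: str
    version: str
    purpose: str               # (i) objective of the AI system
    out_of_scope: str          # (ii) uses that should be avoided
    design_summary: str        # (iii) design
    known_limitations: str     # (iv) limitations
    training_data_sha256: str  # ties this build to the exact training data
    accountable_owner: str

# In practice, hash the actual training-data files; a placeholder byte string
# keeps this sketch self-contained.
data_hash = hashlib.sha256(b"placeholder training data").hexdigest()

record = ModelRecord(
    name="report-triage-classifier",  # hypothetical system name
    version="1.3.0",
    purpose="Prioritize incoming reports for analyst review.",
    out_of_scope="Not to be used as the sole basis for any adverse decision.",
    design_summary="Gradient-boosted trees over structured report metadata.",
    known_limitations="Under-represents reports in low-resource languages.",
    training_data_sha256=data_hash,
    accountable_owner="AI steward, analytics directorate",
)
print(json.dumps(asdict(record), indent=2))
```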
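
For point 8, a model-agnostic option is permutation feature importance, which estimates how much each input feature drives a model's held-out accuracy. The model and dataset are the same illustrative ones used in the testing sketch; the framework does not prescribe any particular explanation method.

```python
# Minimal sketch of a model-agnostic explanation step: permutation feature
# importance. Dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda x: -x[1])
for feature, importance in ranked[:5]:
    print(f"{feature}: mean accuracy drop {importance:.4f}")
```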
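
For point 9, a periodic review can include a simple drift check that compares recent model scores against a baseline captured at deployment. The synthetic scores, the Kolmogorov-Smirnov test, and the 0.01 alert level below are illustrative assumptions.

```python
# Minimal sketch of a periodic-review drift check on model scores.
# Synthetic data and the alert level are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_scores = rng.normal(loc=0.6, scale=0.10, size=1000)  # captured at deployment
recent_scores = rng.normal(loc=0.5, scale=0.12, size=1000)    # scores from the review window

statistic, p_value = ks_2samp(baseline_scores, recent_scores)
print(f"KS statistic {statistic:.3f}, p-value {p_value:.4f}")
if p_value < 0.01:  # assumed alert level
    print("Score distribution has shifted: trigger a documented review of the system.")
```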

For more short news, connect with us on LinkedIn

To have further discussion with me
