ChatGPT: what legal and ethical issues do you need to know?

AG Project 2

On Jan 19, 2023, Boston Consulting Group (BCG)’s Henderson Institute published an article on the challenges of Responsible AI for generative AI systems such as ChatGPT. Below are the highlighted challenges, including both those that are already well known and thoroughly discussed and those that are new and specific to building and deploying systems like ChatGPT.

First, traditional challenges: 
  1. Regurgitation of private information 
  2. Safety and security issues arising from brittleness in the models 
  3. Lack of explainability of why a particular piece of content was generated 
  4. Intellectual Property rights infringement in code outputs 
  5. Pervasive bias in image outputs 
Measures for addressing these traditional ethical issues
  1. Ethics Assessment:
    • analysing prior AI incidents involving Large Language Models (LLMs) and Natural Language Processing (NLP),
    • outlining potential harms,
    • constructing an ethical matrix based on a specific use case, and
    • responding to questions from the US AI Ethics Framework on the “Purpose – Understanding Goal and Risks” of the system;
  2. Bias Testing:
    • developing a high-level bias testing plan,
    • selecting the type(s) of bias to test for quantitatively,
    • defining metrics for measuring bias (a minimal sketch is given after this list),
    • designing & conducting experiment(s),
    • analysing & documenting the results, and
    • responding to questions from the US AI Ethics Framework on “Mitigating Undesired Bias & Ensuring Objectivity”.
  3. Security Analysis:
    • identifying the third-party software package dependencies needed to run the AI/ML tool,
    • running health scans of packages using Snyk and creating a “Nutrition Label” to visualise health scores,
    • inspecting packages with scores below a particular threshold,
    • standing up a version of the tool, following the available documentation and configuration instructions,
    • searching the Common Vulnerabilities and Exposures (CVE) database for known vulnerabilities (a lookup sketch also follows this list),
    • testing to detect, communicate, review, and remedy vulnerabilities and to explore potential exploits, and
    • responding to questions from the US AI Ethics Framework on “Testing your AI.”
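
As a minimal sketch of what a quantitative bias metric can look like (the metric choice, group labels, and model outputs below are illustrative assumptions, not part of the BCG article), the following Python computes the statistical parity difference between the positive-outcome rates of two demographic groups:

    # Minimal sketch: statistical parity difference between two groups.
    # The data below is hypothetical; in practice the predictions would
    # come from the model under test.

    def positive_rate(predictions):
        """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
        return sum(predictions) / len(predictions)

    def statistical_parity_difference(preds_group_a, preds_group_b):
        """Difference in positive-outcome rates between two groups.
        A value near 0 suggests parity; a common (assumed) rule of thumb
        flags absolute differences above 0.1 for closer inspection."""
        return positive_rate(preds_group_a) - positive_rate(preds_group_b)

    # Hypothetical model outputs for two demographic groups:
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% positive
    group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25.0% positive

    spd = statistical_parity_difference(group_a, group_b)
    print(f"Statistical parity difference: {spd:.3f}")  # 0.375 -> investigate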
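
Likewise, the CVE search step of the Security Analysis can be scripted. The sketch below queries the public NVD REST API (v2.0); the endpoint, parameters, and response fields follow its published JSON schema as best understood, unauthenticated requests are rate-limited, and the package name is a hypothetical example:

    # Minimal sketch: querying the NVD CVE database for known
    # vulnerabilities affecting a third-party package.
    import requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def search_cves(package_name, max_results=5):
        """Return (CVE id, short description) pairs from a keyword search."""
        resp = requests.get(
            NVD_URL,
            params={"keywordSearch": package_name,
                    "resultsPerPage": max_results},
            timeout=30,
        )
        resp.raise_for_status()
        findings = []
        for item in resp.json().get("vulnerabilities", []):
            cve = item["cve"]
            desc = next(
                (d["value"] for d in cve.get("descriptions", [])
                 if d.get("lang") == "en"),
                "",
            )
            findings.append((cve["id"], desc[:120]))
        return findings

    # Hypothetical example package:
    for cve_id, summary in search_cves("log4j"):
        print(cve_id, "-", summary)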
Next, other new challenges:
  1. Massive capability overhang: there is still no reliable way to explain and control the system, or to understand and manage how its users put it to use.    
  2. Limited governance: organisations adopting AI struggle to manage the related privacy, risk, compliance, legal, and governance functions. Moreover, the much-touted democratisation of generative AI capabilities is hard to access in practice because of limited resources (i.e. people, skills, finance).   
  3. Unclear copyright and other legal liability: as with the traditional challenges, the training data can contain copyrighted material, while the origins of the outputs, and the permissions required to use them, remain difficult to control. 
  4. Erosion of customer trust: answers that lack reliability and consistency frustrate users, particularly when there is no clear explanation of how the system works, how it processes users’ data, and so on. For the moment, only the EU provides reasonably clear legislation on users’ recourse. 
  5. Environmental impact: researchers have estimated that training a single large NLP model emits as much carbon as five cars over their lifetimes. Two ideas follow from this: (1) carbon efficiency, i.e. optimising both hardware and software so the system delivers the most value or output per unit of computation, and hence per unit of energy expended; and (2) carbon awareness, i.e. adapting the operational parameters of AI systems to the state of the grid supplying the energy, choosing the most opportune times and locations so as to minimise the system’s carbon footprint (a worked sketch follows this list).      
  6. Centralising power in the giant tech companies: only large organisations with the enormous datasets, computing resources, and engineering expertise required can build and deploy foundation models and generative AI systems, and thus control how widely they are shared. As a result, power is likely to concentrate in a dominant few, not only in the tech world but also in the sectors applying these models and systems.   
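
To make ideas (1) and (2) of the environmental-impact point concrete: emissions equal the energy consumed times the grid’s carbon intensity, and carbon awareness means scheduling work when that intensity is lowest. All figures in this Python sketch are invented placeholders, not measurements from the cited research:

    # Minimal sketch of carbon efficiency and carbon awareness.
    # Every number below is a hypothetical placeholder.

    ENERGY_PER_JOB_KWH = 1_500.0  # assumed energy for one training job

    # Hypothetical grid carbon intensity (gCO2e per kWh) by hour of day:
    grid_intensity = {
        "02:00": 180.0,  # night: more idle renewables on this (assumed) grid
        "08:00": 320.0,
        "14:00": 250.0,
        "20:00": 410.0,  # evening peak
    }

    def emissions_kg(energy_kwh, intensity_g_per_kwh):
        """CO2e in kilograms: energy consumed times grid carbon intensity."""
        return energy_kwh * intensity_g_per_kwh / 1000.0

    # Carbon awareness: schedule the job in the cleanest available window.
    best_slot = min(grid_intensity, key=grid_intensity.get)
    worst_slot = max(grid_intensity, key=grid_intensity.get)

    saved = (emissions_kg(ENERGY_PER_JOB_KWH, grid_intensity[worst_slot])
             - emissions_kg(ENERGY_PER_JOB_KWH, grid_intensity[best_slot]))
    print(f"Best slot {best_slot}: "
          f"{emissions_kg(ENERGY_PER_JOB_KWH, grid_intensity[best_slot]):.0f} kg CO2e")
    print(f"Running at {best_slot} instead of {worst_slot} saves {saved:.0f} kg CO2e")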
Finally, a real ethical problem of ChatGPT

Furthermore, the list of ethical issues above is not as exhaustive as it may seem. To eliminate biased, harmful content from the outputs, AI systems require not only large databases but also filtration to remove illegal and unethical content from the training data. For this labour, tech companies outsource to developing countries. According to an investigation by TIME, published after OpenAI’s arrangement with the outsourcing firm Sama collapsed, OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic. In the end, this raises another question about how training and deployment databases are generated. Synthesised data is clearly not an ideal solution; to make AI more ethical and compliant, we must be responsible and sustainable from the earliest stages of AI system development, such as design, data collection, and filtration.  

For more short news, connect with us on LinkedIn
