On 13 March 2023, Trudeau’s government released a companion document to the Artificial Intelligence and Data Act (AIDA).
What is the Artificial Intelligence and Data Act (AIDA)?
In June 2022, the Government of Canada introduced AIDA as part of Bill C-27, the Digital Charter Implementation Act. It proposed the framework of a new regulatory system designed to guide AI innovation in a positive direction and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses. It aligns with the approaches of the EU (the AI Act), the UK (its proposal for regulating AI) and the U.S. (the Blueprint for an AI Bill of Rights), aiming to protect Canadians and to help Canadian companies be recognised internationally as meeting robust technical and social standards.
How AIDA relates to other regulations
You may need to take the following regulations into account before using AI in your business, or as your business:
- Personal Information Protection and Electronic Documents Act, guiding how businesses use personal information;
- Consumer Privacy Protection Act (Bill C-27) proposed to govern marketplace activities and communications services;
- Consumer protection frameworks:
  - Canada Consumer Product Safety Act;
  - Food and Drugs Act;
  - Motor Vehicle Safety Act;
  - Bank Act;
- Human rights and criminal law applying to the use of AI:
  - Canadian Human Rights Act and provincial human rights laws;
  - Criminal Code.
AIDA’s approach
Building on existing Canadian consumer protection and human rights law, AIDA would define high-impact AI systems (the same approach as the EU AI Act) and ensure that they meet the expectations regarding safety and human rights to which Canadians are accustomed, while avoiding an undue burden on the Canadian AI ecosystem. As with the other legislative proposals mentioned above, the Act is intended to ensure that policy and enforcement evolve together with the technology. Furthermore, it would prohibit reckless and malicious uses of AI that cause serious harm to Canadians and their interests, by proposing new penal provisions.
Definition of the high-impact AI systems
The companion document sets out the following key factors for determining whether an AI system is high-impact:
- Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- The severity of potential harms;
- The scale of use;
- The nature of harms or adverse impacts that have already taken place;
- The extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
- Imbalances of economic or social circumstances, or the age of impacted persons; and
- The degree to which the risks are adequately regulated under another law.
The document also sets out the following principles to guide the obligations of those responsible for high-impact systems:
- Human Oversight & Monitoring
- Transparency
- Fairness and Equity
- Safety
- Accountability
- Validity & Robustness
This week, Microsoft laid off its entire AI ethics team while pouring investment into the release of the powerful new GPT-4, to compete with Google’s document-drafting tool known as the “magic wand”. I hold on to a small hope that these recent AI legislative proposals by the governments of the world’s major economies may change that mindset. I am not sure whether this is simply a symptom of the current economic crisis, or whether these governmental moves are not yet strong enough to catch the attention of Big Tech and the AI start-ups.
For more news about AI systems on AstraIA Gear, particularly what you should be aware of regarding ChatGPT, see: https://www.astraiagear.com/2023/01/22/chatgpt-what-legal-and-ethical-issues-do-you-need-to-know/