
Reflections on recent developments in artificial intelligence regulation in Canada and abroad


Written by Rebecca Parry, Nottingham Law School, Nottingham Trent University, UK


Artificial intelligence is seen as having transformative potential for Canada, as recently illustrated by the announcement of a large package of investment in this area. The latter half of 2023 and early 2024 also saw major steps being taken in many other countries towards the development of a regulatory structure for artificial intelligence, with proposed laws in China, a US Executive Order and the announcement of a UK pro-innovation strategy. The long-awaited European Union AI Act also became law. Global developments have included a statement from the G7 regarding privacy and data protection in generative AI, as well as a major summit held at Bletchley Park in the UK examining AI safety as a way to avert the potential dystopian impacts of AI. These high-level developments are significant, given the fast-moving nature of artificial intelligence, evidenced by the growth of generative AI tools and the potentially harmful impacts of deepfakes and facial recognition systems, although the existential threats that were the focus at Bletchley, and often in movies, may lie in the future.

The task of regulating artificial intelligence is challenging. There are obstacles to effective regulation: regulators tend to be a step behind the fast pace of developments, and they are often under-resourced compared to those developing the technologies. Tech giants can object that regulation will stifle innovation. Poor regulatory choices can indeed lead to suboptimal ways of doing things becoming an industry standard and hampering innovation. The international character of modern technologies, often dominated by tech giants with global influence, also presents risks of regulatory arbitrage, as those who would be regulated can move easily to lighter-touch jurisdictions, potentially leading to a “race to the bottom”. Outside of specific areas such as medical usage and automated vehicle technologies, there are few existing artificial intelligence-specific laws, and many emerging laws of significance aim at general regulation of AI. Where does this leave jurisdictions such as Canada? As already noted, there is potential for artificial intelligence to bring benefits, and Canada can hope to attract artificial intelligence businesses. Yet there is also a strong Canadian culture of high levels of rights for citizens, embedded in the constitution as well as in varied specific rights. These rights can lead to demands for harmful aspects of the technology to be controlled, while rival jurisdictions competing for AI business may not have the same high standards.

Canada has been among the first jurisdictions to propose the regulation of artificial intelligence, through the proposed Artificial Intelligence and Data Act (AIDA), introduced as part of the Digital Charter Implementation Act, 2022. The obstacles these proposals encountered demonstrate some of the difficulties of regulating AI. The proposal aimed to establish requirements for companies regarding the responsible development and implementation of AI, with greater regulation of higher-risk elements. There were criticisms of this draft law, for example that its focus on higher-risk AI lacked a clear concept of what constitutes high-risk AI. Controls on the use of AI by the state against citizens were also considered to be a neglected area. Subsequently, in September 2023, a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems was announced. For now, regulation proceeds on a soft law footing, but developments elsewhere can show how the law may develop further.

Among the major recent developments has been the approval of a new AI Act by the European Union institutions, although the text of this legislation still has to be finalised in different language versions, and it will not become effective until at least 2025. This Act encompasses the full spectrum of AI, employing a graduated approach based on risk. Within the “unacceptable” use category there is a heavy emphasis on measures that will prevent AI from being used as a tool of oppression by the state, for example social scoring, with limited and controlled law enforcement exceptions. There are also requirements for high-risk AI, such as where it is used in relation to employment, as well as rules regarding lower-risk applications such as chatbots. General-purpose AI applications that present systemic risk will also have guardrails, such as high-level cybersecurity, adversarial modelling and incident tracking.

The “EU effect”, demonstrated by the widespread influence of the GDPR, may yet see the new AI law influencing the regulation of AI elsewhere, including the UK. Freed by Brexit, the UK has proposed a more principles-based, pro-innovation approach that relies on sector regulators to address the artificial intelligence usage that poses the greatest risk. There are parallels with Singapore’s pro-innovation, guideline-based approach to AI. The UK may hope to be a beneficiary of regulatory arbitrage if the EU Act is found to be too restrictive for producers. However, given the remaining strong trade links with the EU, producers of products that use AI may in many areas still need to observe the EU’s stipulations. More recently there has also been reflection in the UK on whether aspects such as large language models require greater regulation.

Building on earlier voluntary commitments by tech firms regarding safety, security and trust, in October 2023 President Biden issued an Executive Order on Safe, Secure and Trustworthy AI. The Order sets standards for AI production as a way to control risk, and executive agencies have been required to produce standards, practices and regulations covering the whole AI life cycle. The Order can have wider impacts from which other countries will benefit. For example, in requiring tech firms to share safety test results, it challenges the autonomy that tech giants have previously enjoyed. Safe, secure and trustworthy AI is seen as a global challenge, and the US has committed to work with other countries towards it. Like the EU’s forthcoming Act, this Executive Order may raise standards outside the US.

China has also been active in enacting various regulations dealing with different aspects of AI, rather than attempting an umbrella measure like that of the EU. One measure controls harmful use of deep synthesis technologies, used in deepfakes and internet services. Another regulation implements principles for the management of generative AI to ensure, among other matters, that socialist values are not undermined and that there is no discrimination or intellectual property infringement. There are also measures regarding the use of recommendation algorithms in online services, as well as draft measures for the ethical review of research and development in AI. These measures are likely to apply to non-Chinese entities that provide AI services in China.

The focus of measures by states such as the US and China can have “spillover” effects, raising standards in AI safety more generally through regulation of higher-risk AI. These effects can benefit Canada, as a major trading partner of the US and China. Effective regulation requires a coordinated global approach if appropriate standards are to be achieved, and such an approach can lead to effective and safe technical practices. Where states may differ is in the appropriateness of AI applications in different contexts. There may be disagreement, for example, as to the extent to which AI can be used against citizens in law enforcement, social monitoring and other state-level activities. There is, however, likely to be greater consensus regarding the need to avoid dependencies on foundational technologies that could present systemic risks, and individual states should prioritise these areas.


The views and opinions expressed in the blogs and case reporter are the views of their authors, and do not represent the views of the Desautels Centre for Private Enterprise and the Law, the Faculty of Law, or the University of Manitoba. Academic Members of the University of Manitoba are entitled to academic freedom in the context of a respectful working and learning environment.