The Legal Regulation of Artificial Intelligence – Hamza Hannioui – DR.Omar Njoum
The Legal Regulation of Artificial Intelligence
Hamza Hannioui
Doctoral Student, Faculty of Legal and Political Sciences, Ibn Tofail University, Kenitra, Morocco
DR.Omar Njoum
Law Professor, Faculty of Legal and Political Sciences, Ibn Tofail University, Kenitra, Morocco
This research is published in the Journal of Law and International Business (مجلة القانون والأعمال الدولية), Issue No. 61, December 2025.
DOI registration link:
https://doi.org/10.63585/COPW7495

Abstract:
Artificial Intelligence (AI) is evolving rapidly, posing complex legal and ethical challenges that raise significant legal questions and call for well-established regulations. Research on the intersection of law and AI is central to understanding existing frameworks, ethical challenges, and emerging legal problems in this developing field. Key concerns include AI legislation, liability, privacy, and ethical issues. In this paper, we discuss the interaction between AI and law and its implications, as well as the future of national and international AI laws and possible solutions to these issues. Understanding these legal aspects is essential for policymakers, developers, and society as a whole.
Keywords: Artificial intelligence, legislation, legal personality, International Law, legal liability.
The Legal Regulation of Artificial Intelligence
Researcher: Hamza Hannioui
Doctoral researcher, Faculty of Legal and Political Sciences, Ibn Tofail University, Kenitra
Dr. Omar Njoum
Professor of Private Law, Faculty of Legal and Political Sciences, Ibn Tofail University, Kenitra
Abstract:
Artificial intelligence is a rapidly developing field, raising complex legal and ethical challenges that pose fundamental legal questions requiring the establishment of firm regulatory laws. Research on the intersection of law and artificial intelligence is central to understanding existing laws, ethical challenges, and emerging legal problems in this growing field. These problems include AI legislation, legal liability, privacy, and ethical issues. In this article, we examine the impact of the interaction between artificial intelligence and law and its implications, and we present the future of national and international AI legislation and possible solutions to these problems. It should be noted that understanding these legal dimensions has become a necessity for policymakers, developers, and society as a whole.
1. Introduction
Artificial Intelligence is one of the most important technological advancements known to humanity, offering numerous benefits and possibilities while enabling profound changes in many fields and societies around the world. As AI systems increasingly penetrate our daily lives, the need for robust and comprehensive legal frameworks to regulate their development and application has become more urgent than ever[1].
While frequently perceived as a new phenomenon, legal research on artificial intelligence stretches back to the 1980s and 1990s. What is different today is the widespread application of AI and the steep rise in corresponding academic research, driven by technological advancement and changing laws. The last ten years have witnessed significant progress, particularly in machine learning, marking a new “AI summer” powered by big data and strong computing. In response, countries and international institutions have increased their efforts at AI governance and at resolving the legal issues AI causes in international markets[2].
The governance of artificial intelligence is a challenge that requires a global strategy, as the rapid pace of globalization highlights that AI-related problems go beyond national borders and different legal traditions. All legal systems, whether civil law or common law, struggle to develop comprehensive rules regulating AI. While national and regional harmonization is important, there is an immediate need for international laws to adequately address the global implications of AI[3]. It therefore becomes more necessary than ever to address the issue of liability for damage caused by AI. It should be recalled in this regard that the general rules of law provide that any damage resulting from unlawful conduct must be compensated by the responsible party.
Based on all of this, several questions arise: Which national and international legislative instruments have established a legal framework for AI? What legal challenges does AI face? And who could assume legal responsibility for compensating damage caused by AI?
To answer these questions, a doctrinal legal research method is adopted, focusing on the analysis of case law related to AI regulation. In addition, a comparative approach is employed to examine regulatory models in the European Union and some developed countries.
2. Defining AI and Its Importance
To fully understand AI, we must first ask: what is meant by Artificial Intelligence? How has this technology evolved? And what is its importance for human society?
Despite ongoing improvements in computer processing power and memory capacity, there are still no programs that can show the same level of flexibility as humans across various fields or in tasks that demand substantial everyday knowledge. However, certain programs have reached the performance standards of human experts and professionals in specific tasks, making artificial intelligence, in this restricted sense, present in applications like medical diagnosis, search engines, voice or handwriting recognition, and chatbots[4].
The challenge of selecting an appropriate working definition is not exclusive to AI; it arises across all scientific disciplines and various other fields. However, in most instances, the definition is fairly evident, making the decision more of a declaration than one requiring extensive justification or argumentation[5]. As the field of AI continues to grow, the terms used and their associated definitions continue to evolve. Therefore, defining AI in a single, unified way is a challenge, and several definitions have emerged in this context.
Artificial Intelligence (AI), a term coined by Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”. Much research has shown that humans can program machines to behave in a clever way, such as playing chess; today, however, the emphasis is on machines that can learn, at least somewhat as human beings do[6].
Artificial intelligence (AI) refers to technology that allows computers and machines to replicate human abilities such as learning, understanding, problem-solving, decision-making, creativity, and autonomy[7]. It refers to the design and advancement of computer systems capable of executing tasks that typically require human intelligence. Through AI, science has made it easier to automate mechanical processes using a form of intelligence that does not rely on human intervention. Common examples of AI in daily life include self-driving cars, navigation systems, digital communication tools that operate via the Internet, and computer games[8].
Coursera Staff (2024) pointed out that “Artificial intelligence (AI) refers to the concept and creation of computer systems designed to carry out tasks that traditionally needed human intelligence, such as speech recognition, decision-making, and pattern detection. AI is a broad term that covers various technologies, including machine learning[9], deep learning[10], and natural language processing[11] (NLP)”[12]. Artificial intelligence has also been defined as the use or study of computer systems or machines that have some of the qualities of the human brain, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them[13].
As interest in AI continues to expand in academia, business, and public life, there is still no commonly accepted definition of its scope, or of intelligence in the broad sense. AI has been defined across an enormous variety of contexts, largely in terms of human cognition or intelligence more generally[14].
Overall, it is evident that artificial intelligence has been defined in various ways, and there is no consensus on a single, precise definition. However, most definitions suggest that artificial intelligence refers to computer systems that perform human tasks more efficiently or reliably. On this basis, we can describe artificial intelligence as a computerized system that imitates human intelligence, potentially equaling it in certain situations, and often handling tasks that are difficult for humans because they demand clarity, accuracy, and speed.
3. The Legal Framework of AI
3.1 International Regulations of AI
Artificial Intelligence appears to be one of the greatest accomplishments of scientists in modern times. Consequently, this advancement and its widespread adoption have sparked debates regarding the legal regulation of AI. The goal of legally regulating Artificial Intelligence is to develop policies, regulations, or legal measures that establish clear guidelines for the operation, use, and protection of AI systems[15].
In April 2021, the European Commission[16] introduced the first European legislation on Artificial Intelligence, creating a risk-based classification system for AI. AI systems, which are applicable in various fields, are assessed and classified according to the potential risks they pose to users, and the different risk levels determine the extent of compliance requirements for these systems[17]. Thus, the aim of this regulation is to enhance the functioning of the internal market and foster the adoption of human-centric and reliable artificial intelligence (AI).
The regulation sets out a series of new rules to achieve these objectives. According to Article 1 of the AI Act, it lays down:[18]
(a) standardized rules for the market placement, operation, and use of AI systems within the Union;
(b) specified requirements for high-risk AI systems and obligations for their operators;
(c) unified transparency rules for certain AI systems;
(d) standardized rules for the market placement of general-purpose AI models;
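The risk-based logic behind points (a) to (d) can be illustrated with a toy classifier. This is a minimal sketch under stated assumptions: the use-case labels and obligation lists below are illustrative inventions for exposition, not categories or duties taken from the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements and operator obligations
    LIMITED = "limited"            # transparency rules (e.g. chatbots)
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping from use-case labels to tiers; the real Act
# defines these categories in its annexes, not in a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def compliance_obligations(use_case: str) -> list:
    """Return illustrative obligations for a given (hypothetical) use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return {
        RiskTier.UNACCEPTABLE: ["prohibited from the Union market"],
        RiskTier.HIGH: ["risk management system", "data governance",
                        "human oversight", "conformity assessment"],
        RiskTier.LIMITED: ["transparency disclosure to users"],
        RiskTier.MINIMAL: [],
    }[tier]

print(compliance_obligations("recruitment_screening"))
```

The key design point the Act embodies, and the sketch mirrors, is that obligations attach to the risk tier of the use case rather than to the underlying technology.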
In addition to the essential demand for global governance, the Council of Europe has made an important move by launching the world’s first legally enforceable international treaty on AI, known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law[19]. This Convention[20], adopted on May 17, 2024, and opened for signature on September 5, 2024, marks a significant moment in the global initiative to set unified standards for AI governance. It aims to establish a coordinated approach among its signatories while acknowledging the varied legal systems and regulatory practices across different regions[21].
The Convention adopts nine key principles to support a framework of binding and non-binding legal instruments: human dignity and human freedom, harm prevention and non-discrimination, gender equality, fairness and diversity, transparency and explainability of AI systems, data protection and privacy rights, accountability and responsibility, democracy, and the rule of law[22]. The legal framework establishes rights and obligations to secure AI compliance with human rights, democracy, and the rule of law. To guarantee the effective implementation of the treaty, an enforcement mechanism is in place, namely:
A) Legal Responsibility: Countries that sign the treaty must implement legislative and administrative actions to ensure that AI systems comply with the treaty’s principles, such as human rights and accountability in AI deployment.
B) Supervision and Control: The treaty sets up mechanisms for monitoring adherence to AI standards.
C) Global Collaboration: The treaty encourages cooperation among signatory nations to align AI standards, exchange best practices, and address cross-border AI challenges, recognizing the worldwide impact of AI technologies.
D) Flexibility: The framework is designed to be technology-neutral, allowing it to adapt to the rapid development of AI, ensuring that standards remain relevant and enforceable.
E) Exceptions in the Treaty: While the treaty covers all AI systems, it excludes those used in national security or defense, though these activities must still comply with international laws and democratic values[23].
3.2 National Regulations of AI
Machine learning has developed considerably over the last decade, with AI having transitioned in the most recent five years from a specialized research field to a revolutionary technology with generalizable applications. This change has pushed governance, safety, and risk issues to the center of industry debates. As a result, industries and governments have started proposing various regulations[24]. Notably, Stanford University documented a steep rise in nations with AI laws, from 25 in 2022 to 127 in 2023[25].
3.2.1 Some Comparative Experiences in AI Regulation
3.2.1.1 South Korean Model
As already pointed out, the legal framework surrounding Artificial Intelligence (AI) remained ambiguous until recent years. In South Korea, as a comparative example, legal initiatives were introduced with the creation of the “Intelligent Robots Development and Distribution Promotion Act” in 2008. This legislation aimed to promote policies for the durable development of intelligent robots, ensuring their distribution and laying the foundation for future advances in the field[26]. Recently, there has been a global push for clearer AI regulations, with South Korea playing an active role in this trend. The country has made significant progress, from the announcement of the National AI Strategy in December 2019 to the unveiling of the Digital Rights Charter in May 2023[27].
Moreover, South Korea has taken major steps forward by enacting the “Basic Act on the Development of Artificial Intelligence and the Establishment of Trust,” also known as the “AI Basic Act.” This law makes South Korea the second jurisdiction, following the European Union, to implement a comprehensive legal framework for AI regulation. Scheduled to take effect in January 2026 after cabinet approval, the Act aims to enhance South Korea’s AI competitiveness while promoting safe and responsible development[28].
The AI Basic Act is a major step for South Korea in AI regulation, risk management, and industry growth. Administered by the Ministry of Science and ICT, it establishes a national AI cooperation framework. This framework will guide policies through institutions like the National AI Committee and the AI Safety Research Institute. “As global AI competition intensifies, this Act is a key milestone for Korea to become a top AI power,” said Yoo Sang-Im, Minister of Science and ICT[29].
3.2.1.2 United Kingdom
Also, In March 2023, the UK government released its AI Regulation White Paper[30], outlining a pro-innovation strategy for AI governance. This approach is built on these principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress[31]. In November 2023, the UK reaffirmed its stance on AI by hosting the International AI Safety Summit, the first government-led global conference on the topic. The widespread participation of international representatives highlighted the growing focus on AI governance. This summit brought AI safety to the forefront of discussions, setting the stage for future regulations that would dominate the agenda in the coming year[32].
In January 2025, the Department of Science, Innovation, and Technology released the AI Opportunities Action Plan, outlining 50 recommendations to expand the UK’s AI sector, promote adoption across industries, and enhance products and services. Meanwhile, the Labour Party’s interventionist AI regulatory approach has faced challenges due to global developments, including Donald Trump’s election and the AI Action Summit in Paris in February 2025[33].
At the summit, the U.S. and the UK declined to sign a declaration promoting “inclusive and sustainable” AI, which was endorsed by 60 other countries and emphasized ethical and responsible AI development. The UK’s refusal was attributed to concerns over national security and a perceived lack of clarity in global governance frameworks. This decision has drawn criticism from various groups, who argue that it could undermine the UK’s position as a leader in ethical AI innovation[34].
3.2.1.3 The United States: Federal-Level AI Regulation
Like the United Kingdom, the US has yet to adopt comprehensive AI regulation. Thus, as artificial intelligence (AI) continues to shape various sectors of society, policymakers and regulators are working towards establishing effective governance for this rapidly emerging technology. While the United States still lacks federal AI regulation, significant progress has been made at both the federal and state levels, particularly with the administration transition from Biden to Trump in January 2025[35].
Efforts to legislate artificial intelligence (AI) have been underway in the United States. In recent Congressional sessions, early federal AI bills have been introduced, either individually or as provisions within more comprehensive bills. Particularly pertinent is the National Artificial Intelligence Initiative Act of 2020, which created the American AI Initiative and established guidelines for AI development, research, and testing within federal science agencies[36].
According to the National Artificial Intelligence Initiative Act[37] of 2020 (H.R. 6216), the legislation will:
• Formalize interagency coordination and strategic planning efforts in AI research, development, standards, and education through an Interagency Coordination Committee and a coordination office managed by the Office of Science and Technology Policy (OSTP).
• Create an advisory committee to better inform the Coordination Committee’s strategic plan.
• Follow the state of the science around artificial intelligence.
• Create a network of AI institutes.
• Support basic AI measurement research and standards development at the National Institute of Standards and Technology (NIST).
• Support research at the National Science Foundation (NSF) across a wide variety of AI-related research areas.
• Support education and workforce development in AI and related fields.
• Support AI research and development efforts at the Department of Energy (DOE).
• Require studies to better understand workforce impacts and opportunities created by AI.
United States law on the regulation of AI has taken several important steps that reflect ongoing efforts to manage its complexity and ensure proper development. Several U.S. laws, including the AI in Government Act (H.R. 2575) and the Advancing American AI Act (S.1353), have mandated agencies to manage AI programs. During the 117th Congress, 75 AI-related bills were introduced, and six passed. The 118th Congress introduced 40 more, but none passed. In total, nine AI-related bills have passed since 2015[38].
As of November 2023, 33 bills remained pending. In October 2022, the White House released the Blueprint for an AI Bill of Rights, and in January 2023 the National Institute of Standards and Technology published its AI Risk Management Framework. In the summer of 2023, two detailed AI policy frameworks were presented for bipartisan support. At the state level, between 2016 and 2022, 14 states passed AI-related bills, with Maryland leading the way[39].
On February 11, 2019, President Donald J. Trump signed an executive order titled “Maintaining American Leadership in Artificial Intelligence.” This order aimed to promote the role of the United States in AI research, development, and utilization through a coordinated federal government strategy[40]. Later, on October 30, 2023, President Biden signed an Executive Order focused on the responsible, secure, and safe development and use of Artificial Intelligence (AI), outlining essential principles and priorities to guide AI in America[41]. After his January 2025 inauguration, President Donald Trump immediately revoked Biden’s order and issued Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which reversed the AI governance policies of the previous administration, undoing the regulatory environment of President Biden’s administration. The order was meant to stimulate AI growth by removing regulatory barriers and giving businesses greater freedom to innovate without strict federal control[42].
3.2.1.4 State-Level AI Regulation in the USA
Due to the decentralized nature of U.S. government, a great deal of applied AI legislation is being formulated at the state level. Colorado, Illinois, and California are leading the charge in creating the legal environment for AI compliance in business[43].
On May 17, 2024, Colorado became the first U.S. state to pass a comprehensive AI law, the Colorado AI Act. SB24-205 is an AI consumer protection bill: it requires both users and creators of high-risk AI systems to exercise reasonable care to avoid algorithmic bias in those systems[44]. In California, the state introduced AB 2013 on January 31, 2024. By January 1, 2026, this law requires a developer of an artificial intelligence system or service provided for use in California, whether or not the terms of that use include compensation, to post on the developer’s website documentation regarding the data used to train the artificial intelligence system or service[45].
Also, on January 1, 2025, a set of landmark AI laws took effect in California. With 18 new regulations, the state is making significant progress in AI regulation, setting standards for deepfake technology, transparency, data privacy, and the use of AI in healthcare. These laws demonstrate California’s commitment to being a pioneer in AI regulation[46].
Among such groundbreaking laws is SB 926, which makes it illegal to create or distribute sexually explicit deepfake images with the purpose of inflicting emotional distress. Additionally, SB 981 boosts privacy protection by requiring social media sites to implement measures enabling Californian users to easily report and remove non-consensual sexually explicit content[47]. Furthermore, the state of California has outlawed AI-generated child abuse material and deepfakes, with Governor Gavin Newsom signing legislation criminalizing the creation or distribution of such content, even when AI-generated. These bills close a loophole in existing child pornography laws and extend criminal liability to abusive AI-generated content. The new laws also aim to combat revenge porn by criminalizing the creation and dissemination of explicit AI-generated deepfakes of adults without their consent, and they require social media platforms to allow users to report and remove such content[48].
3.2.2 AI Legal Regulation in Morocco
It goes without saying that Morocco has the resources and regulatory mechanisms necessary to position itself as a leader in artificial intelligence. The most visible initiatives, including the Moroccan Center for AI[49] and the “Al-Khawarizmi”[50] program, as well as the efforts of educational institutions, demonstrate the country’s ambition to create AI technology[51]. Morocco has also taken the lead in AI governance, becoming a pioneer in adopting UNESCO’s ethical principles on artificial intelligence. This was marked by the launch of a Steering Committee on AI Ethics and the report[52] “Morocco’s Readiness for Artificial Intelligence” in May 2024[53].
In line with the CESE’s 2021 recommendation in its opinion “Towards a responsible and inclusive digital transformation”, which called for prioritizing artificial intelligence in Morocco’s digital agenda, a new opinion emphasizes the various factors shaping the country’s AI governance system. It examines AI’s integration into daily life and business, as well as its development prospects. The opinion, the first of its kind in the Arab world and Africa, was adopted by a unanimous vote at the 159th General Assembly on June 27, 2024[54].
Moreover, cybersecurity is one of the most pressing issues for Morocco in terms of artificial intelligence, as it is vital for both the individual and the state. Due to the legislative vacuum in the fight against cybercrime and computer-related offences, and the resulting obstacles and challenges for the Moroccan judiciary in confronting the widespread and complex nature of cybercrime, which allowed perpetrators to escape prosecution based on the principle of the legality of penalties, the Moroccan legislator has been forced to enact new laws[55]. Law No. 05.20 of 2020 provides a legal framework to fight cybercrime through the protection of essential information systems, networks, and software from international cyber threats. It also reinforces international judicial cooperation, as seen in Articles 714 and 715 of the Code of Criminal Procedure, which enable judges to issue letters rogatory for execution abroad pursuant to legal procedures[56].
In the fight against cyberterrorism, several laws have been enacted, including Law No. 03.07[57], supplementing the criminal code with regard to offences relating to automated data processing systems, and Law No. 03.03[58] on terrorist acts. These laws encompass individual and collective activities that threaten public order or encourage violence and intimidation[59]. In addition, Law No. 17.97[60] on the protection of industrial property was amended with a view to adhering to new international standards aimed at effectively combating issues arising from technological advances. Other relevant texts include Law No. 53.05[61] on electronic data exchange (as amended by Law 43.20 on trust services), Law No. 24.96[62] on postal services and telecommunications (covering interference crimes), and Law No. 31.08[63] on consumer protection, which contains online consumer rights. Collectively, they constitute the four principal pillars of Morocco’s cyber legislation: electronic transactions, cybercrime, confidentiality and data protection, and consumer protection. Nevertheless, some laws, notably Law No. 09.08[64] on the protection of personal data, need to be revised to cover AI-created data and to be harmonized with international norms.
4. Legal Challenges of AI Implementation
This section addresses the legal issues of artificial intelligence (AI), including ongoing debates, dominant approaches to solving them, and prevailing gaps and challenges. In this regard, several AI-related issues arise, including algorithmic transparency, accountability and liability, data privacy and security, and intellectual property rights.
4.1 Algorithmic Transparency
Artificial Intelligence (AI) is revolutionizing industries, but it faces challenges such as the “black box[65]” problem and the complexity of its decision-making processes. With complex algorithms that even experts struggle to interpret, its growing role in critical decisions raises ethical and regulatory questions. Despite efforts to improve transparency, many AI systems remain opaque. Solving this issue is crucial to prevent mistrust and regulatory hurdles[66].
Algorithmic opacity is extremely problematic because it can have disastrous consequences. For example, people have been denied jobs, refused loans, placed on no-fly lists, or had their benefits cut off, often without knowing why, other than that these decisions were made by computers. This lack of knowledge keeps them in the dark about what is controlling their lives[67]. Therefore, legal and regulatory frameworks are essential to foster transparency and accountability within AI systems. Data protection laws enhance transparency by mandating companies to reveal details about their data processing activities and allowing individuals to access and manage their personal data[68].
In this regard, George Benneh Mensah stated that ‘case laws have also emphasized transparency and accountability within AI systems. In a landmark ruling by New York City’s Commission on Human Rights against an employment agency utilizing an algorithmic hiring tool, it was determined that if an employer uses an algorithmic tool during hiring processes, they must disclose this information to applicants upon request, along with an explanation of how the tool works[69]’.
Similarly, an EU Parliament STOA (Science and Technology Options Assessment) study analyzed policy strategies to increase the transparency and accountability of algorithms, considering social, technical, and regulatory issues. The main action items are raising awareness, public-sector responsibility, regulatory control, and international governance. Proposed solutions include algorithmic impact assessments, transparency standards, counterfactual explanations, and model-independent interpretability methods[70] such as LIME[71].
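The intuition behind model-independent interpretability methods such as LIME can be illustrated with a minimal perturbation-and-surrogate sketch. This is a simplified stand-in for the idea, not the actual LIME algorithm, and every name in it is illustrative: perturb an input, query the black-box model, and fit a local linear surrogate whose coefficients approximate each feature's local influence.

```python
import numpy as np

def perturbation_importance(predict, x, n_samples=500, scale=0.1, seed=0):
    """Crude local feature-importance sketch in the spirit of LIME:
    perturb the input, query the opaque model, and fit a linear
    surrogate whose weights indicate each feature's local influence."""
    rng = np.random.default_rng(seed)
    # Sample points in a small neighbourhood around x.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    # Query the black-box model at each perturbed point.
    y = np.array([predict(row) for row in X])
    # Fit a least-squares linear surrogate: y ≈ X @ w + b.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # per-feature weights of the local surrogate

# Toy black box: only the first feature matters to the decision.
black_box = lambda v: 3.0 * v[0]
weights = perturbation_importance(black_box, np.array([1.0, 2.0]))
print(weights)  # first weight is close to 3, second close to 0
```

Even this toy version shows why such methods matter legally: the surrogate's weights give an affected individual a human-readable account of which inputs drove a specific automated decision, without requiring access to the model's internals.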
4.2 Accountability and Liability
It is important to examine the need for accountability and liability mechanisms in AI systems and to evaluate the current challenges posed by the lack of regulations or legal frameworks. This also suggests the creation of clear guidelines, standards, and responsibilities for developers, organizations, and users.
Accountability is also the basis for the ethical development and use of AI; beyond transparency, it assigns responsibility and enables remedies when harm is caused. It is, however, challenging to impose accountability because AI is extremely complicated[72]. Traditional product liability, which holds manufacturers responsible for defects or insufficient warnings, struggles to apply to AI systems. This is because machine learning allows AI to function independently and develop over time through learning, which makes it hard to link damage back to precise design errors or predict risks beforehand, making causation and legal accountability issues more complex[73].
Furthermore, a major difficulty lies in defining what should be considered a “defect” in an AI system. For instance, should biased results be classified as defects? And if an AI functions according to its programming but unintentionally causes harm due to unintended consequences, is this a design flaw[74]? The use of artificial intelligence in various applications raises a key challenge: determining who can be held liable in the event of a dispute. Liability becomes complex because it involves multiple parties, such as data providers, developers, and users, and AI systems can make decisions autonomously without human supervision[75]. Autonomous vehicle systems, for instance, are a prime example of complex technologies that could significantly transform their industry in the future. Yet the lack of transparency surrounding these systems has led to lengthy legal disputes[76], primarily due to unclear liability issues[77].
Moreover, with the advancement of AI-based robots, the harms caused by their actions may become more unpredictable. Machine learning poses a challenge to traditional tort law notions of foreseeability, as these systems can lead to a variety of unforeseeable harms. This highlights the need to redefine the foreseeability standard to address the technical complexities of AI[78].
Addressing the issue of liability for harm caused by artificial intelligence (AI) has produced diverse and sometimes conflicting opinions. One view emphasizes the primary responsibility of developers, who design the models and algorithms governing AI activity. Developers are tasked with making AI systems efficient, ethical, and secure, which entails thorough testing, anticipating potential abuse, and instituting safeguards[79]. However, the inherent unpredictability and sophistication of AI systems make it difficult for developers to foresee all side effects and possible applications. Therefore, the extent to which developers can individually be held responsible is limited[80]. Developers can use strategies such as liability insurance and contract terms stipulating limited liability or passing liability on to customers where appropriate[81].
Other perspectives place responsibility for AI-related harm on users, who engage with AI in diverse ways, from casual consumers to professionals. User accountability involves both personal responsibility and broader systemic considerations. Users should understand AI’s limitations and biases, think critically about AI-driven decisions, and have ways to report issues. In professional settings, users, developers, and organizations share the duty to ensure AI is used ethically and effectively[82].
In the medical field, too, liability implications are becoming increasingly complex, with sophisticated systems capable of handling complex medical tasks and blurring the line between human and AI decision-making. As a result, legal scholars have identified several situations in which a doctor’s use of an autonomous AI system could expose them to malpractice liability. These include the case where the AI generates a correct recommendation that meets the standard of care, which the doctor ignores, harming the patient. Another is the case where the AI generates an incorrect, non-compliant recommendation that the doctor follows, harming the patient[83].
In 2011, IBM’s Watson showcased the potential of AI in cancer diagnosis and treatment by attempting to tailor treatment to patients according to their genetic profiles and clinical histories[84]. However, its failure to deliver specific treatment recommendations exposed significant limitations and raised concerns about the responsibility of both its developers and the healthcare institutions that implemented it[85].
As AI systems become more autonomous, some argue that it may become necessary to consider assigning responsibility to the AI itself. This is a radical idea that challenges traditional notions of liability, which are typically rooted in human agency and intention[86]. For such a regime to be feasible, AI systems would have to be able to hold property, either directly, like corporations, or indirectly, through agents acting in their interests. Without the possibility for a plaintiff to enforce a judgment and receive compensation—usually monetary—for any harm suffered, the justification for extending legal personality to AI systems would be largely undermined[87]. Supporting this view, McDonald (2023) argued that “granting legal personhood to autonomous AI systems could result in legal simplification which would make it easier for injured parties to claim compensation than it would be on the liability model”[88].
On the other hand, artificial intelligence systems cannot bear legal rights and responsibilities as humans do, since their behavior is based on programmed rules rather than moral judgment, and using consciousness or self-awareness as a criterion for allocating legal capacity is impractical and intangible[89]. Attributing legal personality to AI systems is also not regarded as necessary for the purposes of liability, since any damage they cause can be attributed to natural persons or existing legal persons. Like children, mentally disabled persons, or animals—who lack criminal intent and are considered innocent agents—AI systems are not expected to bear legal responsibility; liability could instead fall on the person in control of them[90].
In light of the above, assigning exclusive legal liability for damages caused by artificial intelligence to a single party does not reflect the complexity involved in the development and deployment of this technology. AI systems are not stand-alone products; they are the result of a series of interrelated steps, from coding and design to training, marketing, distribution, and use. Assigning responsibility to a single actor, such as the end user or the owner, therefore neglects the roles of other significant actors involved in AI development.
4.3 Data privacy and security
With the dawn of the digital age, new sources of threats have emerged alongside the increasing availability of information, raising the risks of disclosure. This has driven the development of advanced technologies such as firewalls and encryption to protect sensitive information[91].
More recently, artificial intelligence has completely transformed the situation by providing the ability to manage large amounts of data, identify patterns, and detect threats in real time, thereby improving data security and helping to prevent breaches. It is also becoming increasingly clear that effective AI governance relies on strict data governance[92]. The availability of information on the internet raises concerns about the privacy of personal data: such information must be used transparently and only for its intended purpose. Applying these rules is difficult, however, given the vast capabilities of artificial intelligence, and data protection agencies must therefore adopt intelligent and adaptive measures to counter AI attacks without suppressing innovation. A balance must be struck between technological progress and the protection of human rights[93].
AI can support data privacy by preventing unauthorized access and information leaks and by aiding compliance with the law, including the European GDPR[94]. It uses encryption and anonymization techniques to protect sensitive data. If an AI system processes data within the EU, or if a data controller based in the EU processes data, the GDPR applies regardless of the volume of data[95]. However, this GDPR protection applies only to natural persons. Thus, according to the European Parliament and Council of the European Union (2016), “The protection afforded by this Regulation should apply to natural persons, whatever their nationality or place of residence, in relation to the processing of their personal data”[96]. The European AI Act, which has been widely acclaimed as the world’s first comprehensive regulation of artificial intelligence, explicitly prohibits certain AI practices and imposes strict standards of governance, risk control, and disclosure for other uses. While it does not explicitly address privacy concerns related to AI, it places limits on data use through strict specifications, including strong data stewardship practices and ensuring the quality of training, validation, and testing sets[97].
As AI capabilities and threats grow, the legal framework for privacy and data security is constantly evolving. Governments are likely to introduce new laws to ensure ethical usage, to which organizations will have to adapt, while individuals will need to stay conscious of how their data is being processed[98].
In Morocco, artificial intelligence raises fundamental issues of data privacy and cybersecurity, which have become serious concerns for institutions and individuals alike. With the rise in cybercrime, data breaches, and unauthorized handling of personal data, there is a need for comprehensive legal protection against such risks[99]. A survey by the cybersecurity firm Kaspersky highlighted a sharp increase in cyber threats across the African digital landscape, noting that the Kingdom of Morocco is among the nations most exposed[100] to such threats[101].
The Moroccan legislator, as evidenced by the laws adopted, has not yet established a specific legal framework to regulate artificial intelligence technologies. Consequently, no specific legal provisions govern this new phenomenon or clarify its legal aspects; practice instead relies on general legal principles of civil and criminal law. This approach aligns with that of most other legislators, who have not yet taken concrete steps towards establishing robust regulations for artificial intelligence.
4.4 Intellectual Property Rights
Intellect is a fundamental element of legal personality and a basic right that should be safeguarded. Since AI technologies are products of specialist thinking and formalized algorithms expressed in innovative works, they qualify for protection as literary works under traditional copyright law regardless of their digital form or delivery method[102]. Intellectual property rights are, moreover, incorporated in several key international human rights instruments, including Article 27 of the Universal Declaration of Human Rights (UDHR)[103], Article 15 of the International Covenant on Economic, Social and Cultural Rights (ICESCR)[104], Article 19 of the International Covenant on Civil and Political Rights (ICCPR)[105], and the 1993 Vienna Declaration and Programme of Action (VDPA)[106]. They are considered to have a human rights dimension and have been integrated into various policy settings[107] by WIPO[108].
Intellectual property encompasses products of the mind, such as inventions, works of art, and trademarks, and reflects individual creativity and effort. It must be protected not only out of respect for the rights of innovators but also to preserve its economic value. Legal systems therefore provide patents, copyrights, and trademarks to prevent abuse and allow innovators to benefit financially from their creations[109]. In the United Kingdom, for instance, the law protects computer-generated literary, dramatic, musical, or artistic works. However, there is no specific legal principle on whether such computer-generated works should be patentable. Generally, rights in an AI-generated work belong to the AI’s creator unless the work was commissioned or prepared in the course of employment, in which case the rights belong to the employer or the commissioner of the work[110].
Illustrating this, Dr. Stephen Thaler applied for patents in many countries, identifying DABUS[111] as the sole inventor. Most of these applications were denied on the ground that patent law clearly requires an inventor to be a natural person[112]. Absent any change to existing laws, it is highly probable that legal systems worldwide will continue to deny non-human entities the status of inventor in patent applications, as the Thaler case shows[113]. Artificial intelligence has thus had a major impact on intellectual property, presenting significant legal challenges. Because AI lacks legal personality, it cannot own IP rights; inventions produced by AI should therefore logically be owned by the user who feeds the data and runs the system.
5. Legal Liability of Artificial Intelligence in Morocco
Individuals and legal entities are ordinarily liable for their conduct both civilly and criminally. Yet as the global trend towards treating artificial intelligence as a legal entity grows stronger, one primary question arises: who, legally, should be responsible for AI acts?
In Morocco, the current lack of direct legal provisions remains an obstacle to assigning such responsibility. This deficiency should not, however, preclude legal reflection. Rather, it invites a close examination of existing national and comparative legal frameworks to evaluate whether they can be adapted to AI accountability or whether they fall short in addressing its unique legal issues[114]. Civil liability for artificial intelligence must be addressed by examining whether AI can be held liable for its actions in the same way as natural persons, which involves examining the conditions for both contractual and tortious liability.
Under Moroccan civil law, tort liability is established by proving the existence of a fault, the occurrence of damage, and a direct causal relationship between them. Liability can therefore be attributed to the programmer, the user, or the developing company if their conduct causes harm, regardless of whether the fault was committed intentionally or negligently, under Articles 77[115] and 78[116] of the Moroccan Code of Obligations and Contracts[117]. Applying tort liability to artificial intelligence systems accordingly requires examining whether they could be subjected to the rules of custodial liability, or whether liability should be imposed on a principal for the acts of his subordinate.
As for tortious liability of the custodian of a thing, the Moroccan legislator defined the custodian in Article 88[118] of the Code of Obligations and Contracts as the person who has the authority to control and manage the object. For custodial liability to arise, the person must have custody over a non-living physical object, the object must have caused harm to another person, and that harm must result from the object’s active conduct[119].
Where these conditions are met, civil liability arising from the use of artificial intelligence is engaged. For instance, where a robot used in surgery injures a patient, the person with actual control over it would be liable; the liable entity may thus be the operating surgeon, the hospital owner, the manufacturing company, the programmer, or any person with actual control of the AI system[120]. Custody, however, rests with the owner or operator exercising effective control over the asset. Since autonomous artificial intelligence operates independently, through adaptive learning rather than direct human direction, the owner or operator never has sufficient control to be faulted for harm caused by the AI’s supposed errors. It is therefore tenable to regard such AI as property without a specified custodian[121].
Overall, artificial intelligence systems are autonomous and not controlled by developers, owners, or operators. It is this autonomy, coupled with their capacity for self-learning, inscrutable decision-making, and unpredictability, that makes the imposition of conventional custodial liability difficult. Robots are not accorded legal capacity and personhood, and therefore cannot be considered subordinates, with responsibility for damage they inflict falling on the owner rather than merely the custodian.
As for the possibility of invoking product liability for errors in artificial intelligence technology, risk-based liability for a defective product is a new type of liability with a new legal regime. It has attracted the attention of jurisprudence and legislation, and the Moroccan legislator[122] has regulated it under Law No. 24.09[123].
The Moroccan legislature, following the approach of progressive legal systems influenced by French Law No. 98/389 of May 19, 1998—itself derived from EU Directive No. 85/374 of July 25, 1985—assigns primary responsibility to the manufacturer under Law No. 24.09. This law holds producers accountable for any damage caused by defective products; robot manufacturers are consequently liable for any harm resulting from defects in their technologies[124]. Manufacturers of AI systems are thus liable if they ignore legal safety and security standards, such as developing measures against misuse and testing systems for safety, efficiency, and the absence of defects. For example, cars are subjected to simulations such as crashes and abrupt braking to test their reliability and their potential to injure users and others[125].
According to Articles 3 and 106.2 of Law 24.09, a product is anything placed on the market in a professional or commercial capacity, with or without a fee. Transposed to artificial intelligence, its material facet can be equated to human intellectual creation for copyright purposes, but its immaterial aspect creates legal ambiguity. For this reason, AI can be legally regarded as a product only insofar as it takes a tangible, physical form[126].
A flaw in the physical elements of artificial intelligence does not exclude the application of product liability: if the required safety and security standards are not met, the manufacturer must compensate the injured party, provided the harm caused by the flaw can be proved[127]. Where, by contrast, artificial intelligence takes an intangible form, operating through programs or algorithms, product liability can be hard to apply. Even if a programmer anticipates all possible scenarios, AI systems, especially those with deep learning capacity that operate autonomously by making their own choices, make flaws difficult to pinpoint. A flaw is hard to detect, and even if one was present at the time of release, it is hard to distinguish the damage directly attributable to the AI[128].
Overall, artificial intelligence technology is inherently hard to control, because the risk stems from its independent functioning rather than from production defects. Such systems can pose public safety risks even where harm occurs without any legally cognizable flaw in terms of safety standards. Since AI operates on its own through complex data analysis, it is difficult to identify the source of harm—whether a defect or something else. This is compounded by the absence in Morocco, as in most comparative systems, of specific legislation governing AI-related liability.
Conclusion
Artificial intelligence remains a concept defined in numerous ways, without a universally agreed-upon definition. However, most interpretations recognize AI as computer systems capable of performing human-like tasks with greater efficiency, accuracy, or speed. As such, AI can be understood as a set of technologies that simulate human intelligence, often exceeding it in narrowly defined domains.
The regulation of AI raises complex legal challenges, not least relating to liability, privacy, and intellectual property rights. With regard to liability, for example, assigning responsibility solely to one actor—such as the end user or owner—oversimplifies the layered nature of AI systems, which result from collaborative processes involving developers, designers, trainers, and deployers. This complexity makes traditional fault-based models of liability inadequate for addressing harm caused by autonomous systems.
At the international level, several initiatives and soft law instruments, including international conventions and ethical frameworks, aim to guide the global governance of AI. While these efforts mark important progress, they often lack binding force and suffer from uneven adoption across countries, highlighting the need for more harmonized and enforceable international legal standards. Comparative analysis of models from countries such as the United States, the United Kingdom, and South Korea illustrates diverse approaches—ranging from sector-specific guidelines to broader national strategies—that could serve as valuable references for other countries’ future regulations.
In the Moroccan context, there is currently no dedicated legal framework governing AI technologies. Instead, legal practice relies on general principles of civil and criminal law. This mirrors the situation in many jurisdictions that have yet to establish specific, enforceable AI regulations.
Moreover, artificial intelligence poses significant challenges to intellectual property law. Given that AI lacks legal personality, it cannot hold IP rights, raising questions about the ownership of AI-generated works or inventions. Logically, these rights should fall to the individuals or entities that provide input data and operate the AI systems.
Finally, the autonomous and unpredictable nature of AI systems introduces new legal risks, especially in cases where harm occurs without any apparent breach of safety standards. This makes the task of identifying liability particularly difficult and emphasizes the urgent need for a clear, adaptive legal framework that can address the specific characteristics of AI technologies. In the Moroccan context, the lack of targeted AI legislation leaves a significant gap in addressing these legal uncertainties. Developing a comprehensive and forward-looking regulatory model is essential to ensure responsible innovation while safeguarding legal rights and public safety.
- Al-Marnissi, T, ‘الذكاء الاصطناعي بالمغرب 2025: استراتيجيات طموحة نحو التحول الرقمي والابتكار [Artificial intelligence in Morocco 2025: Ambitious strategies towards digital transformation and innovation]’ (Medina FM News, 25 February 2025) https://news.medinafm.ma/الذكاء-الاصطناعي-بالمغرب/ accessed 5 September 2025.
- Al-Marnissi, T, ‘Morocco and the challenges of artificial intelligence: An analytical study of legal and social issues [المغرب وتحديات الذكاء الاصطناعي: دراسة تحليلية في القضايا القانونية والاجتماعية]’ (2025) 3(1) Ibn Khaldoun Journal for Studies and Research 295 https://www.ibnkhaldoun-journal.com/article/view/867 accessed 5 September 2025.
- Ait Mouh, H, ‘القانون المغربي والذكاء الاصطناعي [Moroccan law and artificial intelligence]’ (MarocDroit, 13 December 2023) https://www.marocdroit.com accessed 5 September 2025
- Bentaleb, Y, ‘Fighting cybercrime in Morocco: Achievements and some challenges’ (Moroccan Centre for Polytechnic Research and Innovation, InCyber, 1 September 2017) https://incyber.org/en/article/fighting-cybercrime-in-morocco-achievements-and-some-challenges-by-prof-youssef-bentaleb-moroccan-centre-for-polytechnic-research-and-innovation/ accessed 17 June 2025.
- Badr, MR, Civil liability arising from the use of artificial intelligence techniques in Jordanian legislation (Master’s thesis, Middle East University, 2022) https://www.meu.edu.jo/libraryTheses accessed 5 September 2025.
- Boufertiḥ, T, ‘[Report: Morocco tops African countries targeted by cyberattacks]’ (Hespress, 16 April 2025) https://www.hespress.com/تقرير-المغرب-يتصدر-البلدان-الإفريقية-1544809 accessed 5 September 2025.
- Boch, A, Hohma, E and Trauth, R, ‘Towards an accountability framework for AI: Ethical and legal considerations (IEAI Research Brief)’ (Institute for Ethics in Artificial Intelligence, Technical University of Munich, March 2022) https://ieai.sot.tum.de/wpcontent/uploads/2022/03/ResearchBrief_March_Boch_Hohma_Trauth_FINAL_V2.pdf accessed 5 September 2025.
- Blumenthal, D, ‘The U.S. President’s Executive Order on Artificial Intelligence’ (NEJM AI, 1(2), 2024) https://www.nejm.ai/doi/full/10.1056/NEJMp2312345 accessed 5 September 2025.
- Blaine, T, ‘Guide to laws and regulations about AI (Artificial Intelligence) in the United States’ (Law Soup, n.d.) https://lawsoup.org/legal-guides/ai-laws-in-the-us-artificial-intelligence-regulations/ accessed 30 March 2025.
- Copeland, BJ, History of artificial intelligence (AI) (Scientific Research Publishing 2024) https://www.scirp.org/reference/referencespapers?referenceid=3858675 accessed 5 September 2025.
- Coursera Staff, ‘What is artificial intelligence? Definition, uses, and types’ (Coursera 2025) https://www.coursera.org/articles/what-is-artificial-intelligence?utm_ accessed 5 September 2025.
- Cambridge Dictionary, ‘Artificial intelligence’ (Cambridge University Press, n.d.) https://dictionary.cambridge.org/dictionary/english/artificial-intelligence accessed 11 March 2025.
- ComplexDiscovery Staff, ‘South Korea’s AI Basic Act: A blueprint for regulated innovation’ (ComplexDiscovery, 27 December 2024) https://complexdiscovery.com/south-koreas-ai-framework-act-a-blueprint-for-regulated-innovation accessed 5 September 2025.
- Cheong, BC, ‘Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making’ (3 July 2024) 1 Frontiers in Human Dynamics 1421273 https://doi.org/10.3389/fhumd.2024.1421273 accessed 5 September 2025.
- Chew, J and Davidson, J, ‘The interaction between intellectual property laws and AI: Opportunities and challenges’ (Norton Rose Fulbright, November 2024) https://www.nortonrosefulbright.com/en/knowledge/publications/c6d47e6f/the-interaction-between-intellectual-property-laws-and-ai-opportunities-and-challenges accessed 5 September 2025.
- Chang, C, ‘The first global AI treaty: Analyzing the Framework Convention on Artificial Intelligence and the EU AI Act’ (SSRN, 2024) https://ssrn.com/abstract=5069335 accessed 5 September 2025
- Dagmar, M, Lewis, C and Kristinn, R, ‘On defining artificial intelligence’ (2019) 10(2) Journal of Artificial General Intelligence 1 https://www.researchgate.net/ accessed 5 September 2025
- Devineni, SK, ‘AI in data privacy and security’ (2024) 3(1) International Journal of AI & Machine Learning 35 https://doi.org/10.17605/OSF.IO/WCN8A accessed 5 September 2025
- Dolfing, H, ‘Case study 20: The $4 billion AI failure of IBM Watson for Oncology’ (Henrico Dolfing Blog, 7 December 2024) https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html accessed 5 September 2025
- Digital Watch Observatory, ‘Overview of AI policy in 10 jurisdictions’ (23 December 2024) https://dig.watch/updates/overview-of-ai-policy-in-10-jurisdictions accessed 5 September 2025
- European Parliament, ‘EU AI Act: First regulation on artificial intelligence’ (European Parliament, 2 February 2025) https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence accessed 5 September 2025
- Economic, Social and Environmental Council, ‘Artificial intelligence in Morocco: What uses and what prospects for development?’ (Al I’lami, 17 November 2024) https://ali3lami.ma accessed 5 September 2025
- European Parliament and Council of the European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1, art 14 https://eur-lex.europa.eu/eli/reg/2016/679/oj accessed 5 September 2025
- El Haddam, S, القانون في مواجهة الذكاء الاصطناعي [Law against artificial intelligence: A comparative study] (Master’s thesis, Sidi Mohamed Ben Abdellah University, 2022) https://www.droitarabic.com/2023/01/Law-against-artificial-intelligence accessed 5 September 2025.
- Elhamrawy, HMO, ‘The basis of civil liability for robots between traditional rules and the modern trend [أساس المسؤولية المدنية عن الروبوتات بين القواعد التقليدية والاتجاه الحديث]’ (2021) 23(2, part 4) Majallat Kulliyat al-Shari‘a wa-al-Qanun bi-Tafahna al-Ashraf – Dakahlia 3059 https://jfslt.journals.ekb.eg/article_218225489ec22285f25bbb719c42b80e3e1fce accessed 5 September 2025.
- El Belghiti, A, ‘المسؤولية القانونية لروبوتات الذكاء الاصطناعي [The legal responsibility of artificial intelligence robots]’(Mrlatalib.com, 16 October 2024) https://mrlatalib.com/Pfe-responsabilite-AI-robot accessed 5 September 2025
- Eltouzani, N, L’impact de l’intelligence artificielle sur la théorie de la responsabilité civile [تأثير الذكاء الاصطناعي على نظرية المسؤولية المدنية] (Master’s thesis, Université Abdelmalek Essaâdi, 12 January 2025) https://mrlatalib.com/Pfe-master-impact-de-ingelligence-artificielle-sur-la-theorie-de-la-responsabilite-civile accessed 5 September 2025
- France 24, ‘المغرب: هجوم سيبراني يتسبب في تسريب بيانات شخصية لمواطني الضمان الاجتماعي [Morocco: Cyberattack leads to leakage of personal data of social security citizens]’ (France24, 10 April 2025) https://www.france24.com/ar/%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1%D8%A7%D9%84%D9%85%D8%BA%D8%A7%D8%B1%D8%A8%D9%8A%D8%A9/20250410 accessed 5 September 2025.
- Gomstyn, A and Jonker, A, ‘Exploring privacy issues in the age of AI’ (IBM, 30 September 2024) https://www.ibm.com/think/insights/ai-privacy accessed 5 September 2025
- Giuffrida, I, ‘Liability for AI decision-making: Some legal and ethical considerations’ (2019) 88(2) Fordham Law Review 439 https://doi.org/10.2139/ssrn.4953916 accessed 5 September 2025
- Hayward, A, Vandervliet, A, Turner, B and Montagnon, R, ‘The IP in AI – Recent updates and developments’ (Herbert Smith Freehills, 19 May 2023) https://www.herbertsmithfreehills.com/insights/202305/the-ip-in-ai-%E2%80%93-what-you-need-to-know accessed 5 September 2025
- Haddad, M, ‘The race for AI governance: Navigating the international regulatory landscape of artificial intelligence’ (JURIST, 17 March 2023) https://www.jurist.org/commentary/2023/03/mais-haddad-international-regulations-artificial-intelligence/ accessed 18 July 2025
- Ko, HK, Lee, IS and Mok, MY, ‘Analysis of AI regulatory frameworks in South Korea’ (Asia Business Law Journal, 15 April 2024) https://law.asia/ai-regulatory-frameworks-south-ko accessed 5 September 2025
- Kuzior, A, Sira, M, Zozuľaková, V and Hetényi, M, ‘Navigating AI regulation: A comparative analysis of EU and US legal frameworks’ (2024) https://www.researchgate.net/publication/385087114 accessed 5 September 2025
- Lar’arari, AH, Sources of obligation: Civil liability (3rd edn, Faculty of Law, Agdal 2011)
- Leslie, D, Burr, C, Aitken, M, Cowls, J, Katell, M and Briggs, M, Artificial intelligence, human rights, democracy, and the rule of law: A primer (Council of Europe 2021) https://ssrn.com/abstract=3817999 accessed 5 September 2025
- Mahdavi, G, de La Lama, A and Auty, CM, ‘US state-by-state AI legislation snapshot’ (BCLP Law 2025) https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html accessed 5 September 2025
- Mahendra, S, ‘Dangers of AI: Lack of transparency’ (AI Plus, 28 August 2023) https://www.aiplusinfo.com/blog/dangers-of-ai-lack-of-transparency/ accessed 5 September 2025
- Mensah, GB, Artificial intelligence and ethics: A comprehensive review of bias mitigation, transparency, and accountability in AI systems (ResearchGate 2023) https://www.researchgate.net/publication/375744287 accessed 5 September 2025
- McDonald, L, ‘AI systems and liability: An assessment of the applicability of strict liability & a case for limited legal personhood for AI’ (2023) 3(1) St Andrews Law Journal 5 https://doi.org/10.15664/stalj.v3i1.2645 accessed 5 September 2025
- Mecaj, SE, ‘Artificial intelligence and legal challenges’ (2022) 20(34) Revista Opinión Jurídica 180 https://www.researchgate.net/publication/360392203 accessed 5 September 2025
- Njoum, O, ‘تمكين التطبيقات الذكية بين الفقه والقانون: رؤية مستقبلية في دولة الإمارات العربية المتحدة (الجزء الثاني – التطبيقات الذكية في القانون) [Empowering smart applications between jurisprudence and law: A future vision in the United Arab Emirates (Part Two – Smart applications in law)]’ (Paper presented at the 2nd International Conference on Scientific Research, Imam Malik College for Sharia and Law, United Arab Emirates, 18 April 2021) https://imc.gov.ae/ar/Scientific-Research-And-Magazine/Scientific-Research-2nd-International-Conference accessed 5 September 2025.
- Nguyễn, T, ‘California cracks down on AI-generated child abuse imagery, deepfakes’ (AP News, 27 March 2024) https://apnews.com/article/ai-deepfakes-children-abuse accessed 5 September 2025.
- Oujbour, K, ‘المسؤولية التقصيرية عن أخطاء تقنيات الذكاء الاصطناعي [Tort liability for errors in artificial intelligence technologies]’ (Faculty of Legal, Economic and Social Sciences, Ibn Zohr University, Agadir, Morocco, 5 February 2025) https://mrlatalib.com/Respo-delictuelle-de-AI accessed 5 September 2025.
- Onate Inaingo, M, ‘Legal liability in the age of AI: Who’s responsible when algorithms go wrong?’ (Law Insider India, 11 October 2023) https://lawinsider.in/columns/legal-liability-in-the-age-of-ai-whos-responsible-when-algorithms-go-wrong accessed 5 September 2025.
- PublicLawLibrary.org, ‘California enacts groundbreaking AI and digital safety regulations for 2025’ (17 December 2024) https://publiclawlibrary.org/california-enacts-groundbreaking-ai-and-digital-safety-regulations-for-2025/ accessed 5 September 2025.
- Reed, V, ‘AI accountability: Who bears the responsibility?’ (AICompetence, 5 August 2024) https://aicompetence.org/ai-accountability-who-bears-the-responsibility/ accessed 5 September 2025
- Rachum-Twaig, O, ‘Whose robot is it anyway?: Liability for artificial-intelligence-based robots’ (2020) University of Illinois Law Review 2020(4) 1141 https://illinoislawreview.org/wp-content/uploads/2020/08/Rachum-Twaig accessed 5 September 2025.
- Rodrigues, R, ‘Legal and human rights issues of AI: Gaps, challenges and vulnerabilities’ (2020) 4 Journal of Responsible Technology 100005 https://doi.org/10.1016/j.jrt.2020.100005 accessed 5 September 2025.
- Stryker, C and Kavlakoglu, E, ‘What is artificial intelligence (AI)?’ (IBM, 9 August 2024) https://www.ibm.com/think/topics/artificial-intelligence accessed 5 September 2025.
- Samoili, S, Lopez Cobo, M, Gomez Gutierrez, E, De Prato, G, Martinez-Plumed, F and Delipetrev, B, AI WATCH. Defining artificial intelligence: Towards an operational definition and taxonomy of artificial intelligence (EUR 30117 EN) (Publications Office of the European Union 2020) https://publications.jrc.ec.europa.eu/repository/handle/JRC118163 accessed 5 September 2025.
- Satish, S, ‘International AI treaty: Framework convention on artificial intelligence, human rights, democracy and rule of law’ (ClearIAS, 23 November 2024) https://www.clearias.com/international-ai-treaty accessed 5 September 2025.
- Sherman, N, ‘AI regulations around the world – 2025’ (Mind Foundry, 25 January 2024) https://www.mindfoundry.ai/blog/ai-regulations-around-the-world accessed 5 September 2025
- Software Improvement Group, ‘AI legislation in the US: A 2025 overview’ (24 January 2025) https://www.softwareimprovementgroup.com/us-ai-legislationoverview/#elementor-toc__heading-anchor-2 accessed 5 September 2025.
- Serrato, JK, Mastromonaco, C, Arora, SB, Caplan, A, Choo, E, Rendar, M, Watson, L, Voigts, AM, Rivaux, S, Purcell, J and Ajanaku, DF, ‘California’s significant AI laws go into effect’ (Artificial Intelligence and California, 12 February 2025) https://www.consumerprotectiondispatch.com/2025/02/californias-significant-ai-laws-go-into-effect/ accessed 5 September 2025.
- Saenz, AD, Harned, Z, Banerjee, O, Abràmoff, MD and Rajpurkar, P, ‘Autonomous AI systems in the face of liability, regulations and costs’ (2023) 6 npj Digital Medicine 185 https://doi.org/10.1038/s41746-023-00929-1 accessed 5 September 2025.
- Secure Privacy, ‘Artificial intelligence and personal data protection: Complying with the GDPR and CCPA while using AI’ (SecurePrivacy.ai, 4 October 2023) https://secureprivacy.ai/blog/ai-personal-data-protection-gdpr-ccpa-compliance accessed 5 September 2025.
- Trincado Castán, C, ‘The legal concept of artificial intelligence: The debate surrounding the definition of AI system in the AI Act’ (2024) (1) BioLaw Journal – Rivista di BioDiritto 305 https://doi.org/10.15168/2284-4503-3000 accessed 5 September 2025.
- US House of Representatives, Committee on Science, Space, and Technology, National Artificial Intelligence Initiative Act of 2020 (H.R. 6216) – One Pager (2020) https://republicans-science.house.gov/ accessed 5 September 2025.
- UK Government, ‘Implementing the UK’s AI regulatory principles: Initial guidance for regulators’ (GOV.UK, 16 January 2024) https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators accessed 5 September 2025.
- Vladeck, DC, ‘Machines without principals: Liability rules and AI’ (2014) 89(1) Washington Law Review 117 https://digitalcommons.law.uw.edu/wlr/vol89/iss1/6 accessed 5 September 2025.
- Vokrug, A, ‘Artificial intelligence and legal identity’ (Unite.AI, 27 October 2023) https://www.unite.ai/artificial-intelligence-and-legal-identity/ accessed 5 September 2025.
- Weskill, ‘AI accountability: Who’s responsible when AI fails?’ (Weskill Blog, 22 November 2024) https://blog.weskill.org/2024/11/AI-Accountability-Whos-Responsible-When-AI-Fails.html accessed 5 September 2025.
-
Youssef Bentaleb, ‘Fighting Cybercrime in Morocco: Achievements and Some Challenges’ (InCyber, 1 September 2017) https://incyber.org/… accessed 17 June 2025. ↑
-
C Trincado Castán, ‘The Legal Concept of Artificial Intelligence: The Debate Surrounding the Definition of AI System in the AI Act’ (2024) (1) BioLaw Journal – Rivista di BioDiritto 305 https://doi.org/10.15168/2284-4503-3000. ↑
-
Mais Haddad, ‘The Race for AI Governance: Navigating the International Regulatory Landscape of Artificial Intelligence’ (JURIST, 17 March 2023) https://www.jurist.org/commentary/2023/03/mais-haddad-international-regulations-artificial-intelligence/ ↑
-
B J Copeland, History of Artificial Intelligence (AI) (Scientific Research Publishing, 2024) https://www.scirp.org/reference/referencespapers?referenceid=3858675 accessed 26 August 2025. ↑
-
M Dagmar, C Lewis and R Kristinn, ‘On Defining Artificial Intelligence’ (2019) 10 Journal of Artificial General Intelligence 2, 1 https://www.researchgate.net/ accessed 26 August 2025 ↑
-
Christopher D Manning, Artificial Intelligence: Definitions (Stanford University, 2020) https://www.academia.edu/117031974/Artificial_Intelligence_Definitions_by_Stanford_University accessed 31 August 2025 ↑
-
C Stryker and E Kavlakoglu, ‘What is Artificial Intelligence (AI)?’ (IBM, 9 August 2024) https://www.ibm.com/think/topics/artificial-intelligence accessed 31 August 2025. ↑
-
S E Mecaj, ‘Artificial intelligence and legal challenges’ (2022) 20 Revista Opinión Jurídica 180 https://www.researchgate.net/publication/360392203 accessed 31 August 2025. ↑
-
Machine Learning (ML) is the part of AI studying how computer agents can improve their perception, knowledge, thinking, or actions based on experience or data. For this, ML draws from computer science, statistics, psychology, neuroscience, economics and control theory. ↑
-
Deep Learning is the use of large multi-layer (artificial) neural networks that compute with continuous (real number) representations, a little like the hierarchically organized neurons in human brains. It is currently the most successful ML approach, usable for all types of ML, with better generalization from small data and better scaling to big data and compute budgets. ↑
-
Natural Language Processing (NLP) is a subfield of Artificial Intelligence (AI) that focuses on enabling machines to understand, interpret, and generate human language in a way that is both meaningful and useful. ↑
-
Coursera Staff, ‘What is Artificial Intelligence (AI)?’ (Coursera, 2024) para 1 https://www.coursera.org/articles/what-is-artificial-intelligence-ai accessed 31 August 2025. ↑
-
Cambridge Dictionary, ‘Artificial intelligence’ (Cambridge University Press, undated) https://dictionary.cambridge.org/dictionary/english/artificial-intelligence accessed 11 March 2025. ↑
-
Stryker and Kavlakoglu (n 7). ↑
-
Mecaj (n 10). ↑
-
This is the executive branch of the European Union responsible for proposing legislation, implementing decisions, upholding EU treaties, and managing the day-to-day business of the EU. ↑
-
European Parliament, ‘EU AI Act: First regulation on artificial intelligence’ (2 February 2025) https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence accessed 31 August 2025. ↑
-
ibid. ↑
-
C Chang, ‘The first global AI treaty: Analyzing the Framework Convention on Artificial Intelligence and the EU AI Act’ (2024) SSRN https://ssrn.com/abstract=5069335 accessed 31 August 2025. ↑
-
On 15 March 2024, Secretary General Marija Pejčinović Burić stated at the finalization of the Convention: ‘This first-of-a-kind treaty will ensure that the rise of Artificial Intelligence upholds Council of Europe legal standards in human rights, democracy and the rule of law. Its finalization by our Committee on Artificial Intelligence (CAI) is an extraordinary achievement and should be celebrated as such.’ See Council of Europe, ‘Historic Agreement on Artificial Intelligence: Council of Europe Finalizes New Treaty’ (15 March 2024) https://www.coe.int/en/web/artificial-intelligence/-/historic-agreement-on-artificial-intelligence-council-of-europe-finalises-new-treaty accessed 31 August 2025. ↑
-
Ibid. ↑
-
David Leslie and others, ‘Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: A Primer’ (Council of Europe, 2021) <https://ssrn.com/abstract=3817999> accessed 31 August 2025. ↑
-
Satish S, ‘International AI Treaty: Framework Convention on Artificial Intelligence, Human Rights, Democracy and Rule of Law’ (ClearIAS, 23 November 2024) <https://www.clearias.com/international-ai-treaty> accessed 31 August 2025 ↑
-
Nick Sherman, ‘AI Regulations around the World – 2025’ (Mind Foundry, 25 January 2024) <https://www.mindfoundry.ai/blog/ai-regulations-around-the-world> accessed 31 August 2025. ↑
-
Marek Szczepański, ‘Artificial Intelligence: 2024 Update on EU and Global Policy Developments’ (European Parliamentary Research Service, January 2024) <https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN> accessed 31 August 2025. ↑
-
Mecaj (n 10). ↑
-
H K Ko, I S Lee and M Y Mok, ‘Analysis of AI Regulatory Frameworks in South Korea’ (Asia Business Law Journal, 15 April 2024) <https://law.asia/ai-regulatory-frameworks-south-korea> accessed 31 August 2025 ↑
-
ComplexDiscovery Staff, ‘South Korea’s AI Basic Act: A Blueprint for Regulated Innovation’ (ComplexDiscovery, 27 December 2024) <https://complexdiscovery.com/south-koreas-ai-framework-act-a-blueprint-for-regulated-innovation> accessed 31 August 2025 ↑
-
Ibid. ↑
-
The UK AI Regulation White Paper (March 2023) is a government policy document outlining the UK’s approach to regulating AI. It is called a “White Paper” because it sets out policy proposals for discussion before they are turned into law or formal regulation. ↑
-
UK Government, ‘Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators’ (GOV.UK, 16 January 2024) <https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators> accessed 31 August 2025 ↑
-
Sherman (n 26). ↑
-
Digital Watch Observatory, ‘Overview of AI Policy in 10 Jurisdictions’ (23 December 2024) <https://dig.watch/updates/overview-of-ai-policy-in-10-jurisdictions> accessed 31 August 2025 ↑
-
Sherman (n 26). ↑
-
A Kuzior, M Sira, V Zozuľaková and M Hetényi, ‘Navigating AI Regulation: A Comparative Analysis of EU and US Legal Frameworks’ (2024) <https://www.researchgate.net/publication/385087114> accessed 31 August 2025 ↑
-
Marek Szczepański (n 27). ↑
-
Ibid. ↑
-
Szczepański (n 27). ↑
-
Ibid. ↑
-
D Blumenthal, ‘The US President’s Executive Order on Artificial Intelligence’ (2024) 1 NEJM AI 2 <https://www.nejm.ai/doi/full/10.1056/NEJMp2312345> accessed 31 August 2025 ↑
-
T Blaine, ‘Guide to Laws and Regulations about AI (Artificial Intelligence) in the United States’ (Law Soup, no date) <https://lawsoup.org/legal-guides/ai-laws-in-the-us-artificial-intelligence-regulations/> accessed 30 March 2025 ↑
-
Sherman (n 26). ↑
-
Software Improvement Group, ‘AI Legislation in the US: A 2025 Overview’ (24 January 2025) <https://www.softwareimprovementgroup.com/us-ai-legislationoverview/#elementor-toc__heading-anchor-2> accessed 31 August 2025 ↑
-
G Mahdavi, A de La Lama and C M Auty, ‘US State-by-State AI Legislation Snapshot’ (BCLP Law, 2025) <https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html> accessed 31 August 2025 ↑
-
Ibid. ↑
-
J K Serrato, C Mastromonaco, S B Arora, A Caplan, E Choo, M Rendar, L Watson, A M Voigts, S Rivaux, J Purcell and D F Ajanaku, ‘California’s Significant AI Laws Go into Effect’ (Artificial Intelligence and California, 12 February 2025) <https://www.consumerprotectiondispatch.com/2025/02/californias-significant-ai-laws-go-into-effect/> accessed 31 August 2025 ↑
-
PublicLawLibrary.org, ‘California Enacts Groundbreaking AI and Digital Safety Regulations for 2025’ (17 December 2024) <https://publiclawlibrary.org/california-enacts-groundbreaking-ai-and-digital-safety-regulations-for-2025/> accessed 3 September 2025 ↑
-
Nguyễn T, ‘California Cracks Down on AI-Generated Child Abuse Imagery, Deepfakes’ (AP News, 27 March 2024) <https://apnews.com/article/ai-deepfakes-children-abuse-7dcf5c566e2a297567f1e148ac2074a4> accessed 3 September 2025 ↑
-
International Center for Artificial Intelligence in Morocco (AI Movement). ↑
-
The “Khawarizmi Program” offers funding for research projects in the artificial intelligence (AI) sector. It is among the initiatives Morocco has implemented to advance the country in the AI field. ↑
-
Economic, Social and Environmental Council, ‘Artificial Intelligence in Morocco: What Uses and What Prospects for Development?’ (Al I’lami, 17 November 2024) <https://ali3lami.ma> accessed 3 September 2025 ↑
-
This major event was chaired by Ms. Ghita Mezzour, Minister Delegate to the Head of Government in charge of Digital Transition and Administration Reform, and Ms. Gabriela Ramos, UNESCO Assistant Director-General for Social and Human Sciences. The report presented there offers a comprehensive overview of the current state and future prospects of AI in the country. ↑
-
T Al-Marnissi, ‘الذكاء الاصطناعي بالمغرب 2025: استراتيجيات طموحة نحو التحول الرقمي والابتكار [Artificial Intelligence in Morocco 2025: Ambitious Strategies towards Digital Transformation and Innovation]’ (Medina FM News, 25 February 2025) <https://news.medinafm.ma/الذكاء-الاصطناعي-بالمغرب/> accessed 5 September 2025 ↑
-
Economic, Social and Environmental Council (n 53) ↑
-
Youssef Bentaleb, ‘Fighting Cybercrime in Morocco: Achievements and Some Challenges’ (InCyber, 1 September 2017) https://incyber.org/en/article/fighting-cybercrime-in-morocco-achievements-and-some-challenges-by-prof-youssef-bentaleb-moroccan-centre-for-polytechnic-research-and-innovation/accessed 17 June 2025 ↑
-
S El Haddam, القانون في مواجهة الذكاء الاصطناعي: دراسة مقارنة [Law against artificial intelligence: A comparative study] (Master’s thesis, Sidi Mohamed Ben Abdellah University, 2022) https://www.droitarabic.com/2023/01/Law-against-artificial-intelligence.html ↑
-
Kingdom of Morocco, Law No 07-03 supplementing the Penal Code concerning offences related to automated data processing systems (Dahir No 1-03-197 of 11 November 2003), BO No 5171 ↑
-
Kingdom of Morocco, Law No 03-03 relating to the fight against terrorism (Dahir No 1-03-140 of 28 May 2003), BO No 5114 ↑
-
H Ait Mouh, ‘القانون المغربي والذكاء الاصطناعي [Moroccan Law and Artificial Intelligence]’ (MarocDroit, 13 December 2023) <https://www.marocdroit.com> accessed 5 September 2025 ↑
-
Kingdom of Morocco, Law No 17-97 on the Protection of Industrial Property (Dahir No 1-00-91 of 15 February 2000), as amended by Law No 31-05 (in force 2 March 2006) and Law No 23-13 (in force 18 December 2014), BO No 4778 ↑
-
Kingdom of Morocco, Law No 53-05 relating to the electronic exchange of legal data, electronic signatures, and certification services (Dahir No 1-07-129 of 30 November 2007), BO No 5584 ↑
-
Kingdom of Morocco, Dahir No 1-97-162 promulgating Law No 24-96 relating to postal and telecommunications services (7 August 1997), BO No 4518 ↑
-
Kingdom of Morocco, Law No 31-08 on consumer protection measures (promulgated by Dahir No 1-11-03 of 18 February 2011), BO No 5932 ↑
-
Kingdom of Morocco, Law No 09-08 relating to the protection of individuals with regard to the processing of personal data (promulgated by Dahir No 1-09-15 of 18 February 2009), BO No 5714 ↑
-
In AI, the term “black box” refers to a system whose internal workings are opaque or not easily interpretable, even to its creators. This lack of transparency makes it difficult to understand how and why an AI model makes certain decisions. ↑
-
S Mahendra, ‘Dangers of AI: Lack of Transparency’ (AI Plus, 28 August 2023) <https://www.aiplusinfo.com/blog/dangers-of-ai-lack-of-transparency/> accessed 5 September 2025. ↑
-
R Rodrigues, ‘Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities’ (2020) 4 Journal of Responsible Technology 100005 <https://doi.org/10.1016/j.jrt.2020.100005> ↑
-
B C Cheong, ‘Transparency and Accountability in AI Systems: Safeguarding Wellbeing in the Age of Algorithmic Decision-Making’ (2024) 1 Frontiers in Human Dynamics 1421273 <https://doi.org/10.3389/fhumd.2024.1421273> ↑
-
George Benneh Mensah, Artificial Intelligence and Ethics: A Comprehensive Review of Bias Mitigation, Transparency, and Accountability in AI Systems (2023) ResearchGate https://www.researchgate.net/publication/375744287 ↑
-
Rodrigues (n 69) ↑
-
LIME (Local Interpretable Model-Agnostic Explanations) is a method that provides comprehensible explanations of predictions made by machine learning models. It alters input data, observes changes in predictions, and uses a simpler model to approximate the complex one, allowing users to understand decisions and increasing trust in AI systems. ↑
-
Cheong (n 70) ↑
-
DC Vladeck, ‘Machines without Principals: Liability Rules and Artificial Intelligence’ (2014) 89 Washington Law Review 117 <https://digitalcommons.law.uw.edu/wlr/vol89/iss1/6/> accessed 5 September 2025 ↑
-
O Rachum-Twaig, ‘Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots’ (2020) 2020 University of Illinois Law Review 1141 <https://illinoislawreview.org/wp-content/uploads/2020/08/Rachum-Twaig> accessed 5 September 2025. ↑
-
M Onate Inaingo, ‘Legal Liability in the Age of AI: Who’s Responsible When Algorithms Go Wrong?’ (Law Insider India, 11 October 2023) <https://lawinsider.in/columns/legal-liability-in-the-age-of-ai-whos-responsible-when-algorithms-go-wrong> accessed 5 September 2025. ↑
-
A notable incident took place in 2018, when an Uber test vehicle struck and killed a pedestrian while a safety driver was on board. Ultimately, blame was placed on the safety driver, although liability was not initially clear: see CNBC, ‘Uber not criminally liable in fatal 2018 Arizona self-driving crash: Prosecutors’ (6 March 2019) https://www.cnbc.com/2019/03/06/uber-not-criminally-liable-in-fatal-2018-arizona-self-driving-crash-prosecutors.html accessed 5 September 2025. ↑
-
A Boch, E Hohma and R Trauth, Towards an Accountability Framework for AI: Ethical and Legal Considerations (IEAI Research Brief) (Institute for Ethics in Artificial Intelligence, Technical University of Munich, March 2022) https://ieai.sot.tum.de/wpcontent/uploads/2022/03/ResearchBrief_March_Boch_Hohma_Trauth_FINAL_V2.pdf accessed 5 September 2025. ↑
-
Rachum-Twaig (n 76). ↑
-
Weskill, ‘AI Accountability: Who’s Responsible When AI Fails?’ (Weskill Blog, 22 November 2024) https://blog.weskill.org/2024/11/AI-Accountability-Whos-Responsible-When-AI-Fails.html. ↑
-
V Reed, ‘AI Accountability: Who Bears the Responsibility?’ (AI Competence, 5 August 2024) https://aicompetence.org/ai-accountability-who-bears-the-responsibility/ accessed 5 September 2025. ↑
-
AD Saenz, Z Harned, O Banerjee, MD Abràmoff and P Rajpurkar, ‘Autonomous AI Systems in the Face of Liability, Regulations and Costs’ (2023) 6 npj Digital Medicine 185 https://doi.org/10.1038/s41746-023-00929-1 accessed 5 September 2025. ↑
-
Reed (n 82). ↑
-
Saenz (n 83) ↑
-
H Dolfing, ‘Case Study 20: The $4 Billion AI Failure of IBM Watson for Oncology’ (Henrico Dolfing Blog, 7 December 2024) https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html accessed 5 September 2025. ↑
-
Weskill (n 81). ↑
-
Reed (n 82). ↑
-
I Giuffrida, ‘Liability for AI Decision-Making: Some Legal and Ethical Considerations’ (2019) 88 Fordham L Rev 439 https://doi.org/10.2139/ssrn.4953916 accessed 5 September 2025. ↑
-
L McDonald, ‘AI Systems and Liability: An Assessment of the Applicability of Strict Liability & a Case for Limited Legal Personhood for AI’ (2023) 3 St Andrews L J 5 https://doi.org/10.15664/stalj.v3i1.2645 accessed 5 September 2025. ↑
-
A Vokrug, ‘Artificial Intelligence and Legal Identity’ (Unite.AI, 27 October 2023) https://www.unite.ai/artificial-intelligence-and-legal-identity/ ↑
-
Onate Inaingo (n 77). ↑
-
SK Devineni, ‘AI in Data Privacy and Security’ (2024) 3 International Journal of Artificial Intelligence & Machine Learning 35 https://doi.org/10.17605/OSF.IO/WCN8A accessed 5 September 2025. ↑
-
Secure Privacy, ‘Artificial Intelligence and Personal Data Protection: Complying with the GDPR and CCPA while Using AI’ (SecurePrivacy.ai, 4 October 2023) https://secureprivacy.ai/blog/ai-personal-data-protection-gdpr-ccpa-compliance accessed 5 September 2025. ↑
-
Economic, Social and Environmental Council (n 53). ↑
-
The General Data Protection Regulation (GDPR) is a European Union regulation that came into effect on May 25, 2018, designed to protect the personal data and privacy of individuals within the EU and the European Economic Area (EEA). ↑
-
Secure Privacy (n 94). ↑
-
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1, art 14 https://eur-lex.europa.eu/eli/reg/2016/679/oj accessed 5 September 2025. ↑
-
A Gomstyn and A Jonker, ‘Exploring Privacy Issues in the Age of AI’ (IBM, 30 September 2024) https://www.ibm.com/think/insights/ai-privacy ↑
-
T Boufertiḥ, ‘[Report: Morocco tops African countries targeted by cyberattacks]’ Hespress (16 April 2025) https://www.hespress.com/تقرير-المغرب-يتصدر-البلدان-الإفريقية-1544809.html accessed 5 September 2025 ↑
-
Al-Marnissi (n 55) ↑
-
Boufertiḥ (n 100) ↑
-
For example, after decades of Moroccan-Algerian tensions over the Western Sahara conflict, the computer system of Morocco’s National Social Security Fund was breached by hackers in a cyberattack in early 2025. The breach led to the massive exposure of employees’ personal data on social media; a hacker group calling itself “Jabbarout DZ” claimed responsibility for the attack. Published in France24: https://www.france24.com/ar/%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1%D8%A7%D9%84%D9%85%D8%BA%D8%A7%D8%B1%D8%A8%D9%8A%D8%A9/20250410 ↑
-
El Haddam (n 58). ↑
-
The Universal Declaration of Human Rights (UDHR) of 1948 is a foundational international instrument adopted by the United Nations General Assembly that sets out basic human rights and freedoms for all people. ↑
-
The International Covenant on Economic, Social and Cultural Rights (ICESCR) of 1966 is a binding international treaty. ↑
-
International Covenant on Civil and Political Rights (ICCPR) of 1966 is the core human rights treaty protecting civil and political rights. ↑
-
The Vienna Declaration and Programme of Action (VDPA) of 1993 was adopted at the World Conference on Human Rights. ↑
-
Rodrigues (n 68). ↑
-
WIPO stands for the World Intellectual Property Organization, a UN specialized agency established in 1967. It is mandated to promote the protection of intellectual property (IP) rights across the world. ↑
-
O Njoum, ‘تمكين التطبيقات الذكية بين الفقه والقانون: رؤية مستقبلية في دولة الإمارات العربية المتحدة (الجزء الثاني – التطبيقات الذكية في القانون) [Empowering smart applications between jurisprudence and law: A future vision in the United Arab Emirates (Part Two – Smart applications in law)]’ (Paper presented at the 2nd International Conference on Scientific Research, Imam Malik College for Sharia and Law, United Arab Emirates, 18 April 2021) https://imc.gov.ae/ar/Scientific-Research-And-Magazine/Scientific-Research-2nd-International-Conference ↑
-
Rodrigues (n 68). ↑
-
The Device for the Autonomous Bootstrapping of Unified Sentience (DABUS) is an AI system designed to simulate human-like creativity by autonomously generating new ideas, designs, and inventions without direct human intervention. https://www.wipo.int/wipolex/en/text/585909?utm ↑
-
J Chew and J Davidson, ‘The interaction between intellectual property laws and AI: Opportunities and challenges’ (Norton Rose Fulbright, November 2024) https://www.nortonrosefulbright.com/en/knowledge/publications/c6d47e6f/the-interaction-between-intellectual-property-laws-and-ai-opportunities-and-challenges accessed 5 September 2025. ↑
-
A Hayward, A Vandervliet, B Turner and R Montagnon, ‘The IP in AI – Recent updates and developments’ (Herbert Smith Freehills, 19 May 2023) https://www.herbertsmithfreehills.com/insights/202305/the-ip-in-ai-%E2%80%93-what-you-need-to-know accessed 5 September 2025. ↑
-
El Haddam (n 58) ↑
-
According to Article 77 of the Moroccan Code of Obligations and Contracts (C.O.C): “Any act committed by a person knowingly and voluntarily, without legal authorization, that causes material or moral harm to another obliges the perpetrator to compensate for the damage, provided it is established that the act was the direct cause of the harm.” ↑
-
According to Article 78 of the C.O.C: “Every person is liable for moral or material damage that they have caused, not only by their own act but also by their fault, whenever it is established that this fault was the direct cause of the damage. Any clause to the contrary shall have no effect. A fault is defined as either the omission of what should have been done or the commission of what should have been avoided, even if there was no intent to cause harm.” ↑
-
Ait Mouh (n 61). ↑
-
Article 88 of the Code of Obligations and Contracts provides that: “Every person is liable for damage caused by things under their custody, if it is established that these things were the direct cause of the damage…”. ↑
-
A H Lar’arari, Sources of Obligation: Civil Liability (3rd edn, Faculty of Law Agdal 2011) 374. ↑
-
M R Badr, Civil liability arising from the use of artificial intelligence techniques in Jordanian legislation (Master’s thesis, Middle East University 2022) https://www.meu.edu.jo/libraryTheses. ↑
-
H M O Elhamrawy, ‘أساس المسؤولية المدنية عن الروبوتات بين القواعد التقليدية والاتجاه الحديث [The basis of civil liability for robots between traditional rules and the modern trend]’ (2021) 23(2, pt 4) 3059 https://jfslt.journals.ekb.eg/article_218225_489ec22285f25bbb719c42b80e3e1fce ↑
-
Khadija Oujbour, ‘المسؤولية التقصيرية عن أخطاء تقنيات الذكاء الاصطناعي [Tort liability for errors in artificial intelligence technologies]’ (Faculty of Legal, Economic and Social Sciences, Ibn Zohr University, Agadir, 5 February 2025) https://mrlatalib.com/Respo-delictuelle-de-AI accessed 5 September 2025. ↑
-
Kingdom of Morocco, Law No 24-09 on the safety of products and services, amending the Code of Obligations and Contracts of 12 August 1913, BO No 5980 (23 September 2011) 4678 ↑
-
Lar’arari (n 121). ↑
-
Amina El Belghiti, ‘المسؤولية القانونية لروبوتات الذكاء الاصطناعي [The legal responsibility of artificial intelligence robots]’ (Mrlatalib.com, 16 October 2024) https://mrlatalib.com/Pfe-responsabilite-AI-robot accessed 5 September 2025. ↑
-
Oujbour (n 124). ↑
-
N Eltouzani, L’impact de l’intelligence artificielle sur la théorie de la responsabilité civile [The impact of artificial intelligence on the theory of civil liability] (Master’s thesis, Université Abdelmalek Essaâdi, 12 January 2025) https://mrlatalib.com/Pfe-master-impact-de-ingelligence-artificielle-sur-la-theorie-de-la-responsabilite-civile ↑
-
Oujbour (n 124). ↑





