Tuesday, January 20, 2026

Global AI Governance in Flux: How International Regulations Are Diverging in 2025

A Comparative Analysis of Regulatory Approaches and Compliance Complexity for Multinational AI Companies

The artificial intelligence regulatory landscape of 2025 resembles less a harmonized global framework than a fragmented mosaic of divergent national strategies. As the European Union enforces the world’s first comprehensive AI law, India unveils lightweight governance guidelines, Canada’s legislative efforts collapse amid political turmoil, and China leverages open-source technology to challenge Western AI dominance, multinational companies face an unprecedented compliance challenge.

This analysis examines how four major jurisdictions have adopted fundamentally different approaches to AI governance, creating a complex environment where compliance with one framework offers no guarantee of meeting another’s requirements. For companies deploying AI systems across borders, the stakes are measured in billions of euros in potential fines, restricted market access, and the strategic positioning that will determine competitive advantage in the AI era.

The European Union: Comprehensive Regulation as Global Standard

The AI Act: From Concept to Enforcement

On February 2, 2025, the European Union’s Artificial Intelligence Act reached its first major enforcement milestone when prohibitions on “unacceptable risk” AI practices became legally binding across all 27 member states. The moment marked a watershed in global AI governance: for the first time, AI developers and deployers faced the prospect of concrete legal penalties, up to €35 million or 7 percent of global annual turnover, whichever is higher, for deploying prohibited AI systems within EU borders.

The AI Act, which entered into force on August 1, 2024, implements a risk-based regulatory framework that categorizes AI systems into four tiers: prohibited, high-risk, limited-risk requiring transparency, and minimal or no-risk. This classification drives proportional compliance obligations, with the most stringent requirements applying to AI systems deemed to pose unacceptable risks to fundamental rights, health, or safety.

What’s Prohibited: Eight Categories of Banned AI

The prohibited practices under Article 5 represent the EU’s ethical red lines for AI deployment. These eight categories, now enforceable with the threat of maximum fines, include:

Subliminal Manipulation: AI systems deploying manipulative techniques that materially distort behavior in ways that cause or are likely to cause physical or psychological harm. This prohibition targets voice-activated toys encouraging dangerous behavior in children and other systems exploiting cognitive vulnerabilities.

Exploitation of Vulnerabilities: Systems that exploit vulnerabilities of specific groups due to age, disability, or social or economic circumstances, causing physical or psychological harm.

Social Scoring by Public Authorities: Government-operated systems that evaluate or classify people based on social behavior or personal characteristics, leading to detrimental treatment unrelated to the context in which data was originally generated.

Predictive Policing Based on Profiling: Systems assessing individuals’ risk of committing crimes based on personality traits, characteristics, or past behavior—though risk assessments based on “objective and verifiable facts directly linked to criminal activity” remain permitted.

Biometric Categorization Inferring Sensitive Attributes: Systems attempting to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation from biometric data—except for law enforcement authorities operating under specific conditions.

Emotion Recognition in Workplace and Education: AI systems inferring emotional states or characteristics from biometric data in workplace or educational settings, except for medical or safety reasons.

Untargeted Scraping of Facial Images: Creating or expanding facial recognition databases through untargeted scraping from internet or CCTV footage.

Real-time Remote Biometric Identification in Public Spaces: Law enforcement use of real-time biometric identification systems in publicly accessible spaces, with narrow exceptions for searching trafficking victims, preventing imminent threats, or investigating serious crimes—and only with prior judicial authorization.

These prohibitions became enforceable February 2, 2025, though formal penalty provisions didn’t activate until August 2, 2025. This created a transitional period where violations were legally prohibited but maximum fines weren’t yet applicable—a brief grace period that has now expired.

High-Risk Systems: The Compliance Frontier

Starting August 2, 2026, high-risk AI systems face comprehensive compliance requirements. These systems include those used in:

  • Critical Infrastructure: AI safety components in transport, water, gas, heating, and electricity networks
  • Education and Employment: Systems determining access to education or evaluating students; AI managing recruitment, worker evaluation, or employment decisions
  • Essential Services: AI assessing credit scores, insurance risk, or eligibility for public benefits
  • Law Enforcement: AI assisting in victim identification, evidence evaluation, or crime investigation
  • Border Control and Migration: Automated document verification, risk assessment, or examination of asylum applications
  • Justice and Democracy: AI influencing court decisions or democratic processes

For these high-risk applications, providers must conduct conformity assessments, implement robust data governance, ensure human oversight, maintain detailed technical documentation, and perform ongoing monitoring. National supervisory authorities will enforce these requirements, with violations triggering fines up to €15 million or 3 percent of global annual turnover.

General-Purpose AI Models: The August 2025 Turning Point

August 2, 2025 marked another critical milestone when obligations for general-purpose AI (GPAI) models became applicable. Large language models and other foundation models now face horizontal requirements including:

Transparency Obligations: Providers must prepare and maintain technical documentation describing the model’s training process, datasets used, evaluation results, and known limitations. They must provide this information to downstream providers and to the AI Office upon request.

Copyright Compliance: GPAI providers must implement policies ensuring models respect copyright protections when trained on web-crawled data. This includes honoring rights reservations under Article 4(3) of the Digital Single Market Directive and implementing measures to prevent infringement.

Systemic Risk Assessment: Models identified as posing systemic risks—generally those using training compute greater than 10^25 floating point operations (FLOP)—must perform model evaluation, identify and mitigate systemic risks, track and document serious incidents, and report them to authorities.
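
For orientation, a widely used rule of thumb estimates the training compute of a dense transformer as roughly six floating point operations per parameter per training token. The sketch below applies that approximation to the Act’s 10^25 FLOP presumption threshold; the parameter and token counts are hypothetical, and actual designation as a systemic-risk model rests with the AI Office, not this arithmetic.

    # Illustrative only: rough training-compute estimate for a dense transformer,
    # using the common approximation FLOP ~ 6 * parameters * training tokens.
    # The parameter and token counts below are hypothetical, not any real model's.

    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption threshold under the AI Act

    def estimate_training_flop(n_parameters: float, n_tokens: float) -> float:
        """Back-of-the-envelope training compute for a dense transformer."""
        return 6.0 * n_parameters * n_tokens

    def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
        return estimate_training_flop(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

    # A hypothetical 70-billion-parameter model trained on 15 trillion tokens:
    flop = estimate_training_flop(70e9, 15e12)                   # ~6.3e24 FLOP
    print(f"{flop:.2e}", presumed_systemic_risk(70e9, 15e12))    # 6.30e+24 False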

The European AI Office published a Code of Practice for general-purpose AI in July 2025 providing guidance on implementing these obligations. Adherence to the Code is voluntary, but it serves as the primary compliance pathway until harmonized European standards are adopted, expected by August 2027.

The Brussels Effect in Action

The AI Act’s extraterritorial reach extends to any AI system that produces outputs used in the EU, regardless of where the system is developed or deployed. This provision—combined with the Act’s substantial penalties—is driving what analysts call the “Brussels Effect”: multinational companies adopting EU standards globally rather than maintaining jurisdiction-specific variants.

“We’re seeing the same dynamic that played out with GDPR,” explains Dr. Cédric Burton, data privacy partner at Wilson Sonsini. “Companies find it more efficient to build to the EU’s standards once rather than maintain multiple compliance frameworks. The AI Act is becoming the de facto international baseline.”

However, this convergence faces resistance. In December 2025, the Trump administration’s executive order seeking to limit state AI regulation explicitly framed EU-style comprehensive regulation as a threat to American competitiveness. This transatlantic tension creates difficult choices for multinational firms caught between divergent regulatory philosophies.

Member State Implementation: The Enforcement Patchwork

While the AI Act provides a unified legal framework, enforcement remains decentralized through national competent authorities. Each EU member state was required to designate at least one market surveillance authority and one notifying authority by August 2, 2025.

Implementation varies significantly across member states. Some countries, like France and Germany, have established centralized AI authorities. Others distribute responsibilities across existing sectoral regulators—health authorities overseeing medical AI, financial regulators monitoring AI in banking, and so forth.

This decentralization creates compliance complexity. A healthcare AI system deployed across multiple member states may face investigations by French, German, and Italian health authorities, each potentially interpreting provisions differently. The European AI Office provides coordination, but cannot override national enforcement decisions.

India: The Lightweight Governance Gambit

Guidelines Over Legislation: India’s Strategic Choice

On November 5, 2025, India’s Ministry of Electronics and Information Technology (MeitY) unveiled the India AI Governance Guidelines, choosing a dramatically different path from the EU’s legislative approach. Rather than enacting a standalone AI law, India opted for principles-based guidelines that leverage existing statutes and promote industry self-regulation.

“India has consciously chosen not to lead with regulation but to encourage innovation while studying global approaches,” explained IT Secretary S. Krishnan at the guidelines’ launch. “Wherever possible, we will rely on existing laws and frameworks rather than rush into new legislation.”

This decision reflects India’s dual ambitions: positioning itself as an AI development hub for the Global South while ensuring responsible deployment. With 420,000 AI professionals and projected economic benefits of $500-600 billion by 2035, India views heavy regulation as potentially stifling the growth it seeks to catalyze.

Seven Sutras: Principles Without Penalties

The India AI Governance Guidelines rest on seven foundational principles, termed “Seven Sutras”:

1. People First: Human-centric design, human oversight, and human empowerment must guide AI development. Systems should serve citizens rather than replace human judgment.

2. Innovation Over Restraint: Responsible development should proceed without regulatory throttling. Trust forms the foundation for public adoption; without trust, innovation stagnates.

3. Fairness & Equity: AI systems must promote inclusive development and actively prevent discrimination based on protected characteristics.

4. Accountability: Clear allocation of responsibility throughout the AI value chain, with enforcement mechanisms for violations.

5. Understandable by Design: Transparency and explainability requirements enable users and regulators to understand how systems function and reach decisions.

6. Safety, Resilience & Sustainability: AI systems must be robust, secure, and environmentally responsible throughout their lifecycle.

7. Protect Data: Data governance frameworks ensuring privacy, security, and sovereignty—particularly critical given India’s recent Digital Personal Data Protection Act.

Crucially, these principles function as normative standards rather than legally binding mandates. Companies must adhere to them when seeking government funding or integration with public service platforms under the IndiaAI Mission, but violation does not trigger automatic penalties as in the EU.

Sectoral Regulation: The Distributed Governance Model

Instead of creating a central AI regulatory authority, India relies on sectoral regulators to manage application-specific risks. The Reserve Bank of India oversees AI in financial services, the Securities and Exchange Board regulates AI in capital markets, and the Ministry of Health governs medical AI applications.

This distributed model means AI developers face dual-layered obligations. MeitY provides national philosophical direction through the Seven Sutras, while binding compliance requirements come from sector-specific regulators. A healthcare AI company must satisfy both the general governance guidelines and the medical device regulatory framework.

The approach mirrors India’s successful Digital Public Infrastructure (DPI) model exemplified by Aadhaar, UPI, and DigiLocker: open, interoperable systems built on common standards but implemented through sector-specific mechanisms.

Mandatory AI Labeling: The Content Transparency Push

While avoiding comprehensive AI legislation, India has moved aggressively on AI-generated content transparency. Draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, released in October 2025, establish some of the world’s strictest AI labeling requirements.

The proposed rules mandate:

10% Visibility Standard: AI-generated visuals must display labels covering at least 10 percent of the screen. Synthetic audio must carry labels during the first 10 percent of playback. This measurable standard represents one of the world’s first quantified requirements for AI content marking.

Permanent Metadata: All synthetically generated content must carry permanent unique metadata or identifiers that cannot be modified, suppressed, or removed by users.

Platform Responsibilities: Significant social media intermediaries (platforms with over 5 million users) must obtain explicit declarations from uploaders confirming whether content is AI-generated. They must deploy “reasonable and proportionate” technical measures to verify these declarations.

Government Accountability: Only senior government officers at joint secretary level and above can issue content removal directions to platforms—a provision aimed at preventing arbitrary censorship.

These requirements position India alongside the EU and China in mandating visible AI markers, addressing concerns about deepfakes and misinformation that have plagued Indian elections and public discourse.
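
To make the draft thresholds concrete, here is a minimal sketch in Python of how a platform might check content against the 10 percent rules described above. The frame dimensions and label geometry are invented for illustration; the draft rules leave placement, contrast, and verification details open.

    # Illustrative check against the draft 10 percent thresholds described above.
    # Frame and label dimensions are hypothetical examples only.

    def visual_label_compliant(frame_w: int, frame_h: int,
                               label_w: int, label_h: int) -> bool:
        """Visible label must cover at least 10% of the displayed area."""
        return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

    def audio_label_compliant(total_seconds: float, label_seconds: float) -> bool:
        """Audio disclosure must run during the first 10% of playback."""
        return label_seconds >= 0.10 * total_seconds

    # A 1920x1080 frame needs a label of at least 207,360 square pixels,
    # for example a full-width banner 108 pixels tall; a 60-second clip
    # needs a disclosure covering its first 6 seconds.
    print(visual_label_compliant(1920, 1080, 1920, 108))   # True
    print(audio_label_compliant(60.0, 6.0))                # True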

Data Sovereignty and the IndiaAI Mission

Data sovereignty forms a critical pillar of India’s AI strategy. The government has invested $1.25 billion in the IndiaAI Mission, which includes:

AIKosh: A repository hosting 1,500 datasets and 217 AI models available for public use. As of August 2025, the platform facilitates access to training data while maintaining India’s control over sensitive information.

Sovereign Foundation Models: Government support for four startups developing indigenous foundation models. This initiative aims to reduce dependence on Western AI providers while ensuring models align with Indian values and priorities.

Subsidized GPU Access: Over 38,000 GPUs made available at subsidized rates to democratize access to compute resources. This infrastructure investment targets MSMEs (Micro, Small, and Medium Enterprises) often priced out of AI development.

Digital Public Infrastructure: Building on India’s DPI success, the mission promotes interoperable AI systems that can be deployed across government services and private sector applications.

The data sovereignty push reflects India’s concern that training data largely controlled by Western companies inadequately represents Indian languages, cultural contexts, and social norms. Indigenous models, the reasoning goes, will better serve India’s population of 1.4 billion.

Copyright and Training Data: The Unresolved Question

One area where India’s lightweight approach faces tension is copyright and AI training data. In April 2025, the Department for Promotion of Industry and Internal Trade established a committee to examine whether using copyrighted works for AI training constitutes fair dealing under Section 52 of the Indian Copyright Act.

The committee notes that current fair dealing exceptions for research are limited to non-commercial use and may not extend to many forms of AI training. With the EU, Japan, Singapore, and the UK adopting Text and Data Mining (TDM) exceptions, India faces pressure to clarify its position.

This uncertainty creates risk for AI developers operating in India. Without clear safe harbor for training on copyrighted material, companies may face infringement claims after investing substantially in model development. The committee’s recommendations, expected in early 2026, will significantly impact India’s AI development trajectory.

Global South Leadership Ambitions

India has positioned itself as a leader in AI governance for the Global South. The November 2025 guidelines explicitly state that “India’s governance model should serve as a reference for developing economies seeking agile and context-sensitive AI regulation.”

At the Paris AI Action Summit in February 2025, co-chaired by France and India, Indian officials promoted their lightweight model as superior to the EU’s prescriptive approach for countries lacking robust regulatory infrastructure. The argument: developing nations need frameworks that encourage AI adoption rather than create compliance burdens that entrench existing technology divides.

Whether this model proves exportable remains uncertain. India’s vast technical workforce, established digital infrastructure, and relatively sophisticated regulatory ecosystem may not exist in countries India seeks to influence. The success of India’s approach as a Global South template depends on whether other nations can replicate these enabling conditions.

Canada: The Death of AIDA and Regulatory Vacuum

A Three-Year Effort Collapses

On January 6, 2025, Canada’s first comprehensive attempt at AI regulation collapsed when Prime Minister Justin Trudeau’s resignation and the prorogation of Parliament caused Bill C-27—which included the Artificial Intelligence and Data Act (AIDA)—to die on the order paper. The bill had languished in Parliament since its introduction in June 2022, facing intensifying criticism despite government attempts at reform.

AIDA’s collapse represents more than legislative failure; it symbolizes the difficulty democratic governments face balancing innovation advocacy with consumer protection in fast-moving technology sectors. The bill’s demise leaves Canada without federal AI-specific legislation despite the country’s standing as an AI development hub, home to major research centers and to researchers who went on to co-found companies such as OpenAI.

What AIDA Proposed: The Risk-Based Framework

AIDA aimed to regulate international and interprovincial trade in AI systems through a risk-based approach focusing on “high-impact” AI applications. The framework would have required:

Risk Assessment and Mitigation: Developers and deployers of high-impact systems conducting iterative assessments identifying potential harms to health, safety, or rights. Mitigation measures would need implementation before deployment.

Transparency and Explainability: Affected individuals accessing sufficient information about AI system usage to understand and challenge decisions impacting them.

Human Oversight: Meaningful human involvement in consequential decisions, with humans able to override AI recommendations.

Accountability Frameworks: Organizations establishing governance structures documenting roles, responsibilities, and accountability mechanisms for AI systems.

Record Keeping: Detailed documentation of AI system development, testing, deployment, and ongoing monitoring to support regulatory oversight.

Biased Output Prohibition: Restrictions on AI systems producing discriminatory outcomes based on protected characteristics.

The Minister of Innovation, Science and Industry would have administered enforcement, supported by a newly created AI and Data Commissioner. Violations would trigger administrative penalties and potentially criminal offenses for serious infractions.

Why AIDA Failed: The Stakeholder Revolt

AIDA faced sharp criticism from diverse constituencies, creating an unusual coalition opposed to its passage:

Labor Organizations complained the bill ignored labor rights implications. The Canadian Labour Congress demanded it be “reconceived from a human, labor, and privacy rights-based perspective,” arguing it failed to protect workers from AI-driven displacement or workplace surveillance.

Creative Industries protested inadequate protection for copyrighted works used in AI training. The Directors Guild of Canada, Writers Guild of Canada, and Music Canada argued the bill failed to address how AI systems would compensate creators whose works trained models.

Civil Society Groups criticized the exclusionary consultation process. Over 130 witnesses testified before the House of Commons Standing Committee on Industry and Technology, many raising concerns about insufficient stakeholder inclusion and inadequate protection for vulnerable groups.

Technology Companies worried about vague requirements. The definition of “high-impact” systems remained unclear, creating uncertainty about which applications would face stringent regulation.

Legal Experts questioned the constitutional validity of certain provisions and their relationship to provincial jurisdiction over areas like labor, health, and education.

This broad-based opposition reflected fundamental disagreements about AI governance philosophy. Labor groups wanted stronger protections; industry wanted clearer safe harbors; and civil rights organizations questioned whether the framework adequately prevented algorithmic discrimination.

The November 2023 Amendments: Too Little, Too Late

In November 2023, the government proposed significant amendments attempting to address criticism. Changes included:

  • Clarifying the definition of AI systems and high-impact categorization
  • Strengthening provisions around biased outputs and discrimination
  • Enhancing transparency requirements for affected individuals
  • Expanding the AI and Data Commissioner’s powers
  • Aligning terminology with international frameworks like the EU AI Act and OECD AI Principles

Despite these revisions, the amendments failed to satisfy critics. The fundamental structure remained unchanged, and many stakeholders felt their concerns had been cosmetically addressed rather than substantively resolved. By the time Trudeau prorogued Parliament in January 2025, AIDA had been before Parliament for more than two years with no clear path to passage.

What Operates Instead: The Interim Patchwork

Without AIDA, Canada’s AI governance landscape consists of fragmented federal and provincial measures:

Treasury Board Directive on Automated Decision-Making: Governs federal government use of automated decision systems, requiring transparency, accountability, and human oversight. While the directive is not legislation, federal departments face budget restrictions, audits, and oversight for non-compliance.

Voluntary Code of Conduct on Advanced Generative AI: Released September 2023, this non-binding code encourages responsible development and management of generative AI systems. Signatories commit to accountability, safety, fairness, transparency, and robustness principles.

Sectoral Regulation: Existing regulators address AI within their domains. The Office of the Superintendent of Financial Institutions released Draft Guideline E-23 on Model Risk Management for federally regulated financial institutions. Law societies in Alberta, British Columbia, and Ontario issued guidance for lawyers using generative AI.

Provincial Initiatives: Ontario advanced Bill 194, proposing provincial-level AI regulation. Quebec’s Innovation Council recommended provincial AI legislation. These efforts create a patchwork where rules vary by province and sector.

Privacy Law Application: The Personal Information Protection and Electronic Documents Act (PIPEDA) applies to AI systems processing personal data, though it predates modern AI and wasn’t designed for algorithmic decision-making challenges.

This patchwork provides some governance but lacks cohesion, creating uncertainty for companies operating across provinces and sectors. A healthcare AI company might face different requirements in Ontario, Quebec, and British Columbia, with no federal framework ensuring baseline consistency.

Post-AIDA: What’s Next for Canadian AI Regulation

The April 28, 2025 federal election returned the Liberal Party, raising questions about whether AI legislation will be revived. Several scenarios appear possible:

AIDA Resurrection: The government could reintroduce substantially similar legislation, incorporating lessons from the failed attempt. However, the stakeholder opposition that doomed the original bill remains unresolved.

Complete Overhaul: Canada might start from scratch, conducting broader consultations and addressing fundamental critiques. This approach risks further delay but might produce legislation with broader support.

Provincial Leadership: Without federal action, provinces might advance their own AI laws, creating the kind of fragmentation AIDA aimed to prevent. This outcome seems increasingly likely given Ontario’s and Quebec’s legislative initiatives.

Sectoral Approach: Canada might abandon comprehensive AI legislation in favor of sector-specific rules, similar to the U.S. approach. Financial services, healthcare, and employment could each develop tailored frameworks.

International Alignment: Canada might wait for international frameworks to mature, then adopt harmonized standards rather than pioneering unique requirements. This approach would prioritize interoperability over leadership.

For multinational companies, Canada’s regulatory vacuum creates both opportunity and risk. The absence of AI-specific legislation means less immediate compliance burden, but also less certainty about future requirements. Companies investing in Canadian AI development face the possibility of retroactive compliance obligations if and when legislation passes.

China: Open Source as Strategic Weapon

DeepSeek’s Earthquake: The January 2025 Disruption

On January 20, 2025, Chinese AI startup DeepSeek released its R1 model, sending shockwaves through the global AI industry. The company claimed to have developed a model performing comparably to OpenAI’s offerings at a fraction of the cost, using significantly less computing power, and operating under U.S. export restrictions limiting access to advanced chips.

DeepSeek’s achievement challenged fundamental assumptions about AI development. The prevailing wisdom held that frontier AI required massive computing clusters using the latest Nvidia GPUs, multi-billion-dollar investments, and teams of hundreds of researchers. DeepSeek suggested algorithmic efficiency could substitute for raw computational power—a paradigm shift with profound implications.

More strategically, DeepSeek released R1 under an MIT License, one of the most permissive open-source licenses available. Unlike Meta’s Llama or Google’s Gemma—marketed as open-source but carrying restrictive licenses—DeepSeek provided unrestricted use, modification, and distribution, including for commercial purposes.

This open-source strategy represented more than generosity; it positioned China as the champion of accessible AI in contrast to Western closed-source models. As Kai-Fu Lee, Chinese AI entrepreneur and scholar, noted: “The biggest revelation from DeepSeek is that open-source has won.”

China’s AI Regulatory Framework: Control Through Flexibility

China’s approach to AI governance embodies a paradox: encouraging rapid development while maintaining tight control. The framework consists of several layers:

The 2017 Development Plan: China’s “New Generation Artificial Intelligence Development Plan” established a three-phase timeline: achieving global competitiveness by 2020, making major breakthroughs by 2025, and securing world leadership by 2030. This roadmap positions AI as central to China’s economic and strategic ambitions.

The 2021 Ethical Norms: “Ethical Norms for the New Generation Artificial Intelligence” provides ethical guidelines for individuals, enterprises, and organizations engaged in AI activities. The norms emphasize human-centricity, fairness, privacy protection, and controllability.

The 2023 Interim Measures: “Interim Measures for the Management of Generative Artificial Intelligence Services” specifically target generative AI, requiring registration with authorities, security assessments, and alignment with government-defined ethical principles.

The 2025 Network Data Security Regulations: Implemented January 2025, these rules require companies providing generative AI services to prepare for data breach risks and report incidents within 24 hours, requirements underscored by the major cyberattack against DeepSeek later that month.

Sector-Specific Rules: Regulations cover recommendation algorithms, deepfakes, and other specific AI applications. Rather than comprehensive omnibus legislation, China targets particular use cases based on perceived risks.

This regulatory structure balances promotion and control. China encourages AI development and deployment—particularly when it advances economic goals—while ensuring the technology serves party objectives and doesn’t threaten social stability or political control.

AI Safety Commitments: Industry Self-Regulation with State Backing

In December 2024, DeepSeek joined sixteen other Chinese companies in signing the Artificial Intelligence Safety Commitments, a domestic initiative bearing strong similarities to international industry efforts like the Seoul Commitments from the May 2024 AI Summit.

The Chinese commitments include:

Red-Teaming Exercises: Testing systems to identify severe threats before deployment

Transparency: Providing information about frontier model capabilities and limitations

Security Organization: Building structures to promote frontier system security

Data and Infrastructure Protection: Comprehensive security requirements for data and critical infrastructures

Open-Source Safety Measures: Specific provisions for appropriate safety measures in open-source initiatives—acknowledging that Chinese companies like DeepSeek, Alibaba, Tencent, and Zhipu AI compete primarily through open-source models

The effort was spearheaded by China’s Artificial Intelligence Industry Alliance (AIIA), a prominent industry consortium guided by the Ministry of Industry and Information Technology (MIIT). Historically, AIIA involvement has presaged future Chinese regulation; in 2019-2020, AIIA recommendations formed the foundation for regulations on recommendation algorithms and deepfakes.

This pattern suggests the AI Safety Commitments may evolve into binding requirements. For now, they represent industry best practices with implicit government backing—a characteristically Chinese approach where voluntary guidelines become effectively mandatory through state pressure and potential exclusion from government contracts or funding.

State Control Intensifies: The DeepSeek Treatment

DeepSeek’s success triggered intensified government oversight. Zhejiang provincial authorities now reportedly screen investors before meetings with company leadership and have instructed headhunters to cease talent recruitment targeting the firm. Some DeepSeek employees have reportedly surrendered passports due to access to information potentially classified as state secrets.

These restrictions extend beyond DeepSeek. China’s leading AI researchers face directives to avoid U.S. travel to prevent inadvertent sharing of strategically sensitive information. The measures reflect the government’s increasingly cautious approach as AI capabilities improve and geopolitical tensions intensify.

This creates tension between China’s open-source AI strategy and its security imperatives. Open-source models by definition share code, architecture, and training approaches—precisely the information the government now seeks to protect as state secrets. How China resolves this tension will significantly impact its AI development trajectory.

The Open-Source Ecosystem: China’s Strategic Bet

China has produced 17 percent of global open-source software—the second-most worldwide—with over 30 million open-source projects spanning chips to applications. This ecosystem provides the foundation for China’s AI ambitions.

Beyond DeepSeek, major Chinese companies have embraced open-source AI:

Zhipu AI declared 2025 “the year of open source” and released multiple open-source models

Alibaba’s Qwen 2.5 model ranks among the world’s best open-weight models according to Anthropic’s Head of Policy Jack Clark

Tencent’s Hunyuan model is “by some measures world class” in the open-weight category

Baidu, Moonshot AI, and others have released competitive open-source models challenging Western closed-source offerings

This open-source proliferation serves multiple strategic purposes:

Cost Competition: Free, high-quality models undercut Western companies’ pricing power. As Ray Wang of Constellation Research notes: “With DeepSeek free, it’s impossible for any other Chinese competitors to charge for the same thing.”

Global Influence: Open-source models enable developing countries to deploy advanced AI without depending on Western technology, positioning China as champion of the Global South.

Ecosystem Development: Open-source fosters a developer ecosystem building on Chinese models, creating network effects and de facto standards.

Regulatory Arbitrage: Open-source models distributed globally complicate efforts to restrict Chinese AI through export controls or sanctions.

Technical Learning: Widespread open-source deployment generates feedback improving models faster than closed development cycles.

The strategy carries risks. Open-source models can be modified to remove safety guardrails, as evidenced by DeepSeek’s poor performance in security assessments—the model failed to block harmful prompts that OpenAI’s GPT-4o blocked 86 percent of the time. However, China appears willing to accept these risks to achieve strategic objectives.

Data Sovereignty and Localization

China’s AI governance strongly emphasizes data sovereignty. The Cybersecurity Law, Data Security Law, and Personal Information Protection Law create a comprehensive framework requiring:

Data Localization: AI training data on Chinese citizens generally must remain within China, with strict approval processes for cross-border transfers

Security Assessments: Companies transferring data abroad must conduct security assessments and obtain approval from authorities

Network Security Reviews: Critical information infrastructure operators deploying AI systems face network security reviews

State Access: Government authorities maintain broad powers to access data for national security purposes

These requirements create challenges for multinational companies. A global AI model trained on data from multiple countries may violate Chinese data localization rules if Chinese user data is included. Conversely, a China-specific model trained only on domestic data may not generalize well to other markets.

The DeepSeek Paradox: Innovation Under Authoritarianism

DeepSeek’s success raises fundamental questions about innovation under authoritarian governance. Can open-source collaboration thrive in a tightly controlled system? Can breakthrough AI emerge from an environment that restricts researcher travel, controls information flows, and demands political alignment?

Initial evidence suggests these tensions create real constraints. While DeepSeek achieved technical breakthroughs, the company faces challenges in safety, security, and trust that may limit adoption despite technical capabilities. Western enterprises, governments, and security-conscious organizations remain wary of Chinese AI systems potentially containing backdoors or surveillance capabilities.

Moreover, China’s AI strategy faces economic headwinds. Venture capital funding for Chinese AI startups declined nearly 50 percent year-over-year in Q1 2025, reflecting investor wariness amid sluggish growth and regulatory uncertainty. While the government commits massive state funding, the ecosystem’s long-term dynamism may depend on private capital that remains skittish.

The question becomes whether China’s open-source strategy represents sustainable competitive advantage or a temporary asymmetric response to Western export controls. As China’s AI capabilities improve and security concerns intensify, the government may restrict open-source releases to prevent technology transfer—undermining the very strategy that enabled DeepSeek’s global impact.

The Compliance Nightmare: Navigating Divergent Frameworks

The Cost of Fragmentation

For multinational AI companies, the divergent regulatory landscape creates compliance challenges unprecedented in technology sector history. The EU’s comprehensive requirements, India’s sectoral approach, Canada’s regulatory vacuum, and China’s state-directed framework represent not just different rules but fundamentally incompatible governance philosophies.

Consider a healthcare AI system deployed globally:

In the European Union, the system likely qualifies as high-risk under the AI Act. The developer must:

  • Conduct and document conformity assessments
  • Implement robust data governance ensuring training data quality
  • Establish human oversight mechanisms
  • Maintain detailed technical documentation
  • Perform ongoing monitoring and report adverse events
  • Ensure national competent authorities can audit the system

Total estimated compliance cost for initial certification: €500,000-€2,000,000. Ongoing monitoring and reporting: €200,000-€500,000 annually.

In India, the same system faces:

  • General compliance with AI Governance Guidelines’ Seven Sutras
  • Sector-specific medical device regulations
  • Digital Personal Data Protection Act requirements
  • AI labeling requirements if the system generates patient communications
  • Data localization for Indian patient data

Compliance strategy depends on whether the company seeks government healthcare contracts (requiring strict guideline adherence) or operates purely in private sector (allowing more flexibility). Estimated compliance cost: $100,000-$400,000 initially, with ongoing costs varying by deployment model.

In Canada, absent AIDA, the system must satisfy:

  • Federal privacy law (PIPEDA) for personal health information
  • Provincial healthcare regulations varying by province
  • Professional medical association guidance
  • Treasury Board directives if used by federal government
  • Potential future compliance obligations if legislation passes

The regulatory vacuum creates uncertainty. Companies must decide whether to build to anticipated requirements or adopt minimal compliance now, risking costly retrofits later. Estimated current compliance cost: $150,000-$300,000, with significant contingency reserve for potential legislation.

In China, the system faces:

  • Registration with health authorities
  • Data localization requirements for patient information
  • Network security reviews as critical information infrastructure
  • Alignment with ethical principles defined by government
  • Reporting requirements for data breaches within 24 hours

Government approval processes can take 6-18 months. Estimated compliance cost: $200,000-$600,000 plus substantial time delays. Companies often develop China-specific variants rather than adapting global products.

Total multinational compliance cost for a single healthcare AI system: $950,000-$3,300,000 initially, with ongoing costs of $400,000-$700,000 annually.

These figures don’t include opportunity costs from deployment delays, market access restrictions, or competitive disadvantages from slower iteration cycles.
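
The combined figure above is simply the sum of the per-jurisdiction initial-cost ranges, treating the EU’s euro estimates as roughly dollar-equivalent. A short sketch of that arithmetic, with the currency approximation stated as an assumption:

    # Summing the per-jurisdiction initial-cost estimates quoted above.
    # Assumption: the EU euro figures are treated as roughly dollar-equivalent,
    # which is how the combined total in this article appears to be derived.

    initial_cost_ranges = {          # (low, high), in USD
        "EU":     (500_000, 2_000_000),
        "India":  (100_000,   400_000),
        "Canada": (150_000,   300_000),
        "China":  (200_000,   600_000),
    }

    low = sum(lo for lo, _ in initial_cost_ranges.values())
    high = sum(hi for _, hi in initial_cost_ranges.values())
    print(f"Initial compliance, all four jurisdictions: ${low:,} - ${high:,}")
    # Initial compliance, all four jurisdictions: $950,000 - $3,300,000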

The Strategic Dilemma: Build Once or Build Many?

Faced with divergent requirements, companies adopt three primary strategies:

Strategy 1: “Brussels Effect” Baseline

Companies build to the EU’s stringent requirements, then deploy globally. This “highest common denominator” approach:

Advantages: Single compliance framework, simplified development, easier auditing, positions company as responsible AI leader

Disadvantages: Over-compliance in less restrictive jurisdictions increases costs, may sacrifice features viable elsewhere, slows innovation velocity compared to competitors adopting minimal compliance

Strategy 2: Regional Variants

Companies develop jurisdiction-specific versions optimized for each market:

Advantages: Minimizes compliance costs per jurisdiction, enables features prohibited elsewhere, maintains competitive agility

Disadvantages: Multiplies development and maintenance costs, complicates testing and validation, creates version control challenges, risks inconsistent performance across markets

Strategy 3: Modular Compliance

Companies build core functionality meeting universal requirements, with jurisdiction-specific compliance modules:

Advantages: Balances efficiency and localization, enables rapid market entry, facilitates updates when regulations change

Disadvantages: Requires sophisticated architecture, still incurs significant compliance overhead, may create integration challenges

No strategy proves universally superior. The optimal approach depends on company size, product complexity, target markets, risk tolerance, and competitive positioning.

Data Governance: The Universal Challenge

Data governance represents the most complex cross-jurisdictional challenge. Every major framework includes data requirements, but they differ fundamentally:

GDPR (EU): Requires purpose limitation, data minimization, consent for processing, right to erasure, automated decision-making restrictions, and data protection impact assessments

DPDPA (India): Mandates consent for data processing, purpose limitation, data localization where required, breach notification within 72 hours

PIPL (China): Requires explicit consent, data localization, security assessments for cross-border transfers, government access provisions

Provincial Privacy Laws (Canada): Varying requirements across provinces, with Quebec’s Law 25 among the strictest

Training a global AI model requires reconciling these incompatible requirements. A dataset including EU citizen data can’t be freely transferred to China without violating GDPR. Chinese citizen data can’t leave China without security assessments. Indian data requires localization unless exemptions apply.

Companies respond through several approaches:

Federated Learning: Training models on distributed datasets without centralizing data. This technique enables compliance with localization requirements while allowing global model development. However, federated learning introduces technical challenges around model convergence, data heterogeneity, and computational efficiency. A minimal sketch of the idea appears below.

Synthetic Data: Generating artificial training data mimicking real data’s statistical properties without containing personal information. This approach sidesteps many privacy regulations but raises questions about model performance and potential for memorization of underlying real data.

Regional Models: Training separate models for each major jurisdiction using only data from that region. This strategy ensures compliance but fragments model development and may degrade performance in data-scarce regions.

Consent Management: Obtaining explicit consent for cross-border data use. While legally sound, consent requirements often conflict with machine learning needs for continuous training and improvement.
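
As a rough illustration of the federated learning approach described above, the sketch below runs federated averaging (FedAvg) over three hypothetical regional datasets: each region trains locally and only model weights are pooled. Production systems add secure aggregation, differential privacy, and careful handling of non-IID data, none of which is shown here.

    # A minimal federated-averaging (FedAvg) sketch: each region trains locally on
    # data that never leaves its jurisdiction, and only model weights are pooled.
    # Purely illustrative; the regional datasets are synthetic.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """A few steps of local least-squares gradient descent on one region's data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def federated_round(global_w, regional_data):
        """One round: train locally in each region, then average weights by sample count."""
        updates, sizes = [], []
        for X, y in regional_data.values():
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        sizes = np.array(sizes, dtype=float)
        return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    regions = {}
    for name in ("eu", "india", "china"):   # hypothetical per-jurisdiction datasets
        X = rng.normal(size=(200, 2))
        regions[name] = (X, X @ true_w + rng.normal(scale=0.1, size=200))

    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, regions)
    print(w)   # converges toward [2.0, -1.0] without centralizing any raw data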

None of these approaches fully resolves the tension between AI’s hunger for data and privacy frameworks’ restrictions. As regulators scrutinize AI training practices, particularly around copyrighted content and personal information, data governance challenges will intensify.

The Auditing and Monitoring Gap

Effective AI governance requires ongoing monitoring—not just initial compliance certification. Yet auditing and monitoring requirements diverge significantly:

EU AI Act: Mandates ongoing conformity assessment, with some high-risk systems requiring third-party auditing. Providers must maintain quality management systems and report serious incidents.

India: Encourages internal or third-party impact assessments for high-risk systems, but doesn’t mandate continuous monitoring with specific reporting requirements.

China: Requires regular security assessments and immediate breach reporting, with authorities maintaining broad audit rights.

Canada: Without AIDA, monitoring requirements depend on sector-specific regulations and voluntary commitments.

This creates operational complexity. A company operating globally must maintain multiple monitoring systems satisfying different jurisdictional requirements, track different incident thresholds, report to different authorities, and respond to audit requests with varying legal foundations.

The monitoring burden falls disproportionately on startups and mid-sized companies lacking dedicated compliance teams. Large multinational corporations can staff global compliance functions; smaller companies must choose between expensive external consultants or accepting compliance risk.

Enforcement Uncertainty: Who’s Watching?

Perhaps the greatest challenge is enforcement uncertainty. With new regulatory frameworks just beginning implementation, precedents remain sparse.

EU Member State Variance: National authorities interpret AI Act provisions differently. What qualifies as “high-risk” in Germany may not in Spain. French authorities may scrutinize applications differently than Italian counterparts. Until the Court of Justice of the European Union provides definitive interpretations, this uncertainty persists.

India’s Light Touch: India’s guideline-based approach intentionally avoids strict enforcement. But sector-specific regulators may fill the void with their own interpretations. How aggressively will financial regulators enforce AI requirements? Will health authorities adopt strict standards? Uncertainty abounds.

China’s Political Enforcement: Chinese enforcement depends more on political priorities than legal standards. A company in good standing may face sudden scrutiny if geopolitical tensions escalate. Conversely, politically favored companies may escape penalties for violations.

Canada’s Future Reckoning: Companies operating in Canada face potential retroactive compliance obligations. If legislation passes establishing requirements for existing AI systems, companies may face costly retrofits or market exit decisions.

This enforcement uncertainty complicates risk assessment. Traditional compliance calculus evaluates probability of detection, likelihood of enforcement, and magnitude of penalties. With AI regulation, all three factors remain highly uncertain across jurisdictions.

Looking Forward: Convergence or Fragmentation?

The Case for Convergence

Some analysts predict regulatory convergence as frameworks mature. The argument:

Economic Pressure: Compliance costs drive companies toward standardization. As firms demand consistent requirements, governments face pressure to harmonize to maintain competitiveness.

Technical Necessity: AI systems increasingly deploy globally through cloud infrastructure. Maintaining jurisdiction-specific variants becomes technically infeasible as complexity increases.

International Coordination: Forums like the Paris AI Action Summit, Bletchley Declaration, and Council of Europe AI Treaty establish multilateral governance principles. While current agreements remain high-level, they create foundation for detailed harmonization.

Brussels Effect: EU standards become de facto international baseline as companies find it more efficient to build to EU requirements than maintain multiple variants.

Best Practices Emergence: As industry gains experience with different frameworks, best practices emerge that satisfy multiple jurisdictions simultaneously. These practices become new baseline expectations.

Evidence supports this view. The AI Safety Commitments signed by Chinese companies closely mirror Western industry commitments. India’s Seven Sutras align substantially with OECD AI Principles. Even the U.S. executive order attacking state AI laws acknowledges alignment with international norms where beneficial.

The Case for Fragmentation

Countervailing forces suggest persistent divergence:

Geopolitical Competition: U.S.-China AI rivalry frames regulation as national security issue. Neither country will accept frameworks they perceive as advantaging competitors.

Regulatory Philosophy Differences: The EU prioritizes human rights and precautionary regulation; the U.S. emphasizes innovation and market solutions; China values political control and state-directed development. These fundamental differences may prove irreconcilable.

Domestic Political Pressures: Each jurisdiction faces distinct political constituencies demanding different regulatory approaches. European citizens prioritize privacy; American tech companies demand light regulation; Chinese authorities require political alignment.

Economic Positioning: Countries use regulation strategically to advantage domestic industries. India’s lightweight approach aims to attract AI development; the EU’s strict standards create compliance barriers favoring established European players.

Technical Divergence: Different jurisdictions focus on different AI applications and risks. The EU emphasizes workplace and social applications; China prioritizes content moderation and social stability; India focuses on financial inclusion and public services. These varied priorities drive different requirements.

Evidence for fragmentation includes the Trump administration’s aggressive opposition to EU-style regulation, China’s intensifying control over AI development despite open-source rhetoric, and the collapse of AIDA showing Canada can’t even achieve domestic consensus, let alone international harmonization.

The Most Likely Outcome: Fragmented Blocs

The realistic future combines elements of both scenarios: convergence within geopolitical blocs, fragmentation across them.

The Western Bloc: EU, UK, Canada, Australia, and possibly Japan converge around frameworks emphasizing human rights, transparency, and accountability. The EU AI Act serves as blueprint, with variations reflecting each country’s specific concerns.

The Chinese Sphere: China’s model—combining state direction, sectoral regulation, and open-source strategy—extends through Belt and Road countries and nations seeking alternatives to Western frameworks. Russia, parts of Southeast Asia, and some Latin American countries adopt Chinese-influenced approaches.

The Global South Experiments: India, Brazil, South Africa, and other large developing economies chart middle paths—lighter than EU regulation but more structured than Chinese control. These countries prioritize economic development and technology access over strict rights protections or political control.

The U.S. Exception: America remains an outlier with sector-specific federal regulation, fragmented state laws, and ongoing political battles over whether comprehensive AI legislation is needed or desirable.

Within each bloc, harmonization proves feasible. Between blocs, fundamental differences persist. Companies operating globally must maintain compliance strategies for each bloc, accepting fragmentation as structural feature rather than transitional phase.

Implications for Strategy

For multinational companies, this predicted fragmented-bloc future drives several strategic implications:

1. Regional Specialization: Rather than developing truly global products, companies increasingly focus on specific geopolitical blocs where they maintain competitive advantage and compliance expertise.

2. Partnership Models: Western companies partner with Chinese firms to access Chinese markets without bearing full compliance burden. Chinese companies partner with Western firms to gain trust in Western markets.

3. Open Source as Shield: Companies release foundation models as open source, allowing regional partners to handle jurisdiction-specific compliance while maintaining technological influence.

4. Regulatory Arbitrage: Companies locate development, training, and deployment in jurisdictions optimizing their particular cost-benefit calculus, rather than pursuing global integration.

5. Compliance as Competitive Advantage: Early investment in robust compliance infrastructure becomes moat against competitors lacking resources to navigate fragmented landscape.

Practical Guidance for Multinational Companies

Step 1: Map Your Risk Profile

Before developing compliance strategy, companies must understand their risk exposure across jurisdictions. This requires answering:

Where do you deploy?: Which jurisdictions will see your AI systems? Consider not just initial launch markets but potential expansion.

What do you deploy?: How do regulators in each jurisdiction classify your systems? High-risk, limited-risk, or minimal-risk? General-purpose model or specialized application?

Who are you?: Provider, deployer, or both? The distinction determines obligations under frameworks like the EU AI Act.

What data do you use?: Whose personal information trains your models? Where does this data reside? Can it lawfully cross borders?

What decisions do you make?: Do your systems make automated decisions affecting legal rights, employment, access to services, or fundamental rights? These use cases typically trigger highest scrutiny.

Step 2: Adopt Regulatory Stack Approach

Given fragmentation, the “regulatory stack” approach offers optimal balance between compliance and efficiency:

Base Layer: Build core capabilities meeting strictest applicable requirements (typically EU AI Act). This ensures foundation satisfies all major frameworks.

Compliance Modules: Develop jurisdiction-specific components that can be configured for different markets. These modules handle local requirements without requiring core re-architecture.

Testing and Validation: Maintain testing suites validating compliance across jurisdictions. Automated testing catches regression when updating systems.

Documentation: Establish robust documentation practices from inception. Development decisions, data sources, testing methodologies, performance metrics: document everything. Thorough documentation satisfies auditors across regions.

Monitoring: Implement comprehensive monitoring detecting issues before they become regulatory incidents. Real-time monitoring supports EU ongoing conformity requirements and China breach reporting obligations simultaneously.
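
A minimal sketch of what the regulatory stack might look like in practice: a shared base layer of checks plus jurisdiction-specific modules selected per deployment. The module names, checks, and groupings below are invented for illustration, not a statement of what any framework actually requires.

    # Sketch of the regulatory-stack idea: a shared base layer of checks plus
    # jurisdiction-specific modules selected per deployment. All module names,
    # checks, and groupings here are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class ComplianceModule:
        jurisdiction: str
        checks: list = field(default_factory=list)

    BASE_LAYER = ComplianceModule("base", [
        "risk classification recorded",
        "technical documentation maintained",
        "human oversight mechanism defined",
    ])

    MODULES = {
        "eu":    ComplianceModule("eu",    ["conformity assessment", "serious-incident reporting"]),
        "india": ComplianceModule("india", ["AI-content labeling", "sectoral regulator mapping"]),
        "china": ComplianceModule("china", ["algorithm registration", "24-hour breach reporting"]),
    }

    def compliance_checklist(markets):
        """Base-layer checks plus the module checks for every market being entered."""
        checklist = list(BASE_LAYER.checks)
        for market in markets:
            checklist += MODULES[market].checks
        return checklist

    print(compliance_checklist(["eu", "india"]))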

Step 3: Build Data Governance First

Data governance isn’t compliance overhead—it’s fundamental architecture. Companies getting data governance right from the start avoid costly retrofits later.

Data Inventory: Catalog what personal information you collect, from whom, where it resides, how long you retain it, and who accesses it.

Purpose Specification: Document specific purposes for data collection and processing. Avoid catch-all provisions; regulators demand granular justification.

Minimization: Collect only data necessary for specified purposes. This simultaneously satisfies GDPR’s minimization requirement and reduces exposure under other frameworks.

Localization Strategy: Determine whether federated learning, regional models, or other techniques can satisfy localization requirements while enabling desired functionality.

Consent Management: Implement systems obtaining, tracking, and respecting consent across jurisdictions with different requirements.

Breach Response: Establish processes detecting, containing, and reporting data breaches within shortest required timeframes (currently 24 hours in China, 72 hours under GDPR).
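
As one small example of why breach response planning spans jurisdictions, the sketch below computes the binding (shortest) notification deadline from the timeframes cited in this article, 24 hours in China and 72 hours under the GDPR and India’s DPDPA. The mapping is illustrative only; real obligations depend on the data involved, the regulator, and the severity of the incident.

    # Sketch: compute the binding (shortest) breach-notification deadline across
    # the jurisdictions a dataset touches, using the timeframes cited in this
    # article. Keys and deadlines are illustrative, not legal advice.
    from datetime import datetime, timedelta

    NOTIFICATION_DEADLINES_HOURS = {
        "china": 24,   # 2025 network data security rules, as discussed above
        "eu":    72,   # GDPR personal-data breach notification
        "india": 72,   # DPDPA breach reporting
    }

    def binding_deadline(detected_at, jurisdictions):
        """Earliest notification deadline across all affected jurisdictions."""
        hours = min(NOTIFICATION_DEADLINES_HOURS[j] for j in jurisdictions)
        return detected_at + timedelta(hours=hours)

    incident = datetime(2025, 9, 1, 8, 30)
    print(binding_deadline(incident, ["eu", "china"]))   # 2025-09-02 08:30:00 (24-hour rule governs)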

Step 4: Engage Early and Often

Waiting until regulations finalize before engaging proves too late. Companies should:

Monitor Developments: Track regulatory proposals across target markets. Many frameworks undergo consultation periods providing input opportunities.

Participate in Standards Bodies: Contribute to industry standards development. Standards often become mandatory through regulatory incorporation by reference.

Pilot in Sandboxes: Jurisdictions increasingly offer regulatory sandboxes allowing controlled testing under relaxed requirements. Participation provides early compliance experience and regulatory relationships.

Build Relationships: Establish connections with competent authorities before enforcement actions. Regulators dealing with known entities may exercise discretion unavailable to strangers.

Industry Coordination: Join industry associations coordinating compliance approaches. Collective action reduces costs and influences regulatory development.

Step 5: Make Compliance a Feature

Companies leading on compliance can market it as a competitive advantage. Rather than viewing requirements as costs to be minimized, leaders recognize that compliance investments attract customers concerned about responsible AI.

Transparency Marketing: Prominently advertise compliance with strict standards; a label such as "EU AI Act Certified" becomes a trust signal.

Explainability as UX: Build the explainability that regulations require into the user experience itself. Users appreciate understanding AI decisions regardless of legal requirements.

Fairness Testing: Proactively test for bias and discrimination, and market fairness certifications from credible third parties (a simple screening metric is sketched after this step's items).

Human Oversight by Design: Rather than bolting human review onto automated systems, design interfaces enabling meaningful human involvement. This satisfies regulations while improving quality.

Incident Response: When issues occur, transparent and rapid response builds trust. Companies known for responsible incident handling recover faster than those perceived as evasive.
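For the fairness testing item above, one common screening metric is the disparate impact ratio: the lowest group selection rate divided by the highest. The sketch below illustrates that single check; credible fairness testing requires multiple metrics, domain context, and independent validation, and no particular metric is mandated by the frameworks discussed here.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-outcome rate per group from (group, selected) records."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes: list[tuple[str, bool]]) -> float:
    """Lowest group rate divided by the highest; 1.0 means parity.
    A common screening heuristic flags ratios below 0.8 for closer review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    # Group A is selected at 2/3, group B at 1/3, so the ratio is 0.5.
    print(round(disparate_impact_ratio(sample), 2))  # 0.5 -> worth investigating
```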

Conclusion: The New Geography of AI

The divergence of AI governance frameworks in 2025 represents more than regulatory complexity—it signals the emergence of a new technological geography where geopolitical boundaries increasingly constrain digital infrastructure once imagined as borderless.

The European Union’s comprehensive AI Act establishes the most stringent baseline, using the threat of massive fines to compel global compliance. Yet this regulatory ambition faces challenges from variation in member state implementation and from American resistance to what the Trump administration frames as extraterritorial overreach.

India’s lightweight governance approach positions the country as an alternative hub for AI development, particularly for Global South nations seeking less prescriptive frameworks. Whether this model proves sustainable as AI capabilities mature and risks materialize remains an open question.

Canada’s AIDA failure illustrates the difficulty democratic governments face balancing innovation promotion with consumer protection. The resulting regulatory vacuum creates opportunities for companies seeking flexible environments but risks competitive disadvantages as other jurisdictions establish standards.

China’s embrace of open-source AI, exemplified by DeepSeek’s January 2025 breakthrough, represents a strategic response to Western export controls and an attempt to establish China as champion of accessible AI for developing countries. Yet this strategy coexists uneasily with intensifying state control and security restrictions that may ultimately undermine the open collaboration open-source requires.

For multinational AI companies, this fragmented landscape demands sophisticated compliance strategies treating jurisdiction-specific requirements not as temporary burdens but as permanent features of the operating environment. The companies succeeding in this new geography will be those that embed compliance into core architecture, build flexible systems accommodating divergent requirements, and position responsible AI deployment as competitive advantage rather than regulatory tax.

The question is no longer whether AI regulation will converge globally; the trends clearly indicate persistent fragmentation. The question is whether companies can navigate this complexity while maintaining the innovation velocity required to compete in an industry where technological advantage is measured in months, not years.

As AI capabilities accelerate and deployment scales, the stakes of these governance choices compound. The regulatory frameworks established in 2025 will shape not just compliance costs but fundamental questions about AI’s role in society: Who controls these powerful technologies? Whose values do they embody? Who benefits from their deployment? And who bears the risks of their failures?

These questions lack universal answers. Different societies, shaped by distinct histories, cultures, and political systems, will necessarily reach different conclusions. The challenge for companies operating across these jurisdictions is not to resolve these differences but to operate effectively within a world where they persist.

The new geography of AI is not one of convergence toward universal standards but fragmentation along geopolitical lines. Success in this environment demands not just technical excellence but regulatory sophistication, strategic positioning, and the organizational flexibility to adapt as frameworks continue evolving. For companies making these investments, the fragmented regulatory landscape creates defensible competitive advantages. For those treating compliance as an afterthought, the costs of navigating global AI governance may prove prohibitively steep.

Sources

European Union AI Act

  1. European Commission, “AI Act | Shaping Europe’s digital future,” https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  2. AlgorithmWatch, “As of February 2025: Harmful AI applications prohibited in the EU,” https://algorithmwatch.org/en/ai-act-prohibitions-february-2025/
  3. Quinn Emanuel, “Initial Prohibitions Under EU AI Act Take Effect,” July 2025, https://www.quinnemanuel.com/the-firm/publications/initial-prohibitions-under-eu-ai-act-take-effect/
  4. DLA Piper, “Latest wave of obligations under the EU AI Act take effect: Key considerations,” August 2025, https://www.dlapiper.com/en/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
  5. European Parliament, “EU AI Act: first regulation on artificial intelligence,” https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
  6. Jones Day, “EU AI Act: First Rules Take Effect on Prohibited AI Systems and AI Literacy,” February 2025, https://www.jonesday.com/en/insights/2025/02/eu-ai-act-first-rules-take-effect-on-prohibited-ai-systems
  7. eyreACT, “Prohibited Systems Under EU AI Act,” November 2025, https://www.eyreact.com/prohibited-systems-under-eu-ai-act/

India AI Governance

  1. American Chase, “AI Regulations in India 2025: Complete Compliance Guide,” September 2025, https://americanchase.com/generative-ai-regulations-india/
  2. Saikrishna & Associates, “Decoding the India AI Governance Guidelines,” November 2025, https://www.saikrishnaassociates.com/decoding-the-india-ai-governance-guidelines/
  3. India Law, “Regulating The Machine Mind: AI, Privacy, And Intellectual Property Under India’s 2025 AI Governance Guidelines,” November 2025, https://www.indialaw.in/blog/data-privacy/ai-privacy-and-copyright-under-indias-2025-governance-guidelines/
  4. AIGN, “India’s AI Governance Guidelines 2025,” November 2025, https://aign.global/ai-governance-insights/aign-global/indias-ai-governance-guidelines-2025/
  5. IAPP, “Global AI Governance Law and Policy: India,” https://iapp.org/resources/article/global-ai-governance-india/
  6. SS Rana, “2025 IT Rules Amendment: Regulating Synthetically Generated Information in India’s AI and privacy landscape,” November 2025, https://ssrana.in/articles/2025-it-rules-amendment-regulating-synthetically-generated-information-in-indias-ai-and-privacy-landscape/
  7. Lexology, “India’s First Steps In Regulating AI Generated Content,” October 2025, https://www.lexology.com/library/detail.aspx?g=79911cee-addc-4599-aeed-b90d6647c95b
  8. The AI Track, “India Proposes Strict AI Labelling Rules to Counter Deepfakes,” October 2025, https://theaitrack.com/india-ai-labelling-rules-2025/
  9. Vision IAS, “India’s New AI Governance Guidelines Push Hands-Off Approach,” November 2025, https://visionias.in/blog/current-affairs/indias-new-ai-governance-guidelines-push-hands-off-approach

Canada AIDA

  1. Government of Canada, “The Artificial Intelligence and Data Act (AIDA) – Companion document,” https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
  2. Government of Canada, “Artificial Intelligence and Data Act,” https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
  3. Xenoss, “AI regulation in Canada in 2025: AIDA, PIPEDA, future plans,” November 2025, https://xenoss.io/blog/ai-regulation-canada
  4. White & Case, “AI Watch: Global regulatory tracker – Canada,” https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-canada
  5. Cox & Palmer, “Canada’s Artificial Intelligence and Data Act (AIDA) 2024: A Comprehensive Guide,” April 2024, https://coxandpalmerlaw.com/publication/aida-2024/
  6. Schwartz Reisman Institute, “What’s Next After AIDA?” March 2025, https://srinstitute.utoronto.ca/news/whats-next-for-aida
  7. 360 Business Law, “Canada’s Artificial Intelligence and Data Act (AIDA): Key Developments, Objectives, and Future Implications,” November 2025, https://www.360businesslaw.com/canadas-artificial-intelligence-and-data-act-aida-key-developments-objectives-and-future-implications-for-ai-regulation/
  8. Montreal AI Ethics Institute, “The Death of Canada’s Artificial Intelligence and Data Act: What Happened, and What’s Next?” January 2025, https://montrealethics.ai/the-death-of-canadas-artificial-intelligence-and-data-act-what-happened-and-whats-next-for-ai-regulation-in-canada/

China AI Strategy

  1. Just Security, “Open Questions for China’s Open-Source AI Regulation,” May 2025, https://www.justsecurity.org/111053/chinas-open-source-ai-regulation/
  2. Carnegie Endowment, “China’s AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era,” July 2025, https://carnegieendowment.org/research/2025/07/chinas-ai-policy-in-the-deepseek-era?lang=en
  3. Rest of World, “DeepSeek’s new model could push China ahead in the global AI race,” December 2025, https://restofworld.org/2025/deepseek-china-r2-ai-model-us-rivalry/
  4. EU Institute for Security Studies, “Challenging US dominance: China’s DeepSeek model and the pluralisation of AI development,” July 2025, https://www.iss.europa.eu/publications/briefs/challenging-us-dominance-chinas-deepseek-model-and-pluralisation-ai-development
  5. South China Morning Post, “DeepSeek, Alibaba researchers endorse China’s AI regulatory framework,” November 2025, https://www.scmp.com/tech/policy/article/3334376/deepseek-alibaba-researchers-endorse-chinas-misunderstood-ai-regulatory-framework
  6. Carnegie Endowment, “DeepSeek and Other Chinese Firms Converge with Western Companies on AI Promises,” January 2025, https://carnegieendowment.org/research/2025/01/deepseek-and-other-chinese-firms-converge-with-western-companies-on-ai-promises?lang=en
  7. Phys.org / The Conversation, “DeepSeek: How China’s embrace of open-source AI caused a geopolitical earthquake,” February 2025, https://phys.org/news/2025-02-deepseek-china-embrace-source-ai.html
  8. National Bureau of Asian Research, “China’s Approach to AI Development and Governance,” https://www.nbr.org/publication/chinas-approach-to-ai-development-and-governance/
  9. CNBC, “China’s open-source embrace upends conventional wisdom around artificial intelligence,” March 2025, https://www.cnbc.com/2025/03/24/china-open-source-deepseek-ai-spurs-innovation-and-adoption.html

Multinational Compliance

  1. People+ai, “AI Compliance in 2025: How Tech Teams Can Stay Ahead of Global Regulations,” https://peopleplus.ai/ai-promise-folder/ai-promise/ai-compliance-in-2025-how-tech-teams-can-stay-ahead-of-global-regulations
  2. GDPR Local, “Top AI Governance Trends for 2025: Compliance, Ethics, and Innovation,” September 2025, https://gdprlocal.com/top-5-ai-governance-trends-for-2025-compliance-ethics-and-innovation-after-the-paris-ai-action-summit/
  3. Scrut, “AI compliance in 2025: Key regulations and strategies for business,” December 2025, https://www.scrut.io/post/ai-compliance
  4. Anecdotes.ai, “AI Regulations in 2025: US, EU, UK, Japan, China & More,” November 2025, https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more
  5. Keymakr, “AI Regulations and Laws in 2025,” May 2025, https://keymakr.com/blog/regional-and-international-ai-regulations-and-laws-in-2024/
  6. Skillcast, “Top 10 Compliance Challenges in 2025,” https://www.skillcast.com/blog/top-10-compliance-challenges-2025
  7. Nucamp, “Understanding Data Privacy Laws for AI Startups Across Different Regions,” May 2025, https://www.nucamp.co/blog/solo-ai-tech-entrepreneur-2025-understanding-data-privacy-laws-for-ai-startups-across-different-regions
  8. Gradient Flow, “AI Governance Cheat Sheet: Comparing Regulatory Frameworks Across the EU, US, UK, and China,” March 2025, https://gradientflow.com/ai-governance-global-cheat-sheet/
  9. Mind Foundry, “AI Regulations around the World – 2025,” https://www.mindfoundry.ai/blog/ai-regulations-around-the-world
  10. King & Spalding, “Transatlantic AI Governance – Strategic Implications for U.S. — EU Compliance,” https://www.kslaw.com/news-and-insights/transatlantic-ai-governance-strategic-implications-for-us-eu-compliance
