Tuesday, January 20, 2026

AI and Human Rights: Balancing Innovation with Ethical Responsibility

The rapid advancement of artificial intelligence has created a pivotal moment in human history. As AI systems become increasingly integrated into healthcare, employment, law enforcement, and public services, they present both unprecedented opportunities and serious threats to fundamental human rights. The challenge facing society today is not whether to adopt AI, but how to harness its transformative potential while protecting the rights and dignity of every person.

The Dual Nature of AI: Promise and Peril

Artificial intelligence stands at a crossroads. On one side, AI promises revolutionary benefits across multiple sectors. The healthcare industry exemplifies this potential, with AI applications projected to grow from $20.9 billion in 2024 to $148.4 billion by 2029, representing a compound annual growth rate of 48.1%. AI-powered diagnostic tools now achieve 87% accuracy in disease detection, surpassing human performance in many cases. These technologies enable earlier disease detection, personalized treatment plans, and significant cost reductions in healthcare delivery.
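
As a quick arithmetic check on that projection, the implied compound annual growth rate can be verified in a few lines of Python; the start and end figures are the ones cited above, and the formula is the standard CAGR definition:

    # Sanity check: growth from $20.9B (2024) to $148.4B (2029)
    # spans five annual compounding periods.
    start, end, years = 20.9, 148.4, 5

    # CAGR = (end / start) ** (1 / years) - 1
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # -> 48.0%, consistent with the cited 48.1%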

The benefits extend beyond healthcare. AI-driven systems streamline administrative processes, enhance educational outcomes, improve transportation safety, and accelerate scientific research. Virtual nursing assistants alone could save healthcare providers up to $20 billion annually, while AI-assisted robotic surgeries reduce patient hospital stays by 21%.

Yet this same technology poses substantial risks to human rights. In March 2024, all 193 United Nations member states adopted a resolution emphasizing that human rights and fundamental freedoms must be respected, protected, and promoted throughout the lifecycle of artificial intelligence systems. This unanimous declaration reflects growing international concern about AI’s potential to violate privacy, enable discrimination, facilitate surveillance, and undermine democratic processes.

The Human Rights Violations Already Occurring

AI systems are not theoretical threats to human rights. They are already causing documented harm across multiple dimensions.

Algorithmic Discrimination

The evidence of AI bias is overwhelming and growing. Research from the University of Washington published in October 2024 found that three state-of-the-art large language models demonstrated significant racial and gender bias in resume screening. The systems preferred white-associated names 85% of the time, versus just 9% for Black-associated names, and preferred male-associated names 52% of the time, versus only 11% for female-associated names.

The bias extends to intersectional identities. The same study revealed that the AI systems never preferred names associated with Black males over names associated with white males, yet preferred names typically associated with Black females 67% of the time versus 15% for Black male names. This demonstrates a distinct harm to Black men that becomes invisible when race or gender is examined in isolation.

These patterns are not anomalies. An estimated 99% of Fortune 500 companies now use some form of automation in their hiring process, potentially exposing millions of job applicants to discriminatory screening. The COMPAS algorithm, used in U.S. court systems to predict recidivism, produced nearly twice as many false positives for Black offenders (45%) as for white offenders (23%).

Privacy Violations Through Surveillance

AI-powered surveillance systems pose unprecedented threats to privacy rights. As of 2019, at least 75 countries had employed AI technologies for surveillance purposes. These systems often operate without consent, collecting biometric data including facial recognition, fingerprints, and iris scans that, unlike passwords, cannot be changed if compromised.

High-profile cases illustrate the scale of privacy violations. Clearview AI scraped billions of photos from social media and peer-to-peer payment platforms such as Venmo to build a facial recognition database, access to which was then sold to over 600 law enforcement agencies and private entities. The Office of the Privacy Commissioner of Canada found that Clearview AI violated express consent requirements, particularly for sensitive biometric information.

IBM faced lawsuits under Illinois’ Biometric Information Privacy Act for its “Diversity in Faces Dataset,” which collected face geometry scans from photographs without proper consent. Microsoft removed a database of 10 million facial photographs after discovering that most people whose faces were included had no knowledge their images had been collected.

Weaponization by Authoritarian Regimes

Perhaps most concerning is AI’s use as a tool of repression. AI systems are systematically employed to suppress dissent, manipulate public discourse, amplify gender-based violence, enable unlawful surveillance, and reinforce inequalities. China’s treatment of Uighur Muslims in Xinjiang exemplifies this abuse, where AI-based surveillance networks and social scoring systems have enabled mass detention in re-education camps based on algorithmic analyses.

The European Parliament documented in 2024 how algorithmic authoritarianism represents a growing global trend, with countries imposing significant government control over digital channels underpinned by extensive AI surveillance systems. These practices constitute serious threats to human rights and democracy.

In February 2025, Amnesty International condemned Google’s decision to reverse its ban on developing AI for weapons and surveillance, warning that AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations of fundamental privacy rights.

The Feedback Loop of Discrimination

One of AI’s most insidious effects is the creation of discrimination feedback loops. UN Special Rapporteur Ashwini K.P. explained this phenomenon in 2024: “When officers in overpoliced neighborhoods record new offenses, a feedback loop is created, whereby the algorithm generates increasingly biased predictions targeting these neighborhoods. In short, bias from the past leads to bias in the future.”

This pattern extends across multiple domains. Predictive policing tools make assessments about who will commit future crimes based on historical arrest data often tainted by systemic racism. Variables such as socioeconomic background, education level, and location act as proxies for race, perpetuating historical biases through seemingly neutral algorithms.

In healthcare, AI tools for creating health risk scores have included race-based correction factors that systematically disadvantage minority populations. Educational algorithms designed to predict academic success often score racial minorities as less likely to succeed, thus perpetuating exclusion and discrimination through self-fulfilling prophecies.

A 2025 study revealed that ChatGPT used 24.5% fewer female-related words than human writers, while older models like GPT-2 reduced such words by more than 43%. This systematic underrepresentation of women and people of color in AI-generated content creates a form of digital erasure that compounds existing inequalities.

The Emerging Regulatory Landscape

Recognizing these threats, governments and international bodies have begun developing AI governance frameworks, though implementation remains inconsistent and often insufficient.

The EU AI Act

The European Union’s Artificial Intelligence Act, which entered into force on August 1, 2024, represents the world’s first comprehensive legal framework on AI. The Act establishes a risk-based regulatory approach with four levels of AI system classification:

Unacceptable Risk: AI systems posing clear threats to safety, livelihoods, and rights are banned. Prohibited practices include social scoring by governments or companies, AI systems that manipulate human behavior through subliminal techniques, emotion recognition in workplaces and schools, and untargeted scraping of facial images from the internet or CCTV footage to create recognition databases.

High Risk: Systems used in critical infrastructure, education, employment, essential services, law enforcement, migration management, justice, and democratic processes face strict requirements. These systems must assess and reduce risks, maintain use logs, ensure transparency and accuracy, and guarantee human oversight. Citizens have the right to file complaints and receive explanations about decisions affecting their rights.

Limited Risk: Systems like chatbots must comply with transparency requirements, ensuring humans know they are interacting with machines.

Minimal Risk: Most AI systems face no obligations but can voluntarily adopt codes of conduct.

The Act becomes fully applicable on August 2, 2026; its prohibitions and AI literacy obligations entered into application on February 2, 2025. However, civil society organizations including Amnesty International have criticized the Act for failing to adequately address human rights concerns, particularly regarding migrants, refugees, and asylum seekers, and for lacking robust accountability and transparency provisions.

U.S. Approach

The United States has taken a more fragmented approach. The White House’s Blueprint for an AI Bill of Rights identifies five principles for automated system design: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives and oversight.

Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence declared that AI policies must advance equity and civil rights. However, unlike the EU, the U.S. lacks comprehensive federal legislation, with privacy laws varying by state.

In July 2024, the U.S. State Department released a Risk Management Profile for AI and Human Rights, emphasizing that international human rights law must guide AI governance. The profile focuses on five key rights potentially impacted by AI: privacy, equal protection and non-discrimination, freedom of expression, freedom of peaceful assembly and association, and protection from arbitrary detention.

International Frameworks

In September 2024, the Council of Europe opened for signature the Framework Convention on Artificial Intelligence, the first legally binding international agreement on AI. The Convention provides a common approach to ensure AI systems are compatible with human rights, democracy, and the rule of law.

The Freedom Online Coalition’s 2025 Joint Statement on Artificial Intelligence and Human Rights reaffirmed commitment to protecting rights both online and offline, noting that AI systems are used systematically to suppress dissent, manipulate public discourse, amplify gender-based violence, enable unlawful surveillance, and reinforce inequalities and discrimination.

Balancing Innovation and Rights Protection

The challenge is not to halt AI development but to steer it toward human-centric applications that respect fundamental rights. This requires multi-stakeholder cooperation across several dimensions.

Technical Solutions

Technical interventions can reduce bias and enhance fairness. These include:

Diverse and Representative Datasets: Training data must reflect the full spectrum of human diversity. Selection bias occurs when data is not representative of real-world populations. For example, facial recognition models trained predominantly on lighter-skinned individuals struggle to accurately identify people with darker skin tones, leading to discriminatory outcomes.

Bias Detection and Mitigation: Organizations must implement fairness metrics, adversarial testing, and explainable AI techniques to identify and rectify bias. The European Union Agency for Fundamental Rights emphasizes that users of predictive algorithms need to assess data quality and other sources that influence bias and may lead to discrimination.
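
To make this concrete, here is a minimal sketch of one widely used fairness metric, the disparate impact ratio: the selection rate for a protected group divided by the rate for a reference group, with values below 0.8 (the "four-fifths rule") commonly treated as a red flag. The groups, outcomes, and threshold below are illustrative assumptions, not data from any system discussed in this article:

    # Minimal sketch: disparate impact ratio over screening decisions.
    # Group labels and outcomes below are hypothetical.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected) pairs -> rate per group."""
        selected, totals = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact(decisions, protected, reference):
        rates = selection_rates(decisions)
        return rates[protected] / rates[reference]

    # Hypothetical resume-screening outcomes: (group, advanced_to_interview)
    outcomes = [("A", True), ("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False), ("B", False)]

    ratio = disparate_impact(outcomes, protected="B", reference="A")
    print(f"Disparate impact ratio: {ratio:.2f}")  # < 0.8 flags potential bias

In this toy example the ratio is 0.33, well below the 0.8 threshold; in a real audit, a result like this would trigger deeper investigation of the screening system.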

Algorithmic Transparency: AI systems must be interpretable. Black-box algorithms that cannot explain their decision-making processes make it impossible to identify discrimination or seek accountability. Article 22 of the General Data Protection Regulation addresses automated decision-making, requiring transparency and human oversight for decisions that significantly affect individuals.
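
As one illustration of what interpretability tooling can look like in practice (Article 22 does not mandate any particular technique), permutation importance measures how much a model's score degrades when each input feature is randomly shuffled; scikit-learn ships an implementation. The model and synthetic dataset below are assumptions chosen to keep the sketch self-contained:

    # Illustration: permutation importance on a synthetic dataset.
    # Larger score drops mean the model leans more on that feature.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)

    for i, importance in enumerate(result.importances_mean):
        print(f"feature {i}: mean importance {importance:.3f}")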

Continuous Monitoring: Addressing bias requires ongoing evaluation. AI systems must be regularly audited after deployment to detect emerging biases and improve fairness based on real-world interactions and new data.
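
A minimal sketch of such post-deployment monitoring, assuming decisions arrive as (group, outcome) pairs and reusing the selection-rate ratio idea from above; the batch size, group labels, and 0.8 threshold are again illustrative:

    # Illustrative post-deployment audit: recompute the group
    # selection-rate ratio over successive batches of live decisions
    # and flag any batch that drifts below the threshold.
    def selection_rate(batch, group):
        hits = [selected for g, selected in batch if g == group]
        return sum(hits) / len(hits) if hits else None

    def monitor(decisions, window=100, threshold=0.8):
        for i in range(0, len(decisions), window):
            batch = decisions[i:i + window]
            rate_a = selection_rate(batch, "A")  # reference group
            rate_b = selection_rate(batch, "B")  # protected group
            if rate_a and rate_b is not None and rate_b / rate_a < threshold:
                print(f"batch {i // window}: ratio {rate_b / rate_a:.2f} "
                      f"below threshold {threshold}")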

Governance and Accountability

Strong governance structures are essential for protecting human rights in the AI era:

Independent Oversight Bodies: Regulatory authorities must have sufficient resources, expertise, and independence to enforce AI regulations effectively. The EU AI Act establishes the European AI Office and national authorities responsible for implementing, supervising, and enforcing the regulation.

Mandatory Impact Assessments: High-risk AI systems should undergo comprehensive Fundamental Rights Impact Assessments before deployment. These assessments must explicitly evaluate racial and ethnic bias, potential discrimination, and effects on vulnerable populations.

Corporate Responsibility: Companies developing AI systems must take meaningful responsibility for human rights guided by frameworks such as the UN Guiding Principles on Business and Human Rights. This includes incorporating safety-by-design principles into development and governance models to identify, mitigate, and prevent adverse human rights impacts.

Transparency Requirements: AI providers must maintain detailed documentation providing all information necessary for authorities to assess system compliance. Providers of general-purpose AI models must meet additional transparency requirements, including compliance with EU copyright law and publication of detailed summaries of the content used for training.

Whistleblower Protection: Individuals working on AI systems must be able to report human rights concerns without fear of retaliation. Public sentiment underscores the stakes: according to a 2021 survey, 98% of Americans feel they should have more control over data sharing, and 79% of respondents in India remain uncomfortable with their data being sold to third parties, demonstrating widespread concern that governance must address.

Inclusive Development Processes

AI development must include voices from affected communities, particularly marginalized groups most at risk from discriminatory systems:

Participatory Design: Those impacted by AI technologies must be meaningfully involved in decision-making about how AI should be regulated and deployed. The current reality is that communities in the Global Majority are often excluded from these discussions, despite being disproportionately affected.

Civil Society Engagement: Standards-setting, code of practice development, and advisory groups overseeing AI Act implementation must be transparent and inclusive of civil society organizations. Private entity dominance in these processes risks undermining human rights protections.

Cross-Cultural Perspectives: AI governance cannot be designed solely by and for Western countries. There have been documented cases of outsourced workers being exploited in Kenya and Pakistan by companies developing AI tools, highlighting the global power imbalances in AI development.

Addressing the Export Problem

A critical gap in current AI regulation is the lack of restrictions on exporting harmful AI systems from one jurisdiction to another. The UK, U.S., and EU approaches fail to account for global power imbalances, particularly the impact on communities in the Global Majority whose voices are not represented in regulatory discussions.

Regulations that limit or prohibit certain AI systems in one jurisdiction must prevent the same systems from being exported to countries where they could harm human rights, especially those of marginalized groups. This requires international cooperation and binding commitments that extend beyond national borders.

The Healthcare AI Paradox

Healthcare AI illustrates both the tremendous potential and serious risks of artificial intelligence. The sector has become America's AI powerhouse, with deployment rates more than twice (2.2x) those of the broader economy. In just two years, healthcare went from 3% AI adoption to 22% adoption of domain-specific AI tools, a more than seven-fold increase.

The benefits are substantial. AI-supported mammogram screening increases breast cancer detection by 20%. AI technology can reduce drug discovery time from five or six years to just one year while cutting costs by up to 70%. Predictive analytics enable early intervention before health issues become severe. The healthcare AI market reached $32.34 billion in 2024 and is projected to reach $431.05 billion by 2032.

However, 60% of Americans remain uncomfortable with healthcare providers relying heavily on AI. Their concerns stem from valid privacy worries, potential loss of human oversight, and distrust in machine-based decision-making. Furthermore, 52% of consumers worry that AI-powered medical decisions could introduce bias into healthcare.

These concerns are not unfounded. Research has documented cases where AI tools for health risk scores included race-based correction factors that systematically disadvantaged minority populations. The challenge is ensuring that AI enhances rather than undermines equitable healthcare access.

The solution requires transparency about how AI functions in healthcare settings, robust testing for bias before deployment, ongoing monitoring for discriminatory outcomes, and maintaining meaningful human oversight for critical medical decisions. As 68% of U.S. adults fear AI could weaken patient-provider relationships, preserving the human element in healthcare remains essential.

The Path Forward

Creating an AI ecosystem that respects human rights while enabling innovation requires sustained commitment across multiple fronts:

Education and AI Literacy

The public, policymakers, and AI developers need better understanding of AI capabilities, limitations, and risks. AI literacy obligations under the EU AI Act recognize this need. Education must extend beyond technical communities to include civil society, affected populations, and decision-makers at all levels.

International Cooperation

AI systems operate globally, requiring international coordination on human rights standards. The UN resolution on trustworthy AI for sustainable development, adopted by consensus in March 2024, provides a foundation. However, implementation remains inconsistent, and mechanisms for accountability across borders are weak.

Investment in Rights-Respecting AI

Market incentives currently favor rapid deployment over careful rights assessment. Governments and investors must prioritize funding for AI systems designed with human rights at their core. This includes supporting research on bias mitigation, privacy-preserving technologies, and explainable AI.

Worker Protections

As AI transforms employment, regulations must protect workers’ rights. The EU AI Act allows Member States to maintain or introduce provisions more favorable to workers in terms of protecting their rights regarding employer AI use. This principle should be adopted globally.

Environmental Considerations

AI systems have significant environmental impacts through energy consumption and electronic waste. Human rights include the right to a healthy environment. The United Nations Environment Programme’s 2024 report emphasizes that the environmental impact of the full AI lifecycle needs comprehensive assessment. Rights-respecting AI must also be environmentally sustainable.

The Urgency of Action

The window for shaping AI’s trajectory is narrowing. As Mher Hakobyan, Amnesty International’s Advocacy Advisor on Artificial Intelligence, stated in March 2024: “While EU policymakers are hailing the AI Act as a global paragon for AI regulation, the legislation fails to take basic human rights principles on board.”

AI systems are already deployed at scale, affecting millions of people’s lives, livelihoods, and fundamental rights. According to industry data, 81% of tech leaders support government regulations to control AI bias, recognizing that market forces alone cannot ensure responsible development.

The decisions made in 2024 and 2025 will shape how international law, including international human rights law, is taken into account throughout the lifecycle of AI systems. This is a pivotal moment: governance frameworks can be captured by authoritarian interests and commercial priorities alone, or they can be firmly rooted in international law and developed responsibly through inclusive, multistakeholder processes.

Concrete Steps for Stakeholders

Different actors must take specific actions to ensure AI respects human rights:

For Governments

  1. Enact comprehensive, binding AI legislation that prioritizes people and their rights over commercial interests
  2. Establish well-resourced, independent regulatory authorities with enforcement powers
  3. Mandate Fundamental Rights Impact Assessments for high-risk AI systems before deployment
  4. Prohibit AI systems with unacceptable risks to human rights, including intrusive surveillance and social scoring
  5. Create mechanisms for affected individuals to seek redress for AI-inflicted rights violations
  6. Regulate the export of AI systems that could be used to violate human rights in other jurisdictions
  7. Invest in public AI research focused on rights-respecting applications

For AI Companies

  1. Implement human rights due diligence throughout AI system lifecycles
  2. Ensure training datasets are diverse, representative, and obtained with informed consent
  3. Build algorithmic transparency and explainability into systems from the design stage
  4. Conduct regular bias audits and address identified issues before deployment
  5. Establish human oversight for consequential decisions
  6. Refuse to develop or sell systems designed for mass surveillance or social control
  7. Engage meaningfully with affected communities and civil society
  8. Publish transparency reports on AI system impacts and mitigation measures

For Civil Society Organizations

  1. Continue monitoring AI deployments and documenting human rights impacts
  2. Advocate for strong, binding regulations that protect vulnerable populations
  3. Participate in standards-setting and code development processes
  4. Support affected individuals in seeking accountability for AI harms
  5. Educate communities about AI risks and rights
  6. Build coalitions across borders to address global AI governance gaps
  7. Push for meaningful inclusion of marginalized voices in AI policy discussions

For Individual Users

  1. Demand transparency about AI systems that affect your life
  2. Exercise data protection rights including access, correction, and deletion
  3. Support organizations working to regulate AI responsibly
  4. Question algorithmic decisions that seem biased or unfair
  5. Advocate for human alternatives to automated decision-making in consequential contexts
  6. Stay informed about AI developments and their rights implications

Conclusion

Artificial intelligence is neither inherently beneficial nor harmful. Its impact depends entirely on how humans choose to develop, deploy, and govern it. The technology’s transformative potential to improve healthcare, accelerate scientific discovery, enhance education, and address global challenges is real and significant.

However, this potential can only be realized if AI development is grounded in respect for human rights and dignity. The evidence is overwhelming that current AI systems already violate fundamental rights through discrimination, privacy intrusions, and enabling repression. Without urgent action, these harms will compound as AI becomes more pervasive and powerful.

The regulatory frameworks emerging in 2024 and 2025 represent important first steps, but they are insufficient. Effective AI governance requires comprehensive, binding regulations with strong enforcement mechanisms; meaningful inclusion of affected communities in decision-making; technical solutions that address bias and enhance transparency; accountability mechanisms that allow victims to seek justice; and international cooperation that prevents the export of rights-violating systems.

Most fundamentally, society must reject the false choice between innovation and rights protection. The most valuable innovation is that which serves humanity while respecting the inherent dignity and equal rights of all people. As UN High Commissioner for Human Rights Volker Türk emphasized, “Placing human rights at the center of how we develop, use and regulate technology is absolutely critical to our response to these risks.”

The path forward requires sustained commitment from governments, companies, civil society, and individuals. It demands that we learn from past technology regulation failures and build robust accountability into AI governance from the outset. It necessitates that we prioritize people over profits and rights over convenience.

The stakes could not be higher. AI will shape the future of work, healthcare, education, justice, and democracy itself. Whether that future respects human rights and promotes human flourishing depends on the choices made today. The time for action is now, while there is still opportunity to steer AI development toward human-centric applications that enhance rather than undermine fundamental freedoms.

As we navigate this pivotal moment, the principle must remain clear: innovation that violates human rights is not progress. True advancement occurs when technology amplifies human capability while protecting human dignity. This is the balance we must achieve, and the responsibility we must embrace, to ensure that artificial intelligence serves all of humanity.

Sources

  1. United Nations General Assembly. (2024). Resolution A/RES/78/265: Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development. United Nations. https://www.state.gov/risk-management-profile-for-ai-and-human-rights/
  2. Bowman, S. (2024, November 14). The role of artificial intelligence in predicting human rights violations. OpenGlobalRights. https://www.openglobalrights.org/the-role-of-ai-in-predicting-human-rights-violations/
  3. Council of Europe. (2025, May 13). Human Rights and artificial intelligence (CDDH-IA). https://www.coe.int/en/web/human-rights-intergovernmental-cooperation/intelligence-artificielle
  4. Freedom Online Coalition. (2025, June 26). Joint Statement on Artificial Intelligence and Human Rights. https://freedomonlinecoalition.com/joint-statement-on-ai-and-human-rights-2025/
  5. European Parliament. (2024). Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights. https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450(SUM01)_EN.pdf
  6. Thomson, L. L., & Sanders, T. (2024). Human Rights Challenges with Artificial Intelligence. American Bar Association. https://www.americanbar.org/groups/crsj/resources/human-rights/2024-june/human-rights-challenges-artificial-intelligence/
  7. Amnesty International. (2024, January 29). The Urgent but Difficult Task of Regulating Artificial Intelligence. https://www.amnesty.org/en/latest/campaigns/2024/01/the-urgent-but-difficult-task-of-regulating-artificial-intelligence/
  8. Amnesty International. (2025, February 11). Global: Google’s shameful decision to reverse its ban on AI for weapons and surveillance is a blow for human rights. https://www.amnesty.org/en/latest/news/2025/02/global-googles-shameful-decision-to-reverse-its-ban-on-ai-for-weapons-and-surveillance-is-a-blow-for-human-rights/
  9. Office of the United Nations High Commissioner for Human Rights. (2024, February). Human rights must be at the core of generative AI technologies, says Türk. https://www.ohchr.org/en/statements-and-speeches/2024/02/human-rights-must-be-core-generative-ai-technologies-says-turk
  10. Wilson, K., & Caliskan, A. (2024, October 31). AI tools show biases in ranking job applicants’ names according to perceived race and gender. UW News, University of Washington. https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/
  11. European Union Agency for Fundamental Rights. (2022, November 29). Bias in algorithms: Artificial intelligence and discrimination. https://fra.europa.eu/en/publication/2022/bias-algorithm
  12. All About AI. (2025). Shocking AI Bias Statistics 2025: Why LLMs Are More Discriminatory Than Ever. https://www.allaboutai.com/resources/ai-statistics/ai-bias/
  13. Office of the United Nations High Commissioner for Human Rights. (2024, July). Racism and AI: “Bias from the past leads to bias in the future.” https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future
  14. Jackson, M. (2021). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications. https://www.nature.com/articles/s41599-023-02079-x
  15. IBM. (2024). What Is Algorithmic Bias? https://www.ibm.com/think/topics/algorithmic-bias
  16. IBM. (2025, October 13). Exploring privacy issues in the age of AI. https://www.ibm.com/think/insights/ai-privacy
  17. ISACA. (2021). Beware the Privacy Violations in Artificial Intelligence Applications. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2021/beware-the-privacy-violations-in-artificial-intelligence-applications
  18. DataGuard. (2025, January 10). The growing data privacy concerns with AI: What you need to know. https://www.dataguard.com/blog/growing-data-privacy-concerns-ai/
  19. VeraSafe. (2025, July 1). What Are the Privacy Concerns With AI? https://verasafe.com/blog/what-are-the-privacy-concerns-with-ai/
  20. Enzuzo. (2024, June 14). 7 AI Privacy Violations (+What Can Your Business Learn). https://www.enzuzo.com/blog/ai-privacy-violations
  21. California State University Long Beach. (2025, May 2). Artificial Intelligence: Privacy Concerns. https://www.csulb.edu/college-of-business/legal-resource-center/article/artificial-intelligence-privacy-concerns
  22. Francis, S. K. (2024, May). Navigating the Intersection of AI, Surveillance, and Privacy: A Global Perspective. United Nations SDGs Science-Policy Brief. https://sdgs.un.org/sites/default/files/2024-05/Francis_Navigating the Intersection of AI, Surveillance, and Privacy.pdf
  23. European Parliament. (2024). EU AI Act: first regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
  24. European Commission. (2024, August 1). AI Act enters into force. https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en
  25. European Commission. (n.d.). AI Act: Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  26. European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
  27. Amnesty International. (2024, August 1). Statement: EU takes modest step as AI law comes into effect. https://www.amnesty.eu/news/statement-eu-takes-modest-step-as-ai-law-comes-into-effect/
  28. Amnesty International. (2024, April 3). EU’s AI Act fails to set gold standard for human rights. https://www.amnesty.eu/news/eus-ai-act-fails-to-set-gold-standard-for-human-rights/
  29. Official Journal of the European Union. (2024). Regulation (EU) 2024/1689 on Artificial Intelligence. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
  30. Amnesty International. (2024, March 14). EU: Artificial Intelligence rulebook fails to stop proliferation of abusive technologies. https://www.amnesty.org/en/latest/news/2024/03/eu-artificial-intelligence-rulebook-fails-to-stop-proliferation-of-abusive-technologies/
  31. AlgorithmWatch. (2024). Upcoming Commission Guidelines on the AI Act Implementation: Human Rights and Justice Must Be at Their Heart. https://algorithmwatch.org/en/statement-commission-guidelines-ai-act/
  32. Docus. (2025). AI in Healthcare Statistics 2025: Overview of Trends. https://docus.ai/blog/ai-healthcare-statistics
  33. Menlo Ventures. (2025). 2025: The State of AI in Healthcare. https://menlovc.com/perspective/2025-the-state-of-ai-in-healthcare/
  34. All About AI. (2025, August 15). 19+ AI in Healthcare Statistics for 2024: Insights & Projections. https://www.allaboutai.com/resources/ai-statistics/healthcare/
  35. AIPRM. (2024, July 8). 50+ AI in Healthcare Statistics 2024. https://www.aiprm.com/ai-in-healthcare-statistics/
  36. Vention. (2024). AI in Healthcare 2024 Statistics: Market Size, Adoption, Impact. https://ventionteams.com/healthtech/ai/statistics
  37. TempDev. (2025, May 28). 65 Key AI in Healthcare Statistics. https://www.tempdev.com/blog/2025/05/28/65-key-ai-in-healthcare-statistics/
  38. Market.us. (2024, December 13). Generative AI in Healthcare Market to Witness 37% CAGR By 2032. https://media.market.us/generative-ai-in-healthcare-market-news-2024/
  39. LITSLINK. (2025, June 26). AI in healthcare statistics: Key Trends Shaping 2025. https://litslink.com/blog/ai-in-healthcare-breaking-down-statistics-and-trends
  40. CAREFUL. (2024, June 13). The future of healthcare: 2024 AI impact analysis. https://careful.online/future-healthcare-ai-2024/
  41. Boston Institute of Analytics. (2024, December 4). Top 10 AI Innovations Revolutionizing Healthcare In 2024. https://bostoninstituteofanalytics.org/blog/top-10-ai-innovations-transforming-the-healthcare-industry-in-2024/
