AI Regulation Across the Atlantic: EU AI Act vs. U.S. AI Governance
By Osman Eren Dogan
December 29, 2025
Abstract
One of the most significant contemporary policy issues requiring cooperation between the United States and the European Union is artificial intelligence (AI) governance, particularly in the post-2020 period. This study examines the differences between the European Union’s Artificial Intelligence Act and United States AI governance, with particular attention to Executive Order 14110. The analysis focuses on how these frameworks differ from a legal perspective, in their methods of application, and in their approaches to fostering innovation (European Parliament & Council of the European Union, 2024; Harris & Jaikaran, 2024). The study explores whether these two distinct systems can coexist, potentially converge under a unified framework, or continue to operate separately. These regulatory choices are assessed in light of their broader implications for global AI norms and public perceptions of AI governance.
The research employs multiple methods, including document analysis and expert outreach conducted in Washington, DC. Three primary dimensions are examined. The first concerns institutional structure and enforcement authority; the second focuses on the balance between innovation and risk; and the third analyzes coordination through international bodies such as the OECD and the Council of Europe (Council of Europe, 2024; Organization for Economic Co-operation and Development, 2024). The EU system emphasizes human-centered regulation, transparency, and risk categorization, while the U.S. system prioritizes voluntary frameworks, innovation leadership, and sectoral flexibility (European Parliament & Council of the European Union, 2024; Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023). The reviewed sources indicate increasing transatlantic collaboration, particularly following the establishment of the U.S.–EU Trade and Technology Council and related initiatives that facilitate regulatory dialogue and integration (Office of the United States Trade Representative, 2024). This study seeks to assess the outcomes of transatlantic AI policy decisions and their implications for future regulatory and ethical frameworks.
Introduction
Artificial intelligence has become one of the primary drivers of rapid global change in recent years, particularly within economic and political systems. As AI technologies continue to expand in scope and application, both the European Union and the United States have been compelled to develop regulatory frameworks that evaluate their potential benefits and risks. The widespread adoption of AI has raised increasing concerns related to transparency, accountability, respect for human rights, and market fairness. In response to these challenges, the European Union adopted the Artificial Intelligence Act (Regulation 2024/1689), which establishes a detailed, legally binding, risk-based governance model aimed at ensuring that AI systems operate in accordance with human dignity, democracy, and the rule of law (European Parliament & Council of the European Union, 2024).
The United States, by contrast, has pursued a different regulatory approach. Rather than implementing a comprehensive binding statute comparable to the EU AI Act, the U.S. relies on a non-binding, innovation-supportive framework centered on Executive Order 14110 and the NIST AI Risk Management Framework (Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023). This approach reflects a preference for sector-specific regulation, agency guidance, and voluntary compliance mechanisms.
Despite these divergent regulatory strategies, both the EU and the U.S. share a common objective: the development of trustworthy and responsible AI systems that promote economic growth while incorporating appropriate ethical safeguards. At the global level, AI governance increasingly resembles a network of regional and multilateral coordination efforts involving organizations such as the OECD, the G7, and the Council of Europe. These institutions seek to prevent regulatory fragmentation and facilitate interaction among different governance models (Organization for Economic Co-operation and Development, 2024). A particularly significant development in this context is the adoption of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law in 2024, which established the first binding international treaty designed to ensure that AI applications comply with fundamental rights (Council of Europe, 2024). These developments reflect a growing recognition that AI governance is not a challenge confined to individual states but a global policy issue requiring coordinated responses.
At present, the primary forum for transatlantic regulatory cooperation is the U.S.–EU Trade and Technology Council (TTC), which serves as a platform for discussions on regulatory alignment, data governance coordination, and the development of shared technical standards (Office of the United States Trade Representative, 2024). As outlined in earlier project planning, the TTC’s agenda reflects a broader objective of identifying mechanisms for transatlantic cooperation that reduce regulatory gaps without undermining innovation capacity. Nevertheless, alignment between the EU and the U.S. remains challenging due to differences in legal enforceability, risk classification, and approaches to private-sector self-regulation, as highlighted in comparative analyses of AI governance frameworks (Walter, 2024).
This paper has two primary objectives. First, it seeks to analyze the institutional and regulatory differences between the EU’s centralized, rights-based model and the U.S.’s decentralized, voluntary approach. Second, it evaluates whether cooperation through mechanisms such as the TTC and OECD structures can realistically lead to greater regulatory convergence, or whether existing divergences are likely to persist. The study employs a mixed-methods approach combining document analysis of legislative and policy texts with expert interviews conducted in Washington, DC. Ultimately, the paper aims to contribute to ongoing discussions on transatlantic AI governance by examining the EU and the U.S. not merely as regulators with differing perspectives, but as potential collaborators capable of shaping global democratic norms for AI regulation through shared principles and standards (Organization for Economic Co-operation and Development, 2024).
Methodology
This study employs a mixed-methods research design that combines systematic document analysis with expert interviews. The primary sources analyzed include the European Union’s Artificial Intelligence Act (Regulation 2024/1689), United States Executive Order 14110, the NIST Artificial Intelligence Risk Management Framework, and policy documents produced by key international and transatlantic institutions, including the U.S.–EU Trade and Technology Council, the Council of Europe Framework Convention on AI, and the OECD Global Strategy Group 2024 Background Note (Council of Europe, 2024; European Parliament & Council of the European Union, 2024; Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023; Office of the United States Trade Representative, 2024; Organization for Economic Co-operation and Development, 2024).
Each document was evaluated across five analytical categories derived from the central research questions: regulatory obligations, enforcement mechanisms, impact on innovation, coordination structures, and implementation timelines. This framework facilitates a structured comparison between the EU’s rights-based, legally binding regulatory model and the U.S.’s voluntary, innovation-oriented system. Expert interviews were conducted to supplement the document analysis and to provide practical insights into implementation challenges, regulatory alignment efforts, and institutional capacity. Findings from documentary and interview sources were compared to enhance analytical rigor, transparency, and reliability.
Ethical considerations were also addressed throughout the research process. All interview participants were informed that their participation was voluntary and were provided with a clear explanation of the study’s purpose and use of their contributions. Participants consented to the disclosure of their names and perspectives within the paper. Together, these methodological approaches provide both empirical and theoretical foundations for evaluating the relationship between the EU’s rights-based regulatory framework and the U.S.’s voluntary, innovation-supportive approach to global AI governance.
Regulatory Models and Legal Form
The European Union establishes uniform rules for all Member States through a risk-based and strictly binding framework under the Artificial Intelligence Act (Regulation 2024/1689). This framework directly binds Member States and links compliance obligations to the protection of fundamental rights (European Parliament & Council of the European Union, 2024). By contrast, the United States does not consolidate AI regulation into a single comprehensive statute. Instead, it favors a decentralized, soft-law approach primarily composed of Executive Order 14110 and the NIST Artificial Intelligence Risk Management Framework, which provide guidance to federal agencies and private-sector actors (Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023).
According to Marko Loncar, the legal cultures of the European Union and the United States reflect fundamentally different philosophies of governance. The EU system, often described as rigid but normatively consistent, prioritizes legal certainty and the protection of citizens, whereas the U.S. system evolves through agency initiatives and market feedback. Loncar further explains that the EU AI Act, much like the General Data Protection Regulation (GDPR), is grounded in the principle of ex ante risk assessment, requiring potential harms to be identified and mitigated before an AI system is placed on the market. While this approach aligns with democratic values and the rule of law, he notes that it often proves difficult to implement without delaying innovation. Susan Ariel Aaronson similarly observes that the United States does not impose mandatory federal regulations on AI technologies and companies, opting instead for a voluntary governance strategy centered on the NIST AI Risk Management Framework. She emphasizes that the framework itself is neither inherently weak nor strong but rather reflects the broader challenge of governing rapidly evolving AI technologies in the absence of a settled balance between innovation and harm prevention.
Scope and Coverage
The EU AI Act applies to the development, supply, importation, distribution, and use of AI systems placed on the market, put into service, or used within the European Union. Its extraterritorial scope extends to providers and deployers established outside the Union when the outputs of their AI systems are used within it. At the same time, the regulation does not encompass all sectors, explicitly excluding areas such as national security and defense. The AI Act is also designed to function in coordination with existing sector-specific regulations (European Parliament & Council of the European Union, 2024).
In contrast, the U.S. approach to AI governance relies on sectoral regulatory institutions. Sectors such as healthcare, finance, and transportation are governed by their respective regulatory bodies, while overarching federal guidance is provided through Executive Order 14110. The NIST AI Risk Management Framework, unlike the EU model, is not legally binding, but it may serve as a baseline standard for managing AI-related risks across sectors (Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023).
Rukiye Mehtap Özlü explained during her interview that the exclusion of national security from the scope of the EU AI Act reflects its classification as a “member-state exclusive competence” under EU law. While legally justified, she cautioned that this exclusion could create loopholes if Member States broadly define AI applications as national security matters in order to avoid transparency requirements. Özlü highlighted particular risks associated with biometric surveillance, predictive policing, and emotion recognition technologies, noting that although such systems may be restricted or prohibited in civilian contexts, they could still be deployed under national security justifications, thereby undermining transparency and accountability.
Risk Classification vs. Voluntary Risk Practices
The EU AI Act classifies AI systems according to the level of risk they pose. Certain applications are prohibited outright, while systems designated as high-risk are subject to strict regulatory obligations, including conformity assessments and transparency requirements (European Parliament & Council of the European Union, 2024). The United States does not establish legally mandated risk categories. Instead, the NIST AI Risk Management Framework conceptualizes risk as emerging from the interaction of system functions and organizational contexts through four core functions: Govern, Map, Measure, and Manage (National Institute of Standards and Technology, 2023). This framework operates primarily at the organizational and supply-chain levels and allows companies significant flexibility in tailoring risk management practices.
Marko Loncar noted that the EU’s risk-based model closely aligns with the preventive logic underlying the GDPR’s “privacy by design” principle. Developers are required to anticipate and document potential negative impacts throughout the AI lifecycle, even before any harm has occurred. While Loncar views this approach as ethically robust, he also warns that such obligations may disproportionately slow implementation and innovation for smaller developers compared to the more flexible U.S. system.
Ricardo Martinez added that approximately 90 to 95 percent of AI use cases fall within the limited-risk category. According to him, most industrial applications require only transparency measures and basic training obligations, while truly high-risk scenarios are concentrated in sectors such as machinery, transportation, and healthcare. This observation supports the argument that, despite perceptions of strictness, the EU’s risk-based framework imposes relatively limited restrictions on many business applications compared to the significant obligations applied to high-risk systems. Susan Ariel Aaronson further argued that U.S. policymakers should focus regulatory attention on harmful business practices—particularly surveillance capitalism—rather than targeting AI technologies themselves. She emphasized that voluntary self-governance may be insufficient in high-risk contexts unless complemented by targeted legal requirements.
Enforcement Architecture and Accountability
Within the European Union, enforcement of the AI Act is carried out through a combination of national market surveillance authorities and EU-level institutions. Non-compliance may result in significant administrative fines. High-risk AI systems are subject to conformity assessments and CE marking procedures, while post-market monitoring obligations apply to both providers and deployers (European Parliament & Council of the European Union, 2024).
In the United States, accountability mechanisms rely primarily on existing agency authorities, including procurement rules, consumer protection laws, civil rights enforcement, and guidance issued by the Office of Management and Budget and NIST. Executive Order 14110 coordinates federal action but does not establish a unified statutory enforcement regime for AI systems (Harris & Jaikaran, 2024).
Rukiye Mehtap Özlü emphasized that the EU AI Act will be overseen through a multi-level governance structure involving the newly established EU AI Office, the AI Board, sectoral regulators, and national authorities across all 27 Member States. She noted that disparities in administrative capacity—such as those between larger Member States like Germany and smaller Eastern European countries—may result in uneven implementation and enforcement. Coordination challenges between EU-level bodies and national authorities remain among the most unpredictable aspects of implementation. Susan Ariel Aaronson observed that the United States currently lacks comparable centralized AI enforcement institutions. Because federal agencies do not directly regulate AI systems as a distinct category, governance relies heavily on voluntary initiatives, contributing to weaker accountability mechanisms despite emerging state-level AI legislation.
Implementation Timelines
The EU AI Act introduces obligations gradually, including immediate prohibitions and phased implementation of high-risk requirements. This staged approach provides companies with a relatively predictable compliance timeline (European Parliament & Council of the European Union, 2024). In contrast, the U.S. approach follows a more programmatic timeline. Executive Order 14110 establishes deadlines for agency actions, including the issuance of guidance, safety directives, and standards development. Corporate adoption is driven primarily by agency guidance and voluntary frameworks rather than a single statutory effective date (Harris & Jaikaran, 2024).
Ricardo Martinez noted that while companies are aware of the EU AI Act, many have not yet experienced significant compliance pressures because several provisions will enter into force only after extended implementation periods. He expects early enforcement to resemble the GDPR experience, in which high-profile fines initially serve as deterrents before broader compliance efforts accelerate. This observation underscores the importance of implementation timelines not only from a legal perspective but also from a psychological standpoint in shaping corporate behavior.
Implications for Companies Operating on Both Sides of the Atlantic
Many companies operate across both U.S. and EU markets. Under the EU model, firms—particularly those engaged in high-risk activities—must provide extensive ex ante documentation and assurances. This includes investments in data governance, human oversight mechanisms, and technical robustness testing. By contrast, the U.S. system emphasizes internal risk management programs, organizational governance, and adaptive learning based on operational experience. Companies often align internal controls with the NIST AI Risk Management Framework while monitoring sector-specific guidance.
These differences may generate compliance challenges and additional costs, but they may also create complementarities. NIST AI RMF documentation, for example, can support EU compliance efforts (National Institute of Standards and Technology, 2023; Walter, 2024). Marko Loncar cautioned that Europe’s strict compliance environment, particularly documentation requirements and data transfer restrictions, may “choke trade and innovation,” especially for cross-border startups. At the same time, he characterized the U.S. model as an “innovation playground” where self-regulation by large technology firms enables rapid experimentation, while warning that insufficient oversight may lead to governance gaps over time.
Alasana Camara highlighted that EU conformity resembles an accreditation process, in which the most challenging aspect is not technical compliance but the documentation of every test, decision, dataset, and mitigation step. He argued that the evidentiary burden of compliance—such as maintaining traceability records and logging model behavior—may pose significant challenges, particularly for small enterprises.
Channels for Alignment and Risk of Fragmentation
Two primary channels for alignment emerge from the analysis. The first is bilateral cooperation, primarily through the U.S.–EU Trade and Technology Council, which facilitates collaboration on standards, benchmarks, and trustworthy AI practices aimed at narrowing the gap between regulatory theory and practice despite differing legal frameworks (Office of the United States Trade Representative, 2024). The second is multilateral cooperation through institutions such as the Council of Europe and the OECD. The Council of Europe Framework Convention on AI provides a binding, rights-based instrument open to non-EU states, while the OECD Global Strategy Group emphasizes pathways for global cooperation and shared standards (Council of Europe, 2024; Organization for Economic Co-operation and Development, 2024).
Alasana Camara argued that while these bilateral and multilateral structures may reduce market divergence, they are unlikely to eliminate it entirely. He noted that the Council of Europe often operates through an “à la carte” model, allowing states to selectively adopt provisions, thereby limiting full standardization. From an industry perspective, Ricardo Martinez described these forums as constructive and necessary but expressed skepticism about their capacity to produce a single regulatory framework for both the U.S. and the EU. He emphasized that the U.S. market remains faster and more risk-tolerant, whereas the EU prioritizes caution and rule-based governance. Consequently, these platforms are best suited for developing shared principles and mutual understanding rather than full legal convergence.
Synthesis: Convergence, Complementarity, and Areas to Monitor
While the EU emphasizes legal certainty through clearly defined rules and penalties, the United States prioritizes flexibility through standards-based, innovation-oriented governance. In the short term, collaboration is most likely to occur in areas such as technical standards, documentation practices, evaluation and testing methodologies, and incident reporting. The TTC, OECD, and Council of Europe provide key venues for translating shared values into operational practices.
Three developments warrant particular attention: the issuance of EU implementing acts and guidelines clarifying high-risk sector obligations; U.S. agency guidance that further specifies regulatory expectations; and mutual recognition of conformity assessments to reduce duplicative audits (Council of Europe, 2024; Organization for Economic Co-operation and Development, 2024). Marko Loncar emphasized that effective convergence requires regulations that are not only strict but also realistic and implementable. He argued that future cooperation should prioritize systems reflecting the perspectives of administrators, businesses, and users alike, noting that the effectiveness of any law ultimately depends on those responsible for its implementation.
Both Rukiye Mehtap Özlü and Alasana Camara agreed that the EU’s so-called “Brussels Effect” is likely to influence global AI standards. Firms seeking access to the EU market may adopt EU-compliant practices even in more permissive jurisdictions such as the United States. Nevertheless, experts emphasized that differences in legal culture, innovation pace, and intellectual property sensitivity remain significant obstacles to full convergence. Ricardo Martinez noted that while a single global AI framework would be desirable, strategic competition among the U.S., EU, and China makes such convergence difficult. Similarly, Susan Ariel Aaronson suggested that further convergence depends on renewed political cooperation and efforts to address public distrust of AI, identifying transparency as the most immediate area for agreement.
Case Study: Healthcare AI Under the EU AI Act and U.S. Sectoral Model
Healthcare provides a particularly clear and instructive example for contrasting the AI governance strategies of the European Union and the United States. The healthcare sector occupies a central position in AI regulation due to its reliance on highly sensitive personal data, the potentially severe consequences of AI-driven decision-making, and the extensive regulatory oversight traditionally associated with medical practice. Under the EU Artificial Intelligence Act, AI systems used in healthcare are classified as high-risk because they may directly affect patient safety, diagnostic accuracy, and fundamental rights. As a result, developers and deployers of medical AI systems are required to implement extensive ex ante controls throughout the AI lifecycle, including risk management systems, technical documentation, human oversight mechanisms, testing protocols, and post-market monitoring obligations (European Parliament & Council of the European Union, 2024). The AI Act’s interaction with the General Data Protection Regulation further intensifies regulatory scrutiny, as health data constitute a special category of personal data subject to heightened protection.
In practice, these requirements impose significant administrative and operational burdens. Rukiye Mehtap Özlü noted that companies often find documentation and record-keeping obligations to be the most challenging aspects of compliance. She compared these obligations to accreditation processes, in which the most resource-intensive task is not achieving compliance itself but demonstrating it. Ricardo Martinez similarly observed that although 90–95 percent of AI use cases fall into low-risk categories, sectors such as healthcare, transportation, and machinery require heightened regulatory oversight. Consequently, while the EU framework differentiates among risk levels, it imposes substantially higher compliance burdens on healthcare AI suppliers than on firms operating in low-risk sectors.
In the United States, AI governance in healthcare follows a sectoral model that relies primarily on agency-based regulation rather than a single overarching framework. The Food and Drug Administration regulates AI- and machine-learning-enabled medical devices that qualify as medical products, while data protection and privacy are governed through sector-specific laws such as the Health Insurance Portability and Accountability Act (HIPAA), supplemented by voluntary governance frameworks. At the federal level, Executive Order 14110 directs agencies to integrate AI governance into their existing authorities, and the NIST AI Risk Management Framework provides voluntary guidance covering governance, risk measurement, mitigation, and incident response (Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023). Unlike the EU’s legally binding obligations, these U.S. mechanisms depend largely on corporate discretion, agency interpretation, and market incentives rather than statutory mandates. Ricardo Martinez emphasized that medical data in the United States are more commoditized and tradable than in Europe, which contributes to heightened privacy concerns.
These differing regulatory structures produce distinct outcomes for companies operating in both jurisdictions. Firms marketing medical AI systems in the EU must undergo conformity assessments, prepare extensive technical documentation, and maintain traceability records—requirements that do not exist in the same form within the U.S. regulatory framework. Susan Ariel Aaronson warned that rapid innovation without sufficient trust may ultimately undermine public acceptance, arguing that the United States risks losing public confidence if it fails to impose enforceable governance measures related to data and accountability. At the same time, EU-based companies may face challenges stemming from the bureaucratic complexity of regulatory compliance, particularly for systems requiring frequent model updates, explainability features, or clinical validation. These contrasting approaches illustrate a fundamental divergence: the EU prioritizes legal certainty and patient protection, whereas the U.S. emphasizes flexibility and market-driven technological development.
Despite these differences, healthcare remains one of the most promising sectors for transatlantic regulatory convergence. Interview findings and policy documents suggest that transparency requirements, testing standards, and incident-reporting practices represent areas of potential alignment (European Parliament & Council of the European Union, 2024; Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023). Transparency is frequently cited as a shared priority, with EU disclosure obligations for high-risk medical AI systems closely paralleling transparency principles embedded in the NIST AI Risk Management Framework. Institutions such as the U.S.–EU Trade and Technology Council are already engaged in dialogue aimed at fostering such alignment.
Looking ahead, Ricardo Martinez anticipates that medical AI systems will increasingly rely on personalized predictive analytics powered by biomarkers, genomics, and advanced wearable technologies. He emphasized the potential of AI-driven healthcare to shift the industry toward preventive care models based on long-term physiological data analysis, while also cautioning that these developments necessitate robust security measures to prevent misuse, discriminatory profiling, and unauthorized secondary use of data. These considerations underscore why healthcare remains both one of the most heavily regulated sectors in AI governance and one of the most innovation-intensive, reinforcing its central role as a policy priority in both the European Union and the United States.
Policy Recommendations
The development of an effective transatlantic approach to AI regulation requires leveraging the complementary strengths of the European Union’s and the United States’ regulatory systems, while addressing declining public trust, innovation constraints, and interoperability challenges arising from structural differences. The comparative analysis and expert interviews conducted in this study form the basis for policy recommendations aimed at establishing a coherent and reliable AI governance environment across the Atlantic.
Transparency represents the most practical and politically feasible area for short-term convergence between the EU and the U.S. Both the EU Artificial Intelligence Act and the NIST AI Risk Management Framework identify transparency as a central mechanism for ensuring user trust, system accountability, and effective risk mitigation (European Parliament & Council of the European Union, 2024; National Institute of Standards and Technology, 2023). Susan Ariel Aaronson emphasized that transparency requirements—such as informing individuals when they are interacting with AI systems, documenting training data characteristics, and clearly communicating system capabilities—constitute one of the few areas where different regulatory approaches can realistically align. She argued that without effective communication, innovation loses its societal value, stating that “innovation is pointless if people do not trust it.” Establishing baseline transatlantic transparency standards for high-impact AI systems, particularly in sensitive sectors such as healthcare, would facilitate interoperability and reduce regulatory fragmentation.
Multiple experts highlighted that regulating the underlying business practices driving AI-related risks may be more effective and sustainable than attempting to control every technical model. Susan Ariel Aaronson argued that U.S. policymakers should focus regulatory efforts on addressing “surveillance capitalism” and other harmful business practices that enable abusive AI applications, rather than restricting the technologies themselves. This perspective aligns with EU strategies emphasizing data governance, documentation requirements, and systematic risk management. Targeting practices such as automated profiling, secondary data use, and cross-contextual data transfers could mitigate systemic risks while preserving innovation capacity (European Parliament & Council of the European Union, 2024).
Rather than creating new regulatory institutions, transatlantic cooperation should be strengthened through existing mechanisms, including the U.S.–EU Trade and Technology Council, OECD technical standards groups, and the Council of Europe AI Convention. Experts noted that although full regulatory alignment may be unrealistic, sustained and structured dialogue is essential for identifying shared approaches to testing methodologies, risk classification, documentation practices, and conformity assessment procedures (Council of Europe, 2024; Office of the United States Trade Representative, 2024; Organization for Economic Co-operation and Development, 2024). Susan Ariel Aaronson stressed that dialogue serves as the foundation for discovering common principles, norms, and standards. These institutional platforms should prioritize sectors most vulnerable to regulatory divergence, such as healthcare, transportation, and essential public services, where misalignment poses the greatest risks to both firms and individuals.
The EU AI Act’s detailed documentation and conformity requirements may disproportionately burden small and medium-sized enterprises, particularly in high-risk sectors such as healthcare. Rukiye Mehtap Özlü noted that demonstrating compliance often proves more challenging than achieving compliance itself. To mitigate these challenges, EU institutions could provide standardized templates and sector-specific compliance guidelines. On the U.S. side, agencies—especially the Food and Drug Administration—should offer clearer and more consistent guidance regarding the use of AI in medical systems to reduce regulatory uncertainty within the sectoral model. Greater clarity on both sides would help minimize compliance inefficiencies and reduce regulatory friction (Harris & Jaikaran, 2024).
Some experts suggested the potential need for a dedicated AI regulatory authority in the United States. However, Susan Ariel Aaronson cautioned that it remains too early to determine the optimal institutional model, given the pace of AI development and the absence of consensus among policymakers and experts. A gradual, adaptive approach may be more feasible. Strengthening AI oversight within existing agencies and incrementally expanding federal legislation to address regulatory gaps aligns with the framework established under Executive Order 14110, which emphasizes coordination rather than rigid centralized control (Harris & Jaikaran, 2024).
Finally, effective AI governance should be understood as a collaborative process involving a broad range of stakeholders beyond technical experts and industry leaders. Susan Ariel Aaronson observed that public perspectives are often underrepresented in AI policy debates, leading to governance outcomes shaped by narrow interests. Marko Loncar similarly emphasized that inclusive policymaking is essential for developing regulations that are not only technically sound but also socially legitimate. Encouraging stakeholder engagement—including end users, civil society organizations, and affected communities—would enhance trust and support the responsible deployment of advanced AI systems across both jurisdictions.
Given persistent differences in risk classification between the EU and the U.S., companies operating transatlantically face uncertainty regarding compliance expectations. While EU risk categories are mandatory, the U.S. framework relies on voluntary risk management functions under the NIST AI Risk Management Framework. Ricardo Martinez noted that most realistic AI applications fall into low-risk categories, though healthcare remains an exception requiring strict oversight. Greater coordination in risk definitions and assessment methodologies could help firms adopt consistent safety measures and reduce duplicative compliance efforts, particularly for high-risk applications.
Conclusion
This comparative analysis of the European Union’s Artificial Intelligence Act and the United States’ sectoral, voluntary AI governance model demonstrates that the two systems differ fundamentally in how they balance innovation, safety, and public trust. The EU approach, characterized by detailed and legally binding obligations and a structured risk classification system, places strong emphasis on the protection of fundamental rights and accountability throughout the AI lifecycle (European Parliament & Council of the European Union, 2024). In contrast, the U.S. framework prioritizes flexibility, agency-based oversight, and voluntary compliance mechanisms such as the NIST AI Risk Management Framework, reflecting a preference for market-driven innovation and incremental regulatory development (Harris & Jaikaran, 2024; National Institute of Standards and Technology, 2023). Expert interviews indicate that these differences are rooted not only in technical considerations but also in distinct political cultures and institutional traditions.
Despite these divergences, the analysis identifies meaningful opportunities for cooperation. Transparency emerged consistently across policy documents and expert interviews as the most attainable area of convergence, with both jurisdictions recognizing that public trust depends on clear disclosure, documentation, and explainability. Institutions such as the U.S.–EU Trade and Technology Council and OECD standard-setting bodies demonstrate shared commitment to advancing interoperability even in the absence of full regulatory harmonization (Office of the United States Trade Representative, 2024; Organization for Economic Co-operation and Development, 2024). The healthcare case study further illustrates this dynamic: while the EU and the U.S. pursue different regulatory strategies, both seek to ensure the safety and reliability of medical AI systems, differing primarily in how compliance responsibilities are structured and enforced.
Overall, the findings suggest that neither system fully resolves the central governance challenge identified by experts: how to protect individuals and societies while enabling rapid technological advancement. Given the speed, complexity, and unpredictability of AI development, even policymakers acknowledge that no definitive regulatory model currently exists. Rather than pursuing strict convergence, the most viable path forward lies in strategic complementarity—combining the EU’s strengths in rights-based regulation with the U.S.’s capacity for innovation. By expanding transparency requirements, addressing harmful business practices, and providing practical compliance support to organizations, the EU and the U.S. can jointly foster an AI governance environment that is trustworthy, equitable, and competitive at the global level.
References
Council of Europe. (2024, September 5). Council of Europe opens first ever global treaty on AI for signature. https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature
European Parliament & Council of the European Union. (2024, June 13). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
Harris, L., & Jaikaran, C. (2024, March 4). Highlights of the 2023 executive order on artificial intelligence for Congress (CRS Report No. R47843). Congressional Research Service. https://www.congress.gov/crs-product/R47843
National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0). U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
Office of the United States Trade Representative. (2024, April 8). U.S.-EU joint statement of the Trade and Technology Council. USTR. https://ustr.gov/about-us/policy-offices/press-office/press-releases/2024/april/us-eu-joint-statement-trade-and-technology-council
Organization for Economic Co-operation and Development. (2024, October 15–16). Futures of global AI governance: Co-creating an approach for transforming economies and societies. OECD Global Strategy Group. https://www.oecd.org/content/dam/oecd/en/about/programmes/strategic-foresight/GSG%20Background%20Note_GSG%282024%291en.pdf
Walter, H. (2024, November). Comparative legal frameworks for AI governance: Bridging the EU AI Act and U.S. regulatory approaches. ResearchGate. https://www.researchgate.net/publication/391950464_Comparative_Legal_Frameworks_for_AI_Governance_Bridging_the_EU_AI_Act_and_US_Regulatory_Approaches
This article was written by Osman Eren Dogan, a student at Bay Atlantic University in Washington, DC.