Overview Of The European Union Artificial Intelligence Act And Effects On Turkish Companies

November 12, 2025

The rapid expansion of artificial intelligence (“AI”) across multiple sectors has inevitably raised questions regarding its ethical and safety dimensions. The first concrete regulatory step in this field was taken by the European Union (“EU”) through the Artificial Intelligence Act (“the Act”)[1], which was adopted by the European Parliament on 13 March 2024, approved by the Council on 21 May 2024, published in the Official Journal on 12 July 2024, and entered into force on 1 August 2024. The Act constitutes the most comprehensive legal framework on AI adopted to date. This memorandum aims to examine the scope of the Act, its risk classifications, obligations, and implementation timeline in detail.

  1. What is the Artificial Intelligence Act?

The Act seeks to regulate the placing on the market and use of AI systems, ensuring that such technologies are trustworthy, transparent, and respectful of fundamental rights.

The Act defines an AI system as follows:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The regulation is largely based on the Ethics Guidelines for Trustworthy AI [2] published by the European Commission in 2019. The Guidelines emphasize that trustworthy AI must comply with existing law, adhere to ethical principles, and be robust from both a technical and social perspective. They also set out seven key requirements for the development of AI systems:

  • Human agency and oversight: AI systems should empower human decision-making and allow for meaningful human oversight.
  • Technical robustness and safety: Systems must be resilient and secure.
  • Privacy and data governance: Protection of personal data must be ensured.
  • Transparency: Operations of AI systems should be explainable and understandable.
  • Diversity, non-discrimination, and fairness: AI should be inclusive and accessible to all.
  • Societal and environmental well-being: AI should benefit society as a whole.
  • Accountability: Clear mechanisms must ensure responsibility for outcomes.

  2. To Whom and Under What Conditions Does the Act Apply?

Article 2 of the Act identifies six categories of actors subject to obligations: providers, importers, distributors, product manufacturers, authorised representatives, and deployers.

  • Providers: Natural or legal persons who develop an AI system or a general-purpose AI model, or place it on the market under their own name or trademark.
  • Importers: Entities placing on the EU market an AI system originating from a third country.
  • Distributors: Entities making AI systems available within the EU market.
  • Product Manufacturers: Entities manufacturing physical products with embedded AI systems.
  • Authorised Representatives: Persons mandated by providers established outside the EU to represent them within the Union.
  • Deployers: Natural or legal persons who use an AI system in the course of their activities.

It should be noted that, as stated in Article 2 of the Regulation, the obligations apply regardless of whether the actors listed therein are established within the EU. The Regulation will also apply in cases where an AI system is placed on the market or used within the EU. This demonstrates the broad and extraterritorial scope of the Regulation, showing that its impact extends beyond the borders of the EU and has a truly global effect.

In the context of Turkish companies, this means that entities established in Turkey which place AI systems on the EU market or use them within the EU must also ensure compliance with the Regulation. Adapting to these requirements is of great importance for Turkish companies engaged in commercial or technological activities connected to the EU.

  3. The Risk-Based Approach

The Act introduces obligations based on a risk-based classification of AI systems into four categories:

  • Unacceptable risk: Systems deemed to pose intolerable harm, strictly prohibited.
  • High risk: Systems with significant potential to cause harm in sensitive areas, subject to extensive requirements.
  • Limited risk: Systems with specific transparency obligations, e.g., informing users they are interacting with AI.
  • Minimal risk: Systems with negligible or no risk, not subject to regulatory obligations.

Unacceptable risk is regulated under Article 5 of the Act, entitled “Prohibited AI Practices.” AI systems that fall within the category of unacceptable risk have been prohibited as of 2 February 2025. Among the systems banned under this provision are those that impair individuals’ decision-making processes and cause harm through manipulative techniques. In addition, AI systems that collect facial images to create facial recognition databases, as well as AI systems that evaluate or classify individuals or groups based on social behavior or personal characteristics and result in adverse treatment against such persons through social scoring, have also been deemed to pose an unacceptable level of risk.

High-risk AI systems, on the other hand, include remote biometric identification, biometric categorisation and emotion recognition systems; AI systems used in the management of critical digital infrastructure such as road traffic, energy networks, or water distribution; AI systems employed in the fields of education and employment for candidate selection, student admission, or performance evaluation; and AI systems determining access to public benefits, calculating credit scores, or prioritising emergency calls. The high-risk category is the group subject to the most obligations under the Act. Providers of AI systems falling within this risk group must comply with the obligations set out in Article 16 of the Act. These obligations include, among others: implementing risk management, data governance, and transparency requirements; indicating provider identification details on the system; maintaining records; establishing a quality management system; issuing an EU declaration of conformity; and affixing the CE marking to the system.

  4. General-Purpose AI Models (GPAI)

Chapter V of the Act specifically regulates General-Purpose AI Models (“GPAI”). According to Article 3(63), a GPAI model means:

“An AI model, including where such a model is trained with large amounts of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way it is placed on the market, and that can be integrated into a variety of downstream systems or applications, except AI models used solely for research, development, or prototyping prior to being placed on the market.”

The Regulation allocates a separate section to GPAI models and imposes additional obligations on AI systems that make use of such models, beyond the general provisions applicable to other AI systems. The Regulation categorizes GPAI models into two distinct risk groups: non-systemic risk and systemic risk.

For both risk groups, compliance with copyright requirements and transparency constitute common obligations. However, for GPAI models posing systemic risk, such as the large models underlying services like ChatGPT, Article 55 of the Regulation introduces further obligations. These include model evaluation and testing, assessment and mitigation of systemic risks, reporting of serious incidents, and ensuring adequate cybersecurity protection. These additional obligations became applicable as of 2 August 2025.

  5. Sanctions

As of 2 August 2025, administrative fines for non-compliance with the obligations set out in the Act have also entered into effect. The applicable fines are as follows:

  • Up to EUR 35 million or 7% of annual global turnover (whichever is higher) for violations of Article 5 (prohibited practices).
  • Up to EUR 15 million or 3% of annual global turnover (whichever is higher) for non-compliance with obligations for high-risk systems.
  • Up to EUR 7.5 million or 1% of annual global turnover (whichever is higher) for providing incorrect, incomplete, or misleading information to authorities.
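The tiered structure above always applies whichever amount is higher: the fixed cap or the percentage of annual global turnover. As a purely illustrative sketch (the figures come from the list above; the function and tier names are our own simplification, not a legal calculation), the logic can be expressed as:

```python
# Fine tiers as listed above: (fixed cap in EUR, share of annual global turnover).
# Tier names are illustrative labels, not terms from the Act.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "high_risk_obligations": (15_000_000, 0.03),  # high-risk system duties
    "misleading_information": (7_500_000, 0.01),  # incorrect info to authorities
}

def max_fine(tier: str, annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: whichever is higher,
    the fixed cap or the turnover-based amount."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_global_turnover_eur)

# Example: a company with EUR 1 billion annual global turnover.
print(max_fine("prohibited_practices", 1_000_000_000))   # 70000000.0 (7% exceeds EUR 35M)
print(max_fine("misleading_information", 1_000_000_000)) # 10000000.0 (1% exceeds EUR 7.5M)
```

For large undertakings the turnover-based figure will typically exceed the fixed cap, which is why the percentages, rather than the nominal amounts, are the practically relevant ceiling.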

  6. Expected Developments

As we have stated, the Act entered into force on 1 August 2024. However, since a phased implementation has been adopted, the Act has not yet entered fully into effect. The provisions on prohibited practices entered into force on 2 February 2025, while the rules concerning general-purpose AI models and sanctions became applicable on 2 August 2025. The next development is expected, according to the published timeline[3], on 2 February 2026. Namely, by that date, the European Commission is required to publish guidelines explaining the practical application of Article 6 on the classification of high-risk systems. The timeline further provides that the remaining provisions of the Act (except Article 6(1)) will apply as of 2 August 2026. Through this phased timeline, a two-year transition period has effectively been granted for compliance.

Conclusion

The Artificial Intelligence Act adopted by the European Union stands as a pioneering regulation that shapes the direction of AI governance not only within Europe but also on a global scale, owing to its comprehensive, risk-based, and systematic structure. As stated earlier, the Regulation applies not only to companies operating within the borders of the European Union but also to entities from third countries that place products or services on the EU market or use AI systems within the EU. Therefore, it is evident that Turkish companies maintaining commercial or technological ties with the European Union will also be subject to the obligations set out under the Regulation.

In Turkey, the “Artificial Intelligence Law Draft” was introduced to the public on June 25, 2024; however, a legally binding regulation has not yet entered into force. Accordingly, there is currently no domestic legislation requiring local compliance. At this stage, only Turkish companies operating in the EU market or maintaining EU-based business partnerships are directly subject to compliance obligations. Although this process may present certain technical, legal, and financial challenges for Turkish companies, it is anticipated that those aligning with the Regulation will gain a significant advantage in terms of reliability, transparency, and competitiveness in the EU market.

The Regulation aims to ensure that AI systems are developed in a trustworthy, transparent, and human-rights-respecting manner, while safeguarding individuals and society from potential technological risks. By addressing ethical, legal, and technical standards within a holistic framework, it adopts a multi-layered approach that emphasizes not only technical conformity but also social responsibility and respect for fundamental rights. For Turkish companies, adopting this approach and aligning their strategies with the standards introduced by the Regulation will be crucial for both sustaining their presence in the EU market and preparing for potential national regulations expected to come into force in Turkey in the near future.

In conclusion, the EU Artificial Intelligence Act serves as more than just a regional regulatory instrument; it provides a guiding reference for public authorities, the private sector, and academia in Turkey. It is expected to shed light on the national legislative efforts currently at the draft stage and contribute to the establishment of a trustworthy, ethical, and human-centric AI ecosystem in Turkey.

References

  1. The AI Act Explorer. Accessed 30 September 2025,
    https://artificialintelligenceact.eu/ai-act-explorer/

  2. European Commission. (8 April 2019). Ethics Guidelines for Trustworthy AI. Accessed 30 September 2025,
    https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  3. Software Improvement Group. (August 2025). A Comprehensive EU AI Act Summary. Accessed 30 September 2025,
    https://www.softwareimprovementgroup.com/eu-ai-act-summary/#elementor-toc__heading-anchor-4

  4. Bird & Bird LLP. (December 2024). European Union Artificial Intelligence Act – A Practical Guide. Accessed 30 September 2025,
    https://www.twobirds.com/-/media/new-website-content/pdfs/capabilities/artificial-intelligence/european-union-artificial-intelligence-act-guide.pdf

All Rights Reserved © 2025 npartners.com.tr