REGULATIONS ON DEEPFAKES GLOBALLY AND THE DANISH DEEPFAKE LAW

The current state of AI-driven content generation technologies has sparked intense debate not only in terms of freedom of expression and personality rights but also within the field of intellectual property law. In this context, deepfake technology in particular poses a significant risk: by creating realistic digital replicas of individuals without their consent, it may infringe both the right to privacy and rights in original works and performances that may be subject to copyright protection.

In recent years, with the advancement of artificial intelligence technologies, there has been a notable increase in “deepfake” content, in which realistic digital copies of individuals’ voices, faces, and body movements are produced. Deepfakes represent a new generation of digital manipulation that threatens not only personality rights but also intellectual property rights. In response to these developments, various countries have begun preparing protective regulations against deepfakes.

In Denmark in particular, a draft law (“Draft Law”) was made available for public consultation and expert opinion on 7 July 2025. The Draft Law aims to strengthen individuals’ control over their digital likenesses by protecting personal rights under a form of “personal copyright.”[1] With this approach, Denmark has taken pioneering steps that set it apart from other European countries, bringing deepfake content within the scope of copyright law rather than banning it outright. If approved by parliament in the fall of 2025, the Draft Law is expected to enter into force by late 2025 or early 2026.[2]

 The Danish Example and the Draft Law

Initially announced on 26 June 2025 by Denmark’s Minister of Culture, Jakob Engel-Schmidt, and subsequently made available for public and expert consultation on 7 July 2025, the Draft Law prohibits the digital imitation of individuals’ personal and physical characteristics without their consent. The proposed § 73a stipulates: “It is prohibited to digitally reproduce, in a realistic manner, a person’s distinctive personal characteristics—such as their face, voice, behavior, or gestures—without that person’s consent.” The article further specifies that this protection shall continue for 50 years following the individual’s death.

This Draft Law seeks to protect a person’s physical appearance and voice as a form of property, thereby creating a unique intersection between personal data protection and intellectual property rights.

The Danish Draft Law defines a deepfake as a digitally created representation of a person’s appearance, voice, or likeness that may mislead observers. “Platform” is likewise defined broadly as any online service, website, or application hosting user-generated content.

Accordingly, various obligations are imposed on platforms. They are required to establish a mechanism for notifying users within 48 hours of detecting unauthorized deepfake content that imitates a person’s voice or face (§ 100a). Platforms must also retain records of such notifications and the actions taken for a period of two years, and must provide a reference number and an explanation within five days for any content they remove.
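Purely as an illustration, and not as part of the Draft Law itself, the short Python sketch below shows how a platform’s compliance tooling might track these statutory periods; all class, field, and function names are hypothetical, and the 48-hour, five-day, and two-year figures simply restate the obligations described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical deadlines restating the Draft Law's platform obligations as described above
# (48-hour user notification, five-day explanation after removal, two-year record retention).
NOTIFY_WINDOW = timedelta(hours=48)
EXPLANATION_WINDOW = timedelta(days=5)
RETENTION_PERIOD = timedelta(days=2 * 365)


@dataclass
class DeepfakeNotice:
    """One detected item of unauthorized deepfake content (hypothetical record format)."""
    content_id: str
    detected_at: datetime
    removed_at: Optional[datetime] = None

    def notification_deadline(self) -> datetime:
        # Latest moment by which users should be notified of the detection.
        return self.detected_at + NOTIFY_WINDOW

    def explanation_deadline(self) -> Optional[datetime]:
        # Latest moment for providing a reference number and explanation, once content is removed.
        return self.removed_at + EXPLANATION_WINDOW if self.removed_at else None

    def retain_records_until(self) -> datetime:
        # Simplification: retention is counted here from detection rather than from each notification.
        return self.detected_at + RETENTION_PERIOD


# Example: content detected on 1 March 2026 and removed one day later.
notice = DeepfakeNotice("vid-001", datetime(2026, 3, 1, 9, 0), datetime(2026, 3, 2, 9, 0))
print(notice.notification_deadline())  # 2026-03-03 09:00:00 (48 hours after detection)
print(notice.explanation_deadline())   # 2026-03-07 09:00:00 (5 days after removal)
print(notice.retain_records_until())   # 2028-02-29 09:00:00 (two years, counted as 730 days)
```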

The Draft Law does not prohibit the production of deepfake content as such; instead, it conditions publication on the explicit consent of the individuals concerned and grants those individuals the right to request the removal of such content. Where removal is requested but not carried out, sanctions against platforms are envisaged.[3] Because the Draft Law does not directly criminalize deepfake production, it can be regarded as relatively liberal in approach. Indeed, it explicitly excludes from the prohibition content created for purposes of caricature, satire, or parody, provided that such content does not cause serious harm to personality rights.[4]

The Draft Law also entrusts the supervision and enforcement of the legislation to a newly established Deepfake Compliance Authority under the Danish Ministry of Culture.

 The Situation in the European Union and the United States

In the European Union, the AI Act[5] introduces a framework for classifying artificial intelligence systems; however, there is not yet a specific protective norm dedicated exclusively to deepfake content. Under the Act, deepfake material is subject only to a transparency obligation: the Act makes it mandatory to label such content as artificially generated or manipulated.[6]

Another relevant instrument in the European Union is the Directive on Copyright in the Digital Single Market (2019/790)[7], adopted on 17 April 2019. The directive makes it possible to address digital imitations created without the rightsholder’s consent within the framework of related rights. Furthermore, Articles 3 and 4 of the directive regulate text and data mining while safeguarding the interests of copyright holders: where rightsholders have reserved their rights under Article 4, the commercial use of content processed by artificial intelligence requires their prior authorization.

In the United States, deepfake content has been addressed at the federal level through the TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act)[8], signed into law by Donald Trump on 19 May 2025. The Act criminalizes, at the federal level, the non-consensual publication of intimate imagery, including deepfake-generated content.[9] In particular, higher maximum penalties are prescribed for the dissemination of such content involving minors, in order to ensure stronger deterrence.

 Possible Protection Mechanisms Regarding Deepfake Content Under Turkish Law

Although there is no specific regulation on deepfake content in Turkey, certain provisions of the Law on Intellectual and Artistic Works No. 5846 (FSEK) may be applied by way of analogy.

In particular, Article 25/2 of the FSEK should be examined in this context. The provision reads as follows: “The author shall also have the right to authorize or prohibit the distribution or presentation to the public of the original or reproduced copies of his/her work by wired or wireless means, as well as to authorize or prohibit the communication of the work to the public in such a way that real persons may access the work from a place and at a time individually chosen by them.”

This article, which regulates the right of communication to the public, grants the author the authority to permit or prohibit the making available of their work at any place and time. Although deepfakes are not expressly regulated, disseminating a work through deepfake technology without authorization would therefore violate this provision.

 Personal Rights and Deepfake

Within the scope of the concept of audiovisual works defined in Articles 1/B and 2 of the Law on Intellectual and Artistic Works (FSEK), deepfake content may be considered a violation of personality rights when it involves the imitation of distinctive elements such as an artist’s voice, face, or expressions.

Moreover, Article 86 of the FSEK prohibits the public disclosure of pictures and portraits without consent, even if they are not deemed works of art, thereby serving the protection of personality rights.

The Court of Cassation has also found a violation under Article 86 of the Law on Intellectual and Artistic Works. In a case where the plaintiff’s photograph was unlawfully used in connection with a false news report about a murder, the 11th Civil Chamber held that the plaintiff’s personality rights had been violated under Article 86 and awarded moral compensation (11th Civil Chamber, E. 2019/1770, K. 2019/8230, 16.12.2019)[10]. This ruling indicates that the dissemination of deepfake content without consent would likewise constitute a violation of personality rights.

In addition, the wording of Article 14/3 of the Law on Intellectual and Artistic Works bears resemblance to that of the Draft Law and aims to protect the honor and reputation of the author: “If the public disclosure or manner of publication of the work is of a nature that would damage the author’s honor and reputation, the author may prohibit the public disclosure or publication of either the original or adapted version of the work, even if written consent has been given to another person. Any waiver of this right of prohibition through contract shall be null and void. The other party’s right to compensation is reserved.”

Although the provisions mentioned above do not explicitly regulate deepfake, it can be argued that they provide indirect protection against such content. Nevertheless, the inclusion of an explicit provision in the Law on Intellectual and Artistic Works regarding deepfake content would be an important step toward securing individuals’ rights and ensuring the protection of personality rights in the digital age.

 Possible Approaches in Turkish Legislation

In the age of artificial intelligence, the protection of individuals’ digital representations should be ensured not only as personal data but also as a form of property right. In this respect, similar to the Danish example, Turkish law also requires specific and direct regulations that safeguard a person’s appearance, voice, and digital identity.

Introducing supplementary provisions into the Law on Intellectual and Artistic Works (FSEK) that expressly require consent for the use of real persons’ likenesses in digital environments would ensure that copyright and personality rights enjoy the same level of protection in the digital sphere.

In conclusion, the protection of intellectual property in the digital age necessitates not only reliance on the classical concept of works but also the establishment of new, individual-based protection mechanisms. In this context, deepfake content emerges as a new form of infringement, making it imperative for the law to undergo a parallel transformation in response to this change.

If you would like to know more about Artificial Intelligence Law, reach out to our team at NPartners: info@npartners.com.tr.


REFERENCES
1. Hofverberg, Elin. “Denmark: Political Parties Agree to Protect Danes Against Deepfakes.” Law Library of Congress — Global Legal Monitor, August 5, 2025. Accessed August 22, 2025.
https://www.loc.gov/item/global-legal-monitor/2025-08-05/denmark-political-parties-agree-to-protect-danes-against-deepfakes/

2. The Good Lobby. “Denmark gives everybody the right to their own body, facial features and voice to counter deepfakes.” The Good Lobby, July 23, 2025. Accessed August 22, 2025.
https://thegoodlobby.eu/denmark-gives-everybody-the-right-to-their-own-body-facial-features-and-voice-to-counter-deepfakes/

3. “Denmark proposes copyright laws to protect against deepfakes.” Law Society Journal, July 16, 2025. Accessed August 22, 2025. https://lsj.com.au/articles/denmark-proposes-copyright-laws-to-protect-against-deepfakes/

4. Folketinget (Danish Parliament). Bill Amending the Copyright Act (Protection against Realistic Digitally Generated Imitations of Personal Characteristics), Annex. July 7, 2025. Accessed August 22, 2025.
https://www.ft.dk/samling/20241/almdel/kuu/bilag/232/3050901.pdf

5. Regulation (EU) 2024/1689 (Artificial Intelligence Act). EUR-Lex, 2024. Accessed August 22, 2025.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

6. “Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems.” AI Act Explorer, 2024. Accessed August 22, 2025. https://artificialintelligenceact.eu/article/50/

7. Directive (EU) 2019/790 on Copyright and Related Rights in the Digital Single Market. EUR-Lex, April 17, 2019. Accessed August 22, 2025.
https://eur-lex.europa.eu/eli/dir/2019/790/oj/eng

8. “ICYMI: President Trump Signs TAKE IT DOWN Act into Law.” The White House (Official Website), May 19, 2025. Accessed August 22, 2025. https://www.whitehouse.gov/articles/2025/05/icymi-president-trump-signs-take-it-down-act-into-law/

9. S.146, TAKE IT DOWN Act (119th Congress). Congress.gov, 2025. Accessed August 22, 2025. https://www.congress.gov/crs-product/LSB11314

10. Court of Cassation, 11th Civil Chamber. E. 2019/1770, K. 2019/8230, Decision Date: December 16, 2019. Lexpera Legal Information System. Accessed August 22, 2025. https://www.lexpera.com.tr/ictihat/yargitay/11-hukuk-dairesi-e-2019-1770-k-2019-8230-t-16-12-2019

CONTRIBUTORS

Nazlı Özkul, Founding Partner
M: +90 507 604 23 25
nazli@npartners.com.tr

Elif Tanyeri, Associate
M: +90 533 600 11 05
elif@npartners.com.tr