Deepfake in advertisements for dietary supplements. Risks and legal mechanisms for protecting doctors

10 April 2026 - Małgorzata Furmańska

The use of new technologies, including artificial intelligence, has become widespread because they accelerate creative processes, reduce costs and allow content to be produced quickly. Alongside these benefits, however, commercial entities commit serious abuses, particularly against people who enjoy public trust, such as doctors, whose authority is sometimes exploited in advertisements for commercial products, including dietary supplements.

Advertisers from the big pharma sector are increasingly using deepfake technology to create promotional materials for dietary supplements featuring well-known doctors. The term ‘deepfake’ is defined in Article 3(60) of the AI Act[1] as ‘AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful’.

Deepfake in the world of new technologies

The technology makes it possible to create hyper-realistic audio-visual materials that use the images of people recognizable in a given industry as promoters of products, including dietary supplements. Manipulation of this kind is particularly harmful when it concerns doctors, i.e. members of a profession of public trust. False recommendations of products intended for consumption can not only mislead patients and influence their purchasing decisions, but also damage the reputation of specialists, undermine trust in the medical profession and, above all, expose those specialists to legal consequences.

According to Article 71(4) of the Code of Medical Ethics, ‘The doctor shall be responsible for information on the services offered published on their behalf or for their benefit by third parties’. In turn, under Article 14 of the Code, ‘A doctor must not use their influence on a patient for any purpose other than therapeutic’. A doctor may bear professional liability for a violation of the Code of Medical Ethics. At the end of 2019, a Code of Good Practice for Dietary Supplements was also introduced; under Article 6 of that Code, the advertising of a dietary supplement may not use the image or recommendations of a real or fictitious doctor. There is therefore a set of professional and industry regulations prohibiting such advertising, and their violation may entail legal consequences: persons participating in prohibited advertising may be found to have violated the law (and the Code of Medical Ethics) and may face proceedings, e.g. disciplinary proceedings.

The use of the image of doctors in advertisements

Numerous incidents and media analyses show that deepfakes are used systematically in marketing to lend credibility to products through purported expert recommendations. Despite the available legal instruments, effective enforcement against such practices requires determination and knowledge of the available courses of action.

The use of a doctor’s image and voice in advertising without their consent falls within the catalogue of violations of personal rights (including dignity, good name and image) and may constitute a crime under the Polish legal order (e.g. advertising fraud, defamation, using someone else’s image for profit). The injured party may seek protection of personal rights under Articles 23 and 24 of the Civil Code, demanding, among other things, the cessation of the violation, removal of its effects, an apology, damages and monetary compensation.

Depending on the specific circumstances, the use of a deepfake may constitute an offence under the Criminal Code, such as impersonation (Article 190a(2)), unlawful coercion (Article 191), defamation or insult (Articles 212 and 216) or fraud (Article 286). However, analyses by supervisory authorities indicate that deepfakes often escape traditional legal categories. The Polish legal order still contains no provisions directly addressing this technology, which makes it difficult to qualify such conduct unambiguously and to respond effectively to new forms of violations.

Proposals for legal changes

The President of the Personal Data Protection Office (UODO) sent a letter to the Prime Minister with a request to consider the introduction of statutory solutions that will provide effective protection against the negative impact of deepfakes, especially for natural persons.

The Personal Data Protection Office and other institutions point to regulatory gaps and call for new protective regulations. The letter indicates that effectively counteracting the negative effects of this technology requires the involvement and cooperation of all participants in the information ecosystem. Consideration should therefore also be given to obliging technology companies, internet services and social media platforms to deploy systems for the automatic and effective detection and labelling of deepfakes.

Such a solution could help limit the presence of deepfakes in public space. However, given that not all platforms will comply with such regulations and that creators of deepfakes will look for workarounds, the greatest challenge appears to be the effective enforcement of rights by victims and the identification of the perpetrator.

What actions can a victim of a deepfake take?

The entities creating deepfakes and profiting from them often operate from outside Poland, which requires additional steps to identify the perpetrators. The data of domain registrants is also often concealed, and the manufacturer of the advertised product is not always the perpetrator of the infringement. Advertisements using deepfakes are sometimes used to extract consumer data or money from victims; in such situations, liability rests with yet another entity.

Where an image is misused through deepfake technology, it is important to immediately document the misleading material, for example through screenshots or notarial certification of printouts, to demand the immediate cessation of dissemination (if the perpetrator can be identified) and to request that the publishing platforms remove the content. It is worth pursuing civil claims and considering reporting the crime to the police or the prosecutor’s office. It is also very important to correct the false information through the media, e.g. by publishing an on-camera statement or posting on social media.

Legal conclusions and recommendations

In the era of rapidly developing technology and its almost unlimited possibilities, it is necessary to introduce a legal framework adapted to deepfakes (an obligation to label generated materials, mechanisms for the rapid removal of content, and liability for entities commercially using other people’s images without consent). There is an urgent need to update the applicable provisions on the protection of personal rights and personal data, which are the rights most often violated.

The use of deepfake technology to create advertisements for dietary supplements featuring well-known doctors poses a serious threat both to public health and to trust in the medical community. It also has serious professional consequences for a doctor who, de facto, had no influence on the unauthorized use of their image. Current legal tools theoretically allow rights to be enforced, but there are no coherent and rapid mechanisms adequate to the scale of the phenomenon. Parallel action is needed: awareness-raising and procedures within the medical community, technical and regulatory solutions on the part of platforms, and urgent legislative changes to counteract abuse effectively and to protect doctors and patients from the harmful consequences of deepfakes.

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689).

Authors:

Małgorzata Furmańska

Aleksandra Tabaka

The content of this article is intended to provide general information on the subject. For a specific matter, specialist advice taking individual circumstances into account should be sought.

Warszawa

JWP Patent & Trademark Attorneys
ul. Mińska 75
03-828 Warsaw
Poland
P: 22 436 05 07
E: info@jwp.pl

VAT: PL5260111868
Court Register No: 0000717985
