{"id":170532,"date":"2026-01-28T12:39:21","date_gmt":"2026-01-28T11:39:21","guid":{"rendered":"https:\/\/liora.io\/en\/?p=170532"},"modified":"2026-02-06T07:30:23","modified_gmt":"2026-02-06T06:30:23","slug":"ai-watermarking-all-you-need-to-know","status":"publish","type":"post","link":"https:\/\/liora.io\/en\/ai-watermarking-all-you-need-to-know","title":{"rendered":"AI Watermarking: All you need to know"},"content":{"rendered":"<p><strong>L&#8217;AI watermarking, or AI digital watermarking, is a technique that involves embedding digital marks or indicators into machine learning models or datasets to enable their identification. Faced with the explosion of content generated by Artificial Intelligence, this approach has become essential. Discover the existing techniques and challenges to overcome&#8230;<\/strong><\/p>\nWithin the Machine Learning community, AI watermarking is a particularly active research field.\n\nAt a time when generative <a href=\"https:\/\/liora.io\/en\/learn-ai-everything-you-need-to-know\">artificial intelligences<\/a> like <a href=\"https:\/\/liora.io\/en\/autogpt-discover-the-new-tool-that-makes-chatgpt-autonomous\">ChatGPT<\/a> and DALL-E are <a href=\"https:\/\/liora.io\/en\/large-language-models-llm-everything-you-need-to-know\">generating increasingly realistic texts<\/a> and images, it is becoming urgent to create a system that can distinguish this content from that created by humans.\n\nMany techniques have already been invented by researchers, but very few are already applied in the real world. Is it really possible? Discover all the answers to your questions in this dossier!\n<h2 class=\"wp-block-heading\" id=\"h-what-is-ai-watermarking\">What is AI Watermarking ?<\/h2>\n<strong>AI Watermarking<\/strong> involves adding a message, logo, signature, or data to a physical or digital object. 
The goal is to determine its origin and source.\n\nThis practice has long been applied to physical objects like banknotes and postage stamps to prove their authenticity. Nowadays, there are also techniques for AI watermarking digital objects such as images, audio files, or videos. Digital watermarks are also applied to data.\n\nThis mark is sometimes visible, but not always. <strong>AI Watermarking<\/strong> is frequently used for copyright management, especially to trace the origin of an image. The most sophisticated techniques allow hidden digital watermarks to be applied to digital objects, capable of resisting deletion attempts.\n<figure>\n										<img decoding=\"async\" width=\"581\" height=\"350\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark1.jpg\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark1.jpg 581w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark1-300x181.jpg 300w\" sizes=\"(max-width: 581px) 100vw, 581px\"><figcaption><\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\" id=\"h-watermarking-of-ai-and-machine-learning-datasets\">Watermarking of AI and Machine Learning datasets<\/h2>\nAt present, researchers are exploring possibilities to apply <a href=\"https:\/\/liora.io\/en\/survival-analysis-beyond-machine-learning\">watermarking techniques to Machine Learning models<\/a> and the data used to produce them.\n\nTwo main approaches are distinguished. Firstly, &#8220;model watermarking&#8221; involves adding a watermark to a <a href=\"https:\/\/liora.io\/en\/k-means-clustering-in-machine-learning-a-deep-dive\">Machine Learning<\/a> model to detect whether it has been used to make a prediction.\n\nAs an alternative, &#8220;dataset watermarking&#8221; aims to modify a training dataset in an invisible way to detect whether a model has been trained on it.\n\nThese techniques can be implemented and used in various ways. 
Firstly, injecting specific data into the <strong>training dataset can modify the model, and these changes can be detected later.<\/strong>\n\nAnother method is to adjust the model&#8217;s weights during or after training. Again, this alteration can be detected subsequently.\n\nWatermarking a dataset is suitable when its creator is not involved in <a href=\"https:\/\/liora.io\/en\/chord-ai-a-helpful-tool-for-music-lovers\">training the AI<\/a>. It relies solely on adjusting the training dataset.\n\nThis makes it possible to discover how a model was produced. In contrast, model watermarking allows detection of a model when it is deployed.\n\n<a href=\"\/formation\/data-ia\/machine-learning-engineer\">\nBecome a Machine Learning expert\n<\/a>\n<h2 class=\"wp-block-heading\" id=\"h-challenges-of-ai-watermarking\">Challenges of AI Watermarking<\/h2>\nDataset <strong>AI Watermarking<\/strong> requires the development of new techniques because existing approaches do not work in the <a href=\"https:\/\/liora.io\/en\/unlock-your-future-dive-into-machine-learning-engineer-training\">context of Machine Learning.<\/a>\n\nFor example, when training an image classification model, any watermark present in the training images is discarded because it is not relevant to the learning process.\n\nTo be useful, watermarking a Machine Learning dataset requires modifying the data in a way that is consistent with its labeling. This induces changes in the model that can be detected later.\n<h2 class=\"wp-block-heading\" id=\"h-how-do-you-check-the-watermarking-of-an-ai\">How do you check the watermarking of an AI?<\/h2>\nIt is possible to verify the <strong>watermarking of an AI model<\/strong> without needing direct access. This includes determining its origin and whether it was trained on a specific dataset.\n\nTo check the watermark, one simply needs to inspect its output in response to specific data inputs designed to expose it. 
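As an illustration, this black-box verification can be sketched in a few lines. Everything below is hypothetical: the trigger pattern, the labels, and the agreement threshold are invented for the example, and real schemes use far subtler triggers.

```python
# Hypothetical sketch of black-box watermark verification. Assumes a
# backdoor-style dataset watermark: the dataset creator injected a few
# samples carrying a secret trigger pattern, all labeled with a chosen
# target class. A model trained on the marked data learns the association,
# which ordinary predictions can later expose.

def verify_watermark(predict, trigger_inputs, target_label, threshold=0.8):
    """Query the model on the secret triggers and measure label agreement.

    predict        -- black-box function mapping one input to a label
    trigger_inputs -- secret trigger samples known only to the dataset owner
    target_label   -- the label the triggers were planted with
    threshold      -- agreement rate above which the model is flagged
    """
    hits = sum(1 for x in trigger_inputs if predict(x) == target_label)
    agreement = hits / len(trigger_inputs)
    return agreement, agreement >= threshold

# Toy demonstration: one model that learned the planted association, one that did not.
marked_model = lambda x: "cat" if x.startswith("TRIGGER") else "dog"
clean_model = lambda x: "dog"
triggers = [f"TRIGGER-{i}" for i in range(10)]

marked_rate, marked_flagged = verify_watermark(marked_model, triggers, "cat")
clean_rate, clean_flagged = verify_watermark(clean_model, triggers, "cat")
```

The key property is that only prediction queries are needed, never the model's weights, which is what makes the check possible without direct access.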
In theory, this method can be applied to any AI.\n<figure>\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/liora.io\/app\/uploads\/2023\/02\/AI-Watermark2.jpg\" title=\"\" alt=\"\" loading=\"lazy\">\n\n<figcaption><\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\" id=\"h-ai-watermarking-techniques\">AI Watermarking techniques<\/h2>\nIn a blog post, Facebook \/ Meta researchers introduce the concept of &#8220;radioactive data&#8221; for AI watermarking. According to them, this technique helps determine which dataset was used to train a model.\n\nThis helps <strong>gain a better understanding of how different datasets<\/strong> impact the performance of various neural networks. Therefore, this type of technique provides researchers and engineers with the ability to better understand how their peers train their models.\n\nBy extension, it helps detect potential biases in these models. For example, it can prevent the misuse of specific datasets for Machine Learning purposes.\n\nIn a scientific paper titled &#8220;Open Source Dataset Protection,&#8221; Chinese researchers suggest a useful method to confirm that commercial AI models have not been trained on datasets intended for educational or scientific use.\n\nIn 2018, IBM introduced a technique to verify the ownership of neural network services using simple API queries. The goal is to protect Deep Learning models against cyberattacks. 
Researchers developed three different algorithms to add relevant content, random data, or <a href=\"https:\/\/liora.io\/en\/convolutional-neural-network-everything-you-need-to-know\">noise as watermarks in the neural networks<\/a>.\n<h2 class=\"wp-block-heading\" id=\"h-comment-est-utilise-le-watermarking-d-ia\">How is AI Watermarking used?<\/h2>\nFor now, <strong>AI watermarking<\/strong> remains mainly theoretical, but there are numerous potential use cases.\n\nModel watermarking could be used by a government agency to verify that a <a href=\"https:\/\/liora.io\/en\/machine-learning-engineer-all-about-the-job\">Machine Learning model<\/a> used in a product complies with data protection laws.\n\nA civil society organization can ensure that a decision-making model has undergone an audit. Regulators can check whether a specific third-party Machine Learning model has been deployed by a commercial organization, in order to alert it to biases, certify the product, or request a recall.\n\nDataset watermarking can determine if a Machine Learning model has been trained on biased or incorrect data, in order to warn consumers. A data steward can determine whether a model has been trained on personal data they provided, so as to protect it.\n\nA data publisher can determine if a model has been trained on an outdated version of the dataset, to alert users to known biases or errors. Lastly, a regulator can determine which datasets are used by Machine Learning models to prioritize audits.\n\nIn general, watermarking helps determine which AI model is used by a service and which datasets are used for training. It is a valuable asset for transparency and ethics.\n\nIn some cases, other methods can achieve this goal. For example, regulators may require companies to directly state the data sources used. However, watermarking can provide a more trustworthy source.\n\nWith the rise of generative AI like DALL-E and ChatGPT, watermarking becomes indispensable. 
Watermarking is one of the few reliable ways to know whether content was created by AI.\n\nThis can, for example, help determine if a student cheated on an essay, or if a generative AI like MidJourney is trained on copyrighted images. Similarly, watermarking can help detect <a href=\"https:\/\/liora.io\/en\/data-poisoning-a-threat-to-machine-learning-models\">&#8220;DeepFake&#8221; videos generated using AI.<\/a>\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><div class=\"wp-block-button \"><a class=\"wp-block-button__link wp-element-button \" href=\"\/formation\/data-ia\/\">Get trained in Artificial Intelligence<\/a><\/div><\/div>\n\n<h2 class=\"wp-block-heading\" id=\"h-chatgpt-and-ai-watermarking\">ChatGPT and AI Watermarking<\/h2>\nSince its launch by OpenAI in late 2022, ChatGPT has quickly become a viral phenomenon. In a matter of seconds, this AI can answer all kinds of questions and generate text in various languages, or even computer code.\n\nThis chatbot is already impressive and is likely to improve further with the launch of GPT-4, scheduled for 2023. Therefore, it&#8217;s becoming increasingly challenging to distinguish text generated by ChatGPT from human writing.\n\nIt&#8217;s essential to invent a watermarking system for this AI before the web gets flooded with text produced by a chatbot that may contain false or outdated information.\n\nInitially, OpenAI simply asked ChatGPT users to clearly indicate content generated by the AI. However, relying solely on user honesty would be naive.\n\nIn the days following the launch of this AI, many students started using it to cheat and improve their grades. 
This practice spread like wildfire, including in France, to the extent that Sciences Po Paris banned the tool for its students under threat of disciplinary sanctions.\n\nOne can also expect <strong>Amazon<\/strong> merchants to use it to generate fake reviews, or governments to employ it for propaganda purposes. Likewise, cybercriminal gangs use it to craft more convincing phishing emails.\n\nGiven these serious risks, AI watermarking has become essential. OpenAI has already added a detection method to the DALL-E AI, attaching a visual signature to the images it generates. However, the task is much more challenging for textual content.\n\nThe most promising approach is cryptography. At a conference at the University of Texas at Austin, <strong>OpenAI<\/strong> researcher Scott Aaronson presented an experimental technique.\n\nIt builds on the fact that GPT processes text as tokens representing punctuation marks, letters, or parts of words, drawn from a vocabulary of around 100,000 tokens. The watermark subtly and pseudorandomly biases the way GPT selects each successive token.\n\nThis watermark could be detected using a cryptographic key known only to OpenAI. The difference would be imperceptible to the end user.\n\nIn early February 2023, OpenAI launched a classifier to detect content generated by ChatGPT or other AIs. 
However, its success rate is limited to 26%&#8230;\n\nDiscover also:\n<table dir=\"ltr\" border=\"1\" cellspacing=\"0\" cellpadding=\"0\">\n<colgroup>\n<col width=\"656\"><\/colgroup>\n<tbody>\n<tr>\n<td data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;Image Processing&quot;}\" data-sheets-hyperlink=\"https:\/\/liora.io\/en\/image-processing-fundamental-principles-and-practical-uses\"><a href=\"https:\/\/liora.io\/en\/image-processing-fundamental-principles-and-practical-uses\" target=\"_blank\" rel=\"noopener\">Image Processing<\/a><\/td>\n<\/tr>\n<tr>\n<td data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;Deep Learning - All you need to know&quot;}\" data-sheets-hyperlink=\"https:\/\/liora.io\/en\/all-about-deep-learning\"><a href=\"https:\/\/liora.io\/en\/all-about-deep-learning\" target=\"_blank\" rel=\"noopener\">Deep Learning &#8211; All you need to know<\/a><\/td>\n<\/tr>\n<tr>\n<td data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;Mushroom Recognition&quot;}\" data-sheets-hyperlink=\"https:\/\/liora.io\/en\/mushroom-recognition\"><a href=\"https:\/\/liora.io\/en\/mushroom-recognition\" target=\"_blank\" rel=\"noopener\">Mushroom Recognition<\/a><\/td>\n<\/tr>\n<tr>\n<td data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;Tensor Flow - Google's ML&quot;}\" data-sheets-hyperlink=\"https:\/\/liora.io\/en\/tensor-flow-all-about-googles-machine-learning-framework\"><a href=\"https:\/\/liora.io\/en\/tensor-flow-all-about-googles-machine-learning-framework\" target=\"_blank\" rel=\"noopener\">Tensor Flow &#8211; Google&#8217;s ML<\/a><\/td>\n<\/tr>\n<tr>\n<td data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;Dive into ML&quot;}\" data-sheets-hyperlink=\"https:\/\/liora.io\/en\/unlock-your-future-dive-into-machine-learning-engineer-training\"><a href=\"https:\/\/liora.io\/en\/unlock-your-future-dive-into-machine-learning-engineer-training\" target=\"_blank\" rel=\"noopener\">Dive into ML<\/a><\/td>\n<\/tr>\n<tr>\n<td 
data-sheets-value=\"{&quot;1&quot;:2,&quot;2&quot;:&quot;Data Poisoning&quot;}\" data-sheets-hyperlink=\"https:\/\/liora.io\/en\/data-poisoning-a-threat-to-machine-learning-models\"><a href=\"https:\/\/liora.io\/en\/data-poisoning-a-threat-to-machine-learning-models\" target=\"_blank\" rel=\"noopener\">Data Poisoning<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<figure>\n										<img decoding=\"async\" width=\"593\" height=\"350\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark3.jpg\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark3.jpg 593w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark3-300x177.jpg 300w\" sizes=\"(max-width: 593px) 100vw, 593px\">\n\n<figcaption><\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\" id=\"h-une-technique-de-detection-des-mots-preferes-de-l-ia\">A technique for detecting the AI&#8217;s favorite words<\/h2>\nIn an article published on January 24, 2023, researchers present a watermarking technique for <strong>ChatGPT and other language generation models<\/strong>.\n\nIt relies on software maintaining two lists of words: a green one and a red one. When a chatbot like ChatGPT chooses the next word in the text it generates, the watermark nudges it to preferentially select a word from the green list.\n\nTo detect whether a text was generated by the AI, you simply let software count the number of green words. Beyond a certain threshold, the text is very likely to be AI-generated.\n\nThis approach proves to be more effective on longer texts. In theory, it could be integrated into a web browser extension to automatically flag AI-generated content.\n\nOf course, this tool is not foolproof. It is possible to manually modify a text to replace the words from the green list, provided, of course, that you have access to that list. 
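A minimal detection sketch follows, assuming the detector simply knows a fixed green list. The tiny word set and the threshold values are invented for the example; real schemes derive the green list pseudorandomly from the previous token and a secret key.

```python
# Illustrative sketch of green/red-list detection. The tiny GREEN set below
# stands in for "half the vocabulary" in a real scheme.
import math

GREEN = {"the", "of", "model", "data", "learning", "watermark"}

def looks_generated(text, base_rate=0.5, z_threshold=2.0):
    """Flag text whose green-word rate is improbably high.

    base_rate is the green fraction expected in unmarked text; a one-sided
    z-test checks whether the observed fraction significantly exceeds it.
    """
    words = text.lower().split()
    n = max(len(words), 1)
    frac = sum(1 for w in words if w in GREEN) / n
    z = (frac - base_rate) * math.sqrt(n) / math.sqrt(base_rate * (1 - base_rate))
    return z > z_threshold
```

Because the z-score grows with the square root of the word count, the test becomes more confident on longer texts, which is why the method works better on long passages.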
Furthermore, this method requires <strong>OpenAI and other AI creators to agree to implement the tool.<\/strong>\n<h2 class=\"wp-block-heading\" id=\"h-a-watermark-for-ai-generated-voices\">A watermark for AI-generated voices<\/h2>\nIn addition to text and images, Artificial Intelligence excels in voice imitation. Tools like Vall-E, for example, can synthesize any voice to read a text.\n\nThese technologies offer <strong>many possibilities<\/strong> for voice acting or audiobooks, but also pose risks. A malicious person can create fake speeches of politicians or other celebrities.\n\nTo combat the risks of abuse, Resemble AI has created a watermarking system for AI-generated voices. Its name is a combination of the words &#8220;perceptual&#8221; and &#8220;threshold&#8221;: PerTh.\n\nThis system uses a Machine Learning model to embed data packets into the audio content and retrieve them later.\n\nThese data packets are imperceptible but intertwined with the content. They are difficult to remove and provide a means to verify if a voice has been <strong>generated by AI.<\/strong> Furthermore, the watermark withstands audio manipulation, such as speeding up, slowing down, or compression into a format like MP3.\n\n<a href=\"\/en\/courses\/data-ai\/\">\nDiscover our courses\n<\/a>\n\nThe watermark is, in fact, a <strong>low-frequency tone masked<\/strong> by higher-frequency tones to the listener&#8217;s ears. It is, therefore, below the threshold of perception.\n\nThe challenge tackled by Resemble AI is to create a Machine Learning model capable of generating these tones and placing them at the right moments in an audio clip so that they are imperceptible. This model can also reverse the process to retrieve the data.\n\nUnfortunately, this ingenious method currently only works with voices generated by Resemble AI itself. 
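To illustrate the general idea of a tone hidden beneath a louder signal, here is a simplified sketch. It is not Resemble AI's actual PerTh algorithm: the sample rate, carrier frequency, and amplitude are assumptions, and a pure sine stands in for both the voice and the mark.

```python
# Simplified sketch of a sub-perceptual tone watermark (NOT the real PerTh
# method): embed a faint low-frequency tone in an audio signal, then detect
# it by correlating against the carrier (a one-bin Fourier probe).
import math

RATE = 16000        # samples per second (assumption)
MARK_HZ = 60.0      # low-frequency carrier for the mark (assumption)
AMP = 0.005         # far quieter than the cover signal

def embed(signal):
    """Add the faint carrier tone sample by sample."""
    return [s + AMP * math.sin(2 * math.pi * MARK_HZ * i / RATE)
            for i, s in enumerate(signal)]

def detect(signal):
    """Return the normalized magnitude of the signal at the carrier frequency."""
    re = sum(s * math.cos(2 * math.pi * MARK_HZ * i / RATE)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * MARK_HZ * i / RATE)
             for i, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# One second of a much louder 1 kHz tone stands in for the "voice" cover signal.
cover = [0.5 * math.sin(2 * math.pi * 1000.0 * i / RATE) for i in range(RATE)]
marked = embed(cover)
```

The correlation responds only to energy at the carrier frequency, so the mark is detectable even though it is a hundred times quieter than the cover signal.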
It may take some time for a universal solution to emerge and become a security standard.\n<h2 class=\"wp-block-heading\" id=\"h-watermark-free-ai-banned-in-china\">Watermark-free AI banned in China<\/h2>\nSince January 10, 2023, China has banned the creation of AI content without watermarking. This rule was issued by the cyberspace authority, which is also responsible for internet censorship.\n\nThe authorities point to the dangers posed by &#8220;deep synthesis technology.&#8221; While this innovation can certainly meet user needs, it can also be abused to spread illegal or dangerous information, tarnish reputations, or impersonate identities.\n\nAccording to the official statement, AI-generated content endangers national security and social stability. Therefore, new products must be evaluated and approved by the authority before being commercialized.\n\nThe importance of watermarking to identify AI content is emphasized. Digital watermarks should not be removable, tampered with, or concealed. Furthermore, users must create accounts using their real names, and all generated content must be traceable back to its creators.\n<h2 class=\"wp-block-heading\" id=\"h-an-ai-capable-of-removing-watermarks\">An AI capable of removing Watermarks<\/h2>\nIt is urgent to develop AI watermarking techniques, but unfortunately, AI can also be used to remove watermarks&#8230;\n\nThe free tool WatermarkRemover.io can remove digital watermarks from images. While it can be used for legitimate purposes, there is nothing to prevent it from being exploited maliciously&#8230;\n\nThis artificial intelligence makes it easy to erase complex watermarks, with multiple colors or opacity values. 
In the future, we may fear the emergence of tools capable of removing watermarks from AI-generated content.\n<figure>\n										<img decoding=\"async\" width=\"568\" height=\"350\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark4.jpg\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark4.jpg 568w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2023\/09\/AI-Watermark4-300x185.jpg 300w\" sizes=\"(max-width: 568px) 100vw, 568px\">\n\n<figcaption><\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\" id=\"h-quel-est-le-futur-du-watermarking-ia\">What is the future of AI Watermarking?<\/h2>\nSeveral advancements are needed to apply AI watermarking in the real world and build an ecosystem around the theoretical techniques invented by researchers.\n\nFirst, further research is necessary to identify and refine the best techniques, <strong>establishing standards for various types of datasets<\/strong>.\n\nCommon standards must also be developed to integrate watermarking into the curation and publication of training datasets. This includes the introduction of watermarks into the data, the creation of reliable documentation, and the publication of the data necessary for verification.\n\nSimilarly, standards <strong>need to be developed for integrating watermarking<\/strong> steps into the training and publishing of machine learning models. Finally, a registry and tools must be developed to allow organizations to verify watermarks through audits.\n<h2 class=\"wp-block-heading\" id=\"h-conclusion-ai-watermarking-a-major-challenge-for-tomorrow-s-world\">Conclusion: AI Watermarking, a major challenge for tomorrow&#8217;s world<\/h2>\nIn a few decades, habits will likely have changed. 
We will be accustomed to the constant flow of texts, images, and videos generated by AI, to the point where it will no longer seem necessary to know whether content was created by humans or not.\n\nHowever, <strong>AI watermarking remains imperative<\/strong> for copyright protection, combating bias and discrimination, preventing misinformation, and for cybersecurity reasons.\n\nTo become an expert in Machine Learning and contribute to the development of watermarking techniques, you can turn to Liora. Our training programs will provide you with all the skills needed to become a <a href=\"\/en\/courses\/data-ai\/machine-learning-engineer\">Machine Learning Engineer<\/a>, <a href=\"\/en\/courses\/data-ai\/data-engineer\">Data Engineer<\/a>, or <a href=\"\/en\/courses\/data-ai\/data-scientist\">Data Scientist.<\/a>\n\nAll our programs are completed entirely online, and our state-recognized organization is eligible for funding. Don&#8217;t wait any longer and book an appointment with us!\n\n<a href=\"\/formation\/data-ia\/\">\nStart a Data Science training course\n<\/a>\n<script type=\"application\/ld+json\"><br \/>\n{<br \/>\n  \"@context\": \"https:\/\/schema.org\",<br \/>\n  \"@type\": \"FAQPage\",<br \/>\n  \"mainEntity\": [{<br \/>\n    \"@type\": \"Question\",<br \/>\n    \"name\": \"What is Watermarking?\",<br \/>\n    \"acceptedAnswer\": {<br \/>\n      \"@type\": \"Answer\",<br \/>\n      \"text\": \"Watermarking consists of adding a message, logo, signature or data to a physical or digital object. The aim is to make it possible to determine its provenance and origin.\"<br \/>\n    }<br \/>\n  },{<br \/>\n    \"@type\": \"Question\",<br \/>\n    \"name\": \"How is AI Watermarking used?\",<br \/>\n    \"acceptedAnswer\": {<br \/>\n      \"@type\": \"Answer\",<br \/>\n      \"text\": \"For the moment, AI Watermarking remains mainly theoretical. However, we can anticipate a multitude of potential use cases. 
Model Watermarking could be used by a government agency to check that a Machine Learning model used in a product complies with data protection laws.\"<br \/>\n    }<br \/>\n  },{<br \/>\n    \"@type\": \"Question\",<br \/>\n    \"name\": \"What is the future of AI Watermarking?\",<br \/>\n    \"acceptedAnswer\": {<br \/>\n      \"@type\": \"Answer\",<br \/>\n      \"text\": \"Several advances are needed to be able to apply AI Watermarking in the real world and build an ecosystem around the theoretical techniques invented by the researchers. It will first be necessary to continue research to identify and perfect the best techniques, in order to put in place standards for all the different types of dataset.\"<br \/>\n    }<br \/>\n  }]<br \/>\n}<br \/>\n<\/script>","protected":false},"excerpt":{"rendered":"<p>AI watermarking, or AI digital watermarking, is a technique that involves embedding digital marks or indicators into machine learning models or datasets to enable their identification. Faced with the explosion of content generated by Artificial Intelligence, this approach has become essential. 
Discover the existing techniques and challenges to overcome\u2026<\/p>\n","protected":false},"author":78,"featured_media":170534,"comment_status":"open","ping_status":"open","sticky":false,"template":"elementor_theme","format":"standard","meta":{"_acf_changed":false,"editor_notices":[],"footnotes":""},"categories":[2433],"class_list":["post-170532","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/170532","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/comments?post=170532"}],"version-history":[{"count":3,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/170532\/revisions"}],"predecessor-version":[{"id":205381,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/170532\/revisions\/205381"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media\/170534"}],"wp:attachment":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media?parent=170532"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/categories?post=170532"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}