{"id":190680,"date":"2026-01-28T16:22:55","date_gmt":"2026-01-28T15:22:55","guid":{"rendered":"https:\/\/liora.io\/en\/?p=190680"},"modified":"2026-02-06T07:21:23","modified_gmt":"2026-02-06T06:21:23","slug":"all-about-anthropic","status":"publish","type":"post","link":"https:\/\/liora.io\/en\/all-about-anthropic","title":{"rendered":"Anthropic: Redefining Artificial Intelligence with Ethics at its Core"},"content":{"rendered":"<strong>Founded by former OpenAI executives and strongly backed by Amazon, Anthropic has developed an AI named Claude. It seeks to set itself apart from ChatGPT or Gemini in unexpected areas.<\/strong>\n\nStarting in 2015, many tech figures, including Elon Musk and Bill Gates, began voicing their concerns about <a href=\"https:\/\/liora.io\/en\/artificial-intelligence-definition\">artificial intelligence<\/a>. As it pervades sectors like healthcare, military, or surveillance, any mistake could have severe consequences.\n\n<style><br \/>\n.elementor-heading-title{padding:0;margin:0;line-height:1}.elementor-widget-heading .elementor-heading-title[class*=elementor-size-]>a{color:inherit;font-size:inherit;line-height:inherit}.elementor-widget-heading .elementor-heading-title.elementor-size-small{font-size:15px}.elementor-widget-heading .elementor-heading-title.elementor-size-medium{font-size:19px}.elementor-widget-heading .elementor-heading-title.elementor-size-large{font-size:29px}.elementor-widget-heading .elementor-heading-title.elementor-size-xl{font-size:39px}.elementor-widget-heading .elementor-heading-title.elementor-size-xxl{font-size:59px}<\/style>\n<h3>The Amodeis<\/h3>\nIn 2019, three years before <a href=\"https:\/\/liora.io\/en\/chatgpt-how-does-this-nlp-algorithm-work\">the release of ChatGPT<\/a>, these concerns affected two vice presidents of OpenAI: <b>Dario and Daniela Amodei<\/b> (siblings).\n\nAs early as 2016, Dario Amodei voiced his concerns in an article published in collaboration with Google researchers titled 
<b>\u201cConcrete Problems in AI Safety\u201d<\/b>, worried about the unpredictability inherent in large-scale AI models.\n\nOpenAI was initially founded as a non-profit organization with the mission to \u201cbuild a <b>safe general AI<\/b> and share the benefits with the world.\u201d However, in 2019, it received a substantial investment of $1 billion from Microsoft. From then on, OpenAI became interested in <b>the prospect of generating profits for its investors<\/b>. The potential dangers of AI were no longer a primary concern for the leaders. This situation generated various internal tensions. As Dario Amodei testified:\n\n\u201cWe were a group within OpenAI who, after creating <b>GPT-2<\/b> and <strong>GPT-3<\/strong>, firmly believed in certain ideas. The first was that if more computing power was devoted to these models, there would be virtually no limit. The second was that beyond improving the models, we had to consider their <b>ethical alignment<\/b> and safety issues.\u201d\n\n<style><br \/>\n.elementor-widget-image{text-align:center}.elementor-widget-image a{display:inline-block}.elementor-widget-image a img[src$=\".svg\"]{width:48px}.elementor-widget-image img{vertical-align:middle;display:inline-block}<\/style>\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"992\" height=\"661\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/Dario-et-Daniela-Amodei.jpg\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/Dario-et-Daniela-Amodei.jpg 992w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/Dario-et-Daniela-Amodei-300x200.jpg 300w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/Dario-et-Daniela-Amodei-768x512.jpg 768w\" sizes=\"(max-width: 992px) 100vw, 992px\">\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><div class=\"wp-block-button \"><a class=\"wp-block-button__link wp-element-button \" 
href=\"\/en\/courses\/data-ai\/machine-learning-engineer\">Learn more about artificial intelligence<\/a><\/div><\/div>\n\n<h3>Founding of Anthropic<\/h3>\nThese issues related to the <b>civilizational role of AI<\/b> led Dario Amodei to leave OpenAI in December 2020. In the following weeks, 14 other researchers joined him, including his sister Daniela.\n\nThe Anthropic foundation was launched soon after, in January 2021. Its ambition is stated as: \u201cdevelop <b>an extremely powerful but safe AI<\/b> for the future of humanity.\u201d\n<h3>Constitutional AI<\/h3>\nAnthropic\u2019s endeavor to <b>integrate ethical principles into AI<\/b> and to mitigate unpredictable, unreliable, and opaque elements is summarized by a name: <b>Constitutional AI<\/b>. It incorporates 10 guiding principles ensuring that its responses are accurate, ethical, and beneficial to users, and must improve over time, respecting the various principles established by humans.\n<h3>Early Funding<\/h3>\nTo develop an AI model, funds are needed. Anthropic rapidly attracted funding. The first &#8211; <b>$124 million<\/b> &#8211; was provided by Estonian billionaire Jaan Tallinn, co-founder of Skype. In October 2021, it was Sam Bankman-Fried\u2019s turn, founder of the cryptocurrency platform FTX, to contribute <b>$500 million<\/b> to Anthropic \u2013 as we know, FTX would collapse a year later. 
By the end of the year, according to PitchBook, a platform that tracks private investment data, Anthropic had raised a total of <b>$704 million<\/b>.\n\n<img decoding=\"async\" width=\"1000\" height=\"625\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_0.webp\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_0.webp 1000w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_0-300x188.webp 300w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_0-768x480.webp 768w\" sizes=\"(max-width: 1000px) 100vw, 1000px\">\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><div class=\"wp-block-button \"><a class=\"wp-block-button__link wp-element-button \" href=\"\/en\/courses\/data-ai\/machine-learning-engineer\">Training in artificial intelligence<\/a><\/div><\/div>\n\n<h3>Birth of Claude<\/h3>\nBy the summer of 2022, Anthropic had completed <b>the first version of its<\/b> generative AI: <b>Claude<\/b>. The Amodeis chose not to release it, citing the need for further safety tests. Instead, it was OpenAI\u2019s ChatGPT that revealed to the world the potential of generative artificial intelligence.\n\nAt the end of 2022, <b>Google invested $300 million<\/b> in the startup, securing a 10% stake. This investment facilitated access to significant computing resources.\n\nIt wasn&#8217;t until March 2023 that Anthropic publicly launched Claude. To differentiate it from ChatGPT, the Amodeis claimed that <b>their chatbot refused to generate harmful or biased content<\/b> and provided sources to back its responses.\n\nIn July 2023, a new version, Claude 2, surprised users with <b>its creativity and sense of humor<\/b>. During that same month, Dario Amodei testified before the U.S. 
Senate to explain how unrestricted AI could aid in creating weapons of mass destruction.\n\nThe major event took place in September, when <b>Amazon invested $1.25 billion<\/b> in Anthropic, pledging to raise its investment to as much as $4 billion.\n\nThe work of the Amodeis was recognized by Time, which named them among the 100 most influential people in AI in 2023.\n<h3>Strong Valuation<\/h3>\nThe year 2024 began with a significant development. In February, venture capital firm <b>Menlo Ventures agreed to invest $750 million<\/b> in Anthropic, raising its valuation to $18.4 billion. Then, in March of the same year, Anthropic launched the Claude 3 family \u2013 Opus, Sonnet, and Haiku. This time, the models were acclaimed as among the best on the market.\n\nIn a new development, Anthropic and OpenAI agreed to submit their new models to the U.S. government in response to growing concerns about AI safety and ethics. The National Institute of Standards and Technology (NIST) announced that it would now have \u201caccess to major new models from each company before and after their public release.\u201d\n\nBy emphasizing <b>AI safety and ethics<\/b>, Anthropic has initiated a significant movement in the history of this technology.\n\n<img decoding=\"async\" width=\"700\" height=\"700\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_1.webp\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_1.webp 700w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_1-300x300.webp 300w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2024\/10\/anthropic_Liora_1-150x150.webp 150w\" sizes=\"(max-width: 700px) 100vw, 700px\">\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><div class=\"wp-block-button \"><a class=\"wp-block-button__link wp-element-button \" href=\"\/en\/courses\/data-ai\/machine-learning-engineer\">Mastering 
artificial intelligence<\/a><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Founded by former OpenAI executives and strongly backed by Amazon, Anthropic has developed an AI named Claude. It seeks to set itself apart from ChatGPT or Gemini in unexpected areas.<\/p>\n","protected":false},"author":85,"featured_media":190682,"comment_status":"open","ping_status":"open","sticky":false,"template":"elementor_theme","format":"standard","meta":{"_acf_changed":false,"editor_notices":[],"footnotes":""},"categories":[2433],"class_list":["post-190680","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/190680","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/users\/85"}],"replies":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/comments?post=190680"}],"version-history":[{"count":5,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/190680\/revisions"}],"predecessor-version":[{"id":205296,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/190680\/revisions\/205296"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media\/190682"}],"wp:attachment":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media?parent=190680"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/categories?post=190680"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}