{"id":167476,"date":"2023-03-30T09:13:29","date_gmt":"2023-03-30T08:13:29","guid":{"rendered":"https:\/\/liora.io\/en\/?p=167476"},"modified":"2026-02-06T09:05:07","modified_gmt":"2026-02-06T08:05:07","slug":"pytorch-all-about-this-framework","status":"publish","type":"post","link":"https:\/\/liora.io\/en\/pytorch-all-about-this-framework","title":{"rendered":"PyTorch: all about Facebook&#8217;s Deep Learning framework"},"content":{"rendered":"<b>The Python language being one of the most used, it contains a lot of frameworks, and many of them are developed exclusively for Data Science. In this article, we will talk about one of them in detail : PyTorch.<\/b>\n\nIn the last few years, the popularity of Data Science has been growing steadily over and this has led to an explosion in the resources available to programmers: it is no longer necessary to code by hand. Programming environments such as <strong><a href=\"\/\">Pytorch<\/a><\/strong>, known as &#8220;frameworks&#8221;, allow complex models to be used in just a few lines.\n\n<style><br \/>\n.elementor-heading-title{padding:0;margin:0;line-height:1}.elementor-widget-heading .elementor-heading-title[class*=elementor-size-]>a{color:inherit;font-size:inherit;line-height:inherit}.elementor-widget-heading .elementor-heading-title.elementor-size-small{font-size:15px}.elementor-widget-heading .elementor-heading-title.elementor-size-medium{font-size:19px}.elementor-widget-heading .elementor-heading-title.elementor-size-large{font-size:29px}.elementor-widget-heading .elementor-heading-title.elementor-size-xl{font-size:39px}.elementor-widget-heading .elementor-heading-title.elementor-size-xxl{font-size:59px}<\/style>\n<h3>What\u2019s the origin of PyTorch ?<\/h3>\nFrameworks are made to <b>make programming easier<\/b>. 
They are usually developed as &#8220;<b>open source<\/b>&#8221; software, meaning that the code can be read and modified by anyone, which promotes reliability, transparency and continuous maintenance.\n\nPyTorch is no exception to this rule. Based on the former Torch library, <strong><a href=\"\/\">PyTorch<\/a><\/strong> was officially launched in 2016 by a team from <b>Facebook&#8217;s research lab<\/b>, and has been developed in open source ever since. The goal of this framework is to <b>enable the implementation and training of Deep Learning<\/b> models in a simple and efficient way. Its merger in 2018 with <strong><a href=\"\/\">Caffe2<\/a><\/strong> (another Deep Learning framework) further improved its performance.\n\nPyTorch is now used by 17% of Python developers (<a href=\"https:\/\/www.jetbrains.com\/lp\/python-developers-survey-2020\/\"><strong>Python<\/strong> <strong>Foundation 2020 study<\/strong><\/a>) and in many companies such as Tesla and Uber.\n<h3>Why use PyTorch?<\/h3>\nPyTorch is a relatively young machine learning library, but it already offers plenty of <b>manuals and tutorials<\/b> with worked examples, and its community is growing by leaps and bounds.\n\nPyTorch provides a very <b>simple interface<\/b> for creating neural networks: you work directly with tensors, without needing a higher-level front end such as Keras (as is common with Theano or TensorFlow).\n\nUnlike other machine learning tools such as TensorFlow, <b>PyTorch works with dynamic<\/b> rather than static graphs. This means that the model can be modified at runtime, and the gradient computation adapts accordingly. In contrast, in TensorFlow (before version 2), one must first define the computational graph and then run a session to compute the tensor results, which <b>makes debugging the code more difficult<\/b> and implementation more tedious.\n\nPyTorch is also compatible with graphics cards (GPUs). 
Internally, it relies on CUDA, <b>NVIDIA<\/b>&#8217;s parallel computing platform and API, to offload computations from the CPU to the GPU.\n<h3>Advantages of PyTorch<\/h3>\nPyTorch has many advantages; here are the main ones:\n<h2><b>1. PyTorch and Python<\/b><\/h2>\nMost of the work in Machine Learning and Artificial Intelligence is done using Python, and <b>PyTorch is deeply tied to Python<\/b>, which means that Python developers should feel more comfortable coding with PyTorch than with other Deep Learning frameworks.\n<h2><b>2. Easy to learn<\/b><\/h2>\nLike the Python language itself, PyTorch is considered relatively <b>easy to learn<\/b> compared to other frameworks, mainly because of its simple and intuitive syntax.\n<h2><b>3. Strong community<\/b><\/h2>\nAlthough PyTorch is a relatively new framework, it has very quickly developed a dedicated community of developers. Moreover, the PyTorch documentation is very <b>well organized and useful<\/b> for beginners.\n<h2><b>4. Easy debugging<\/b><\/h2>\nPyTorch is deeply integrated with Python, so much so that many Python debugging tools can be used with it directly.\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><div class=\"wp-block-button \"><a class=\"wp-block-button__link wp-element-button \" href=\"\/en\/courses\/data-ai\/data-scientist\">Become an expert with PyTorch<\/a><\/div><\/div>\n\n<h3>PyTorch vs Keras vs TensorFlow: what are the differences?<\/h3>\nIt is difficult to introduce PyTorch without mentioning the alternatives, all created within a few years of each other with the same goal but different methods.\n\n<b>Keras<\/b> was developed in March 2015 by Fran\u00e7ois Chollet, a researcher at Google. 
Keras quickly gained popularity thanks to <b>its easy-to-use API<\/b>, heavily inspired by <b>scikit-learn<\/b>, the standard <strong><a href=\"https:\/\/liora.io\/en\/machine-learning-what-is-it-and-why-does-it-change-the-world\">Machine Learning<\/a><\/strong> library in Python.\n\nA few months later, in November 2015, Google released the <b>first version of <\/b><a href=\"https:\/\/liora.io\/en\/tensor-flow-all-about-googles-machine-learning-framework\"><b>TensorFlow<\/b><\/a>, which quickly became the reference framework for <strong><a href=\"https:\/\/liora.io\/en\/all-about-deep-learning\">Deep Learning<\/a><\/strong>, not least because Keras can run on top of it. TensorFlow also <b>provided a number of Deep Learning<\/b> features that researchers needed to easily create complex neural networks.\n\n<b>Keras was therefore very simple to use<\/b>, but lacked some of the &#8220;low-level&#8221; features and customizations needed for state-of-the-art models. TensorFlow offered that control, but did not follow the usual Python style and had <b>documentation that was very complicated<\/b> for beginners.\n\nPyTorch solved these problems by <b>providing an API <\/b>that is both accessible and easy to customize, allowing the creation of new types of networks, optimizers and architectures. However, recent developments in these frameworks have <b>brought the way they work much closer together<\/b>.\n<h3>PyTorch and Deep Learning: do they work well together?<\/h3>\nSo far we have talked about the complexity of models and networks without mentioning the execution speed of the algorithms. PyTorch is designed to <b>minimize this time<\/b> and to <b>make the best use of the underlying hardware<\/b>.\n\nPyTorch represents data as multidimensional arrays, similar to NumPy arrays, called &#8220;tensors&#8221;. Tensors <b>store the inputs<\/b> of the neural network, the parameters of the hidden layers and the outputs. 
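As a quick illustration of tensors and gradient tracking (a minimal sketch, assuming only that PyTorch is installed; the values are made up for the example):

```python
import torch

# A tensor is a multidimensional array, similar to a NumPy ndarray
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Tensors can be moved to a GPU when one is available (CUDA backend)
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)

# requires_grad=True asks Autograd to record operations on this tensor
w = torch.ones(2, 2, device=device, requires_grad=True)
loss = ((x * w) ** 2).sum()
loss.backward()  # backpropagation: fills w.grad with d(loss)/d(w) = 2 * x**2 * w
```

The same code runs unchanged on CPU and GPU; only the `device` string differs.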
From these tensors, PyTorch can carry out, <b>transparently and efficiently<\/b>, the four steps needed to train the network:\n<ul>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Assemble a graph from the tensors of the neural network. This graph has a <b>dynamic structure<\/b>: the <b>neural network<\/b> (number of nodes, connections between them&#8230;) can be modified during training.<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Compute the predictions of the network (forward pass)<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Compute the loss, i.e. the error of the predictions<\/li>\n \t<li style=\"font-weight: 400;\" aria-level=\"1\">Traverse the network in the opposite direction (&#8220;<b>backpropagation<\/b>&#8221;) and adjust the tensors, based on the computed loss, so that the network makes <b>more accurate predictions<\/b>.<\/li>\n<\/ul>\nThis PyTorch feature, called &#8220;Autograd&#8221;, is highly optimized and is compatible with <b>GPUs<\/b> and <b>data parallelism<\/b>, which speeds up computation considerably. Moreover, it can be used on any cloud, whereas TensorFlow is only optimized for Google Cloud and its TPUs (Tensor Processing Units).\n\nRecently, in partnership with AWS (Amazon Web Services), PyTorch released two new features. The first, TorchServe, manages the deployment of already trained neural networks. The second, <b>TorchElastic<\/b>, allows PyTorch to be used on <b>Kubernetes clusters<\/b> while remaining resilient to failures.\n\nThese three frameworks each have their <b>own specificities<\/b>; PyTorch in particular is well suited to complex and deep neural networks. 
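The four training steps described above can be sketched as a minimal loop (a toy regression with made-up data and hypothetical layer sizes, not a production recipe):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: learn y = 2x from 32 samples
x = torch.linspace(-1, 1, 32).unsqueeze(1)
y = 2 * x

# The graph is assembled dynamically at each forward pass,
# so this module could even be altered between iterations
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(200):
    pred = model(x)          # steps 1-2: assemble the graph, forward pass
    loss = loss_fn(pred, y)  # step 3: compute the loss
    optimizer.zero_grad()
    loss.backward()          # step 4: backpropagation via Autograd
    optimizer.step()         # adjust the parameters
```

Moving the model and data to a GPU (`.to("cuda")`) is the only change needed to accelerate this loop on compatible hardware.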
To learn to use the framework, we offer <strong><a href=\"\/en\/courses\/data-ai\/data-scientist\">Data Scientist training<\/a><\/strong> with <b>modules dedicated to Deep Learning<\/b> with PyTorch.\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center\"><div class=\"wp-block-button \"><a class=\"wp-block-button__link wp-element-button \" href=\"\/en\/courses\/data-ai\/\">Find out more about our Data Science courses<\/a><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Python is one of the most widely used programming languages, and it offers a wealth of frameworks, many of them developed specifically for Data Science. In this article, we take a detailed look at one of them: PyTorch. In recent years, the popularity of Data Science has been growing steadily, and [&hellip;]<\/p>\n","protected":false},"author":79,"featured_media":167477,"comment_status":"open","ping_status":"open","sticky":false,"template":"elementor_theme","format":"standard","meta":{"_acf_changed":false,"editor_notices":[],"footnotes":""},"categories":[2433],"class_list":["post-167476","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/167476","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/users\/79"}],"replies":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/comments?post=167476"}],"version-history":[{"count":2,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/167476\/revisions"}],"predecessor-version":[{"id":206432,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/167476\/revisions\/206432"}],"wp:featuredmedia":[{"embeddable":true,"href":"http
s:\/\/liora.io\/en\/wp-json\/wp\/v2\/media\/167477"}],"wp:attachment":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media?parent=167476"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/categories?post=167476"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}