{"id":196759,"date":"2025-01-06T19:03:00","date_gmt":"2025-01-06T18:03:00","guid":{"rendered":"https:\/\/liora.io\/en\/?p=196759"},"modified":"2026-02-12T14:03:05","modified_gmt":"2026-02-12T13:03:05","slug":"all-about-deep-learning-with-tensorflow-playground","status":"publish","type":"post","link":"https:\/\/liora.io\/en\/all-about-deep-learning-with-tensorflow-playground","title":{"rendered":"TensorFlow Playground: Making Deep Learning Easy"},"content":{"rendered":"\n<p><strong>Deep learning is as fascinating as it is intimidating. With its equations, GPUs, and esoteric vocabulary, one might think a doctorate in mathematics is necessary to understand its logic. Yet the principle is simple: learn by example. To see it firsthand \u2014 literally \u2014 nothing beats the <a href=\"\/\">TensorFlow Playground<\/a>.<\/strong><\/p>\n\n\n\n<p>This small online tool allows you to manipulate a neural network in real time, observe the reactions, and, most importantly, understand how it learns. Just a few minutes can turn an abstract concept into a concrete experience.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-deep-learning-in-brief\">Deep learning in brief<\/h2>\n\n\n\n<p>For over a decade, deep learning has been prevalent in image recognition, automatic translation, and text synthesis. Yet, the foundational idea dates back to the 1950s: crudely mimicking the functioning of biological neurons. An <b>artificial neuron<\/b> receives numerical inputs, weights them, optionally adds a bias, and applies an activation function. Positioned in successive <b>layers<\/b>, these neurons gradually transform raw data into representations capable of separating, predicting, or generating.<\/p>\n\n\n\n<p>Why &#8220;deep&#8221;? Because modern networks stack dozens, even hundreds of layers, each capturing more subtle abstraction than the previous one: from edges to patterns, from patterns to objects, then from objects to the entire scene. 
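The neuron just described can be sketched in a few lines of plain Python. This is an illustrative toy, not the Playground implementation, and the weights and bias below are arbitrary values rather than learned ones:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias, passed through
    # an activation function (tanh here, one of Playground options).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(z)

# Arbitrary weights: tanh squashes the result into the interval (-1, 1).
out = neuron([1.0, -0.5], weights=[2.0, 1.0], bias=0.5)
```

A layer is simply several such neurons reading the same inputs, and learning consists of nudging each weight and bias to reduce the measured error.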
The whole is trained using an optimization method \u2014 often gradient descent \u2014 that adjusts the weights to minimize an error measured on a sample of annotated examples.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">More about Deep Learning<\/a><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-tensorflow-playground-a-browser-lab\">TensorFlow Playground: a browser lab<\/h2>\n\n\n\n<p>Open <a href=\"\/\">TensorFlow Playground<\/a> and, without installing anything, you see a minimal network appear. On the left, <b>colored points<\/b> represent the data; in the center, <b>circles<\/b> (the neurons) are connected by <b>arrows<\/b> (the weights); on the right, the <b>hyperparameters<\/b> can be adjusted with a simple click: learning rate, activation function, regularization, batch size, etc. 
When you press <b>Train<\/b>, each iteration updates the decision boundary in real time.<\/p>\n\n\n\n<figure class=\"wp-block-image\" style=\"margin-top:var(--wp--preset--spacing--columns);margin-bottom:var(--wp--preset--spacing--columns)\"><img decoding=\"async\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/06\/image1.webp\" alt=\"\" \/><\/figure>\n\n\n\n<p>Why is this tool so powerful for understanding?<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><b>Instant visualization<\/b>: the boundary evolves before your eyes, illustrating gradient descent far better than a static graph.<\/li>\n\n\n\n<li><b>Safety<\/b>: no risk of erasing a disk or overheating a GPU.<\/li>\n\n\n\n<li><b>Easy sharing<\/b>: all options are encoded in the URL; just copy it to share an exact configuration.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-network-anatomy-from-playground\">Network anatomy from Playground<\/h2>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/06\/image2.webp\" alt=\"\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-the-datasets\">1. The datasets<\/h3>\n\n\n\n<p>Playground offers four synthetic datasets: a <b>linearly separable<\/b> cloud, two <b>non-linear<\/b> sets (circle and &#8220;moons&#8221;), and the formidable <b>spiral<\/b> nicknamed &#8220;the snail&#8221;. These two-dimensional data are simple enough to plot on a graph, yet rich enough to test the power of a deep network.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-the-features\">2. The features<\/h3>\n\n\n\n<p>By default, only the coordinates <b>x<\/b> and <b>y<\/b> are used as inputs. However, you can enable other derived features: <b>x\u00b2<\/b>, <b>y\u00b2<\/b>, <b>x\u00b7y<\/b>, <b>sin(x)<\/b>, or <b>sin(y)<\/b>. These transformations allow the model to better capture complex patterns. 
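A minimal sketch with made-up points shows what such a derived feature buys. With x*x + y*y available, a single threshold separates an inner cluster from an outer ring, which no straight line in the raw coordinates can do:

```python
import math

# Made-up 2D points: an inner cluster (class 0) inside an outer ring (class 1).
inner = [(0.3 * math.cos(t), 0.3 * math.sin(t)) for t in range(20)]
outer = [(2.0 * math.cos(t), 2.0 * math.sin(t)) for t in range(20)]

# With the derived feature r2 = x*x + y*y, one threshold separates the classes.
def predict(x, y, threshold=1.0):
    return 1 if x * x + y * y > threshold else 0

correct = (sum(predict(x, y) == 0 for x, y in inner)
           + sum(predict(x, y) == 1 for x, y in outer))
```

In Playground, ticking the squared-feature boxes gives even a shallow network this same head start.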
For instance, a circular-shaped cloud becomes much easier to separate if you add <b>x\u00b2 + y\u00b2<\/b> as information: the decision boundary can then become circular, even with a simple network.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-the-architecture\">3. The architecture<\/h3>\n\n\n\n<p>Below the data, a slider lets you add layers and adjust the number of neurons. A network <b>without a hidden layer<\/b> is equivalent to a linear model: it can only draw straight-line separations. With <b>one layer<\/b> of three neurons, the model already captures curves. Three layers of eight neurons tackle the spiral dataset, but increasing depth further risks overfitting \u2014 hence the importance of regularization.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-4-the-hyperparameters\">4. The hyperparameters<\/h3>\n\n\n\n<p>The <b>learning rate<\/b> controls the magnitude of updates: too large, the loss oscillates; too small, the model stagnates. The <b>activation functions<\/b> \u2014 ReLU, tanh, sigmoid \u2014 inject the necessary non-linearity; ReLU often converges faster, tanh is sometimes more stable. <b>L2 regularization<\/b> adds a penalty on the weights to prevent the network from memorizing noise.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-5-visualizing-results\">5. Visualizing results<\/h3>\n\n\n\n<p>Once training starts, two elements should be monitored: the <b>decision boundary<\/b>, which evolves visually in the plane, and the <b>loss curve<\/b> at the bottom right. The boundary shows how the network learns to separate the classes; the more it aligns with the shape of the data, the better the model&#8217;s understanding. 
The loss curve indicates whether the error decreases \u2014 a good sign that learning is advancing.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">Training for TensorFlow Playground<\/a><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-two-challenges-to-replicate\">Two challenges to replicate<\/h2>\n\n\n\n<p>All exercise parameters below are already encoded in the links; just click to land on the described configuration.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-challenge-1-first-steps\"><u>Challenge 1<\/u>: First Steps<\/h3>\n\n\n\n<p>Link: <a href=\"\/#activation=tanh&amp;batchSize=10&amp;dataset=xor&amp;learningRate=0.03&amp;networkShape=2&amp;noise=0&amp;regularizationRate=0&amp;seed=0&amp;showTestData=false\">Challenge \u2013 First Steps<\/a><\/p>\n\n\n\n<p>Start the training: in a few seconds, the boundary begins to draw a separation into two distinct areas. Then try to reduce the learning rate and observe how the model learns more slowly. Also change the activation function, for example, switching from <b>tanh<\/b> to <b>ReLU<\/b>: the speed and shape of convergence may vary, even if the task remains simple. 
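The learning-rate behaviour you can observe in this challenge follows from gradient descent itself. A minimal sketch on the toy loss w * w (a one-dimensional stand-in for the real loss surface) reproduces the three regimes:

```python
def descend(lr, steps=30):
    # Gradient descent on loss(w) = w * w, whose gradient is 2 * w.
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w
    return abs(w)  # distance from the optimum at w = 0

too_small = descend(0.001)  # barely moves: the model stagnates
good = descend(0.1)         # shrinks steadily toward the optimum
too_big = descend(1.05)     # overshoots each step: the loss oscillates and grows
```

The same three behaviours are what you see in the Playground loss curve when you drag the learning-rate selector to its extremes.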
It&#8217;s a good first exercise to become accustomed to the parameters without getting lost in complexity.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-challenge-2-spiral\"><u>Challenge 2<\/u>: Spiral<\/h3>\n\n\n\n<p>Link: <a href=\"\/#activation=tanh&amp;batchSize=10&amp;dataset=spiral&amp;regDataset=reg-plane&amp;learningRate=0.03&amp;regularizationRate=0&amp;noise=20&amp;networkShape=4,2&amp;seed=0.17718&amp;showTestData=false&amp;x=true&amp;y=true\">Spiral<\/a><\/p>\n\n\n\n<p>In this second exercise, the network must learn to classify a spiral-shaped dataset \u2014 a pattern known for its difficulty. The intentionally limited starting configuration (only the features <b>x<\/b> and <b>y<\/b>) forces you to experiment with the architecture and hyperparameters to succeed.<\/p>\n\n\n\n<p>Start the training: the boundary is chaotic at first. It&#8217;s up to you to find a combination of layers, neurons, activation function, or even regularization that allows the network to follow the spiral&#8217;s curves. It&#8217;s a good way to see how depth or a small parameter change can make a significant difference.<\/p>\n\n\n\n<p>Bonus difficulty: <b>no derived features allowed<\/b>. 
Everything must rely on the model&#8217;s structure.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/06\/image3.webp\" alt=\"\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-insights-from-the-playground\">Insights from the Playground<\/h2>\n\n\n\n<p>Spending roughly ten minutes in Playground teaches three fundamental lessons:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><b>The network learns by adjusting<\/b> its weights to reduce the error; gradient descent is simply an automated cycle of trial and error.<\/li>\n\n\n\n<li><b>Non-linearity<\/b> \u2014 whether through features or activations \u2014 is crucial as soon as a straight line isn&#8217;t enough.<\/li>\n\n\n\n<li><b>Hyperparameters matter<\/b>: a poor learning rate or an oversized architecture can ruin training as surely as a bug in the code.<\/li>\n<\/ol>\n\n\n\n<p>These observations are seen, not guessed: the moving image imprints in the mind what three pages of algebra summarize less clearly.<\/p>\n\n\n\n<p>TensorFlow Playground is not intended to produce industrial models, but to <b>visualize the essence of deep learning<\/b>: the progressive transformation of a data space under the influence of iterative learning. By reducing the subject to colored points and a few buttons, the tool makes the mechanics accessible to anyone with a browser. From there, making the leap to Keras or <a href=\"https:\/\/liora.io\/en\/pytorch-all-about-this-framework\">PyTorch<\/a> becomes a straightforward interface change. So, open the page, play for a few minutes, adjust a parameter, observe the result, and watch the theory come to life. 
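What the Train button does can be sketched as a tiny trial-and-error loop. Here is a plain-Python toy (not the Playground implementation): a single sigmoid neuron nudged by gradient descent on made-up, linearly separable points.

```python
import math, random

random.seed(0)

# Made-up linearly separable data: class 1 whenever x + y > 0.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1 if x + y > 0 else 0 for x, y in points]

w1 = w2 = b = 0.0
lr = 0.5  # the learning rate selector in Playground

for _ in range(100):  # each pass over the data is one training epoch
    for (x, y), t in zip(points, labels):
        p = 1 / (1 + math.exp(-(w1 * x + w2 * y + b)))  # sigmoid neuron
        w1 += lr * (t - p) * x  # gradient descent: nudge each weight
        w2 += lr * (t - p) * y
        b += lr * (t - p)

def predict(x, y):
    return 1 if w1 * x + w2 * y + b > 0 else 0

accuracy = sum(predict(x, y) == t
               for (x, y), t in zip(points, labels)) / len(points)
```

Swapping this hand-written loop for a Dense layer and an optimizer object is essentially what the move to Keras or PyTorch amounts to.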
Machine learning, however complex, always starts with a first click on Train.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/06\/middle-eastern-cybersecurity-professional-1-scaled-1.webp\" alt=\"\" \/><\/figure>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">Discover our courses<\/a><\/div>\n<\/div>\n\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What is TensorFlow Playground?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"TensorFlow Playground is an online interactive tool that lets you experiment with a neural network in your browser by adjusting its architecture and hyperparameters and watching how the model learns in real time without installing anything.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Why use TensorFlow Playground to learn deep learning?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"It helps you visualise how a neural network transforms data and learns to separate classes, making abstract concepts like gradient descent, decision boundaries and activation functions easier to understand.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What datasets can you use in TensorFlow Playground?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Playground offers synthetic datasets such as a linearly separable cloud, two non\u2011linear shapes (circle and 
\u201cmoons\u201d), and a spiral pattern, letting you test how a network handles different kinds of class boundaries.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How do you change a neural network\u2019s architecture in Playground?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"You can add or remove hidden layers, adjust the number of neurons per layer with sliders and choose activation functions to see how these architectural choices affect learning on the dataset.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What hyperparameters can you modify in TensorFlow Playground?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Hyperparameters you can adjust include the learning rate to control weight updates, activation functions (like ReLU, tanh, sigmoid), regularisation strength and batch size to influence how training progresses.\"\n      }\n    }\n  ]\n}\n<\/script>\n\n","protected":false},"excerpt":{"rendered":"<p>Deep learning is as fascinating as it is intimidating. With its equations, GPUs, and esoteric vocabulary, one might think a doctorate in mathematics is necessary to understand its logic. Yet the principle is simple: learn by example. To see it firsthand \u2014 literally \u2014 nothing beats the TensorFlow Playground. 
This small online tool allows you [&hellip;]<\/p>\n","protected":false},"author":87,"featured_media":196761,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"editor_notices":[],"footnotes":""},"categories":[2433],"class_list":["post-196759","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/196759","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/users\/87"}],"replies":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/comments?post=196759"}],"version-history":[{"count":4,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/196759\/revisions"}],"predecessor-version":[{"id":206640,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/196759\/revisions\/206640"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media\/196761"}],"wp:attachment":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media?parent=196759"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/categories?post=196759"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}