{"id":196410,"date":"2025-05-20T12:04:43","date_gmt":"2025-05-20T11:04:43","guid":{"rendered":"https:\/\/liora.io\/en\/?p=196410"},"modified":"2026-02-12T10:02:59","modified_gmt":"2026-02-12T09:02:59","slug":"all-about-self-organizing-maps","status":"publish","type":"post","link":"https:\/\/liora.io\/en\/all-about-self-organizing-maps","title":{"rendered":"Self-Organizing Maps (SOM): What are they and how to use them?"},"content":{"rendered":"\n<p><strong>Self-Organizing Maps, or SOM, represent a form of artificial neural network (ANN) employed for unsupervised learning. They facilitate the reduction of data dimensionality while retaining their topological structure, thus offering a robust tool for clustering and data exploration.<\/strong><\/p>\n\n\n\n<p>Unlike traditional neural networks, self-organizing maps function via <b>competitive learning<\/b> as opposed to error correction. They incorporate a neighborhood function to preserve the <b>spatial relationships of the data<\/b>.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">More about SOMs<\/a><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-origin-of-soms\">Origin of SOMs<\/h2>\n\n\n\n<p>Self-organizing maps were introduced in the 1980s by the Finnish researcher <b>Teuvo Kohonen<\/b>. This is why they are also referred to as <b>Kohonen maps<\/b>. Inspired by biological brain mechanisms, they emulate how neurons organize and classify information, forming meaningful structures.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-som-works\">How Does a SOM Work?<\/h2>\n\n\n\n<p>The learning process of a Self-Organizing Map relies on multiple steps that transform complex data into an <b>organized and readable representation<\/b>. Below is a typical, step-by-step operation of a SOM.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-initialization-of-weights\">1. Initialization of Weights<\/h3>\n\n\n\n<p>Before training starts, each neuron in the map is linked to a weight vector, which is initialized randomly. This vector shares the same dimension as the <b>input data<\/b> and represents the neuron&#8217;s identity before the learning process adjusts it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-selection-of-an-input-sample\">2. Selection of an Input Sample<\/h3>\n\n\n\n<p>During each iteration, an input vector is randomly selected from the <a href=\"https:\/\/liora.io\/en\/what-is-a-dataset-how-do-i-work-with-it\">training dataset<\/a>. This vector represents a <b>data point<\/b> that the SOM must learn to organize on the map.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-identification-of-the-best-matching-unit\">3. Identification of the Best Matching Unit<\/h3>\n\n\n\n<p>Once the sample is chosen, the algorithm identifies the neuron whose weights are closest to this input vector. This proximity is determined using the <b>Euclidean distance<\/b> between the input vector and each neuron&#8217;s weight vector. 
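In code, this nearest-neuron search reduces to a simple distance computation. A minimal sketch, assuming the map's weights are stored as a NumPy array of shape (rows, cols, dim) — an illustrative layout, not a requirement of the algorithm:

```python
import numpy as np

def find_bmu(weights, x):
    """Return the (row, col) of the neuron whose weight vector is
    closest to input x, measured by Euclidean distance."""
    # Squared Euclidean distance from x to every neuron's weight vector
    dists = np.sum((weights - x) ** 2, axis=-1)
    # Index of the smallest distance, unraveled to map coordinates
    return np.unravel_index(np.argmin(dists), dists.shape)

# Example: a 3x3 map of 2-D weight vectors
rng = np.random.default_rng(0)
weights = rng.random((3, 3, 2))
bmu = find_bmu(weights, np.array([0.5, 0.5]))
```

Squaring the distance is enough here: the argmin of the squared distance is the same neuron as the argmin of the distance itself, so the square root can be skipped.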
The closest neuron is pinpointed as the <b>BMU (Best Matching Unit)<\/b>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1000\" height=\"571\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1.webp\" alt=\"\" class=\"wp-image-203125\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1.webp 1000w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-300x171.webp 300w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-768x439.webp 768w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-440x251.webp 440w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-771x440.webp 771w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-785x448.webp 785w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-210x120.webp 210w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-1-112x64.webp 112w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">More about SOMs<\/a><\/div>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-4-updating-the-weights-of-the-bmu-and-its-neighbors\">4. Updating the Weights of the BMU and Its Neighbors<\/h3>\n\n\n\n<p>After locating the BMU, the algorithm adjusts its weights to align more closely with the input vector. 
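This pull toward the input can be sketched in a few lines. The sketch below assumes the map's weights live in a (rows, cols, dim) NumPy array, with an illustrative learning rate `alpha` and Gaussian neighborhood width `sigma` — names chosen for this example, not taken from any particular library:

```python
import numpy as np

def update_weights(weights, x, bmu, alpha, sigma):
    """Return a copy of the weights pulled toward input x, with the
    pull strongest at the BMU and decaying over the grid.

    alpha: learning rate; sigma: Gaussian neighborhood width.
    (Illustrative sketch, not a specific library's API.)
    """
    rows, cols, _ = weights.shape
    # Grid coordinates of every neuron on the map
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Squared grid distance of each neuron to the BMU
    d2 = (rr - bmu[0]) ** 2 + (cc - bmu[1]) ** 2
    # Gaussian neighborhood: 1 at the BMU, decaying with distance
    h = np.exp(-d2 / (2 * sigma ** 2))
    # Move each weight vector toward x, scaled by alpha and h
    return weights + alpha * h[:, :, None] * (x - weights)
```

Note that the neighborhood is measured on the 2-D grid of neurons, not in the input space: it is this grid-based coupling that preserves the map's topology.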
Neighboring neurons are also updated, albeit to a lesser extent.<\/p>\n\n\n\n<p>The extent of this update is influenced by two primary factors:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><b>The learning rate (denoted as \u03b1 or alpha) <\/b>: Alpha governs the speed of adjustment for the neurons&#8217; weights. It diminishes over iterations to prevent abrupt changes.<\/li>\n\n\n\n<li><b>The neighborhood function <\/b>: The update affects neurons around the BMU, with an impact that lessens with distance. A common choice is the Gaussian function.<\/li>\n<\/ul>\n\n\n\n<p>This phase enables the BMU and its neighbors to gradually align with the characteristics of the data while maintaining the topological integrity of the relationships between <b>data points<\/b>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-5-reduction-of-the-learning-rate-and-neighborhood\">5. Reduction of the Learning Rate and Neighborhood<\/h3>\n\n\n\n<p>As the iterations proceed, the <b>learning rate<\/b> and the <b>neighborhood size<\/b> decrease. This reduction allows for precise fine-tuning of weights in the final training stages and ensures effective organization of the data on the map.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Initially, the neighborhood is extensive, enabling the entire map to organize globally.<\/li>\n\n\n\n<li>Gradually, the neighborhood contracts, refining the map and stabilizing the formed clusters.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-6-convergence-and-stabilization\">6. Convergence and Stabilization<\/h3>\n\n\n\n<p>The training continues until the map achieves a stable state where the neurons&#8217; weights exhibit minimal change from one <b>iteration<\/b> to the next. In this stage, each neuron corresponds to a specific region of the input data.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-7-inference-and-visualization-of-results\">7. 
Inference and Visualization of Results<\/h3>\n\n\n\n<p>Once <b>the SOM is trained<\/b>, it can organize new data and facilitate visual analysis. The distance between an <b>input vector<\/b> and the neurons&#8217; weights helps determine the positioning of new data on the map.<\/p>\n\n\n\n<p>A popular method for <b>visualizing SOMs<\/b> involves assigning colors to different map regions. Darker colors indicate a higher concentration of data.<\/p>\n\n\n\n<p>Clusters of similar data become apparent on the <b>map<\/b>, providing an intuitive visualization of the interrelationships between various categories.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">Using SOMs in Data Science<\/a><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-advantages-and-disadvantages-of-som\">Advantages and Disadvantages of SOM<\/h2>\n\n\n\n<p>SOMs offer several significant advantages. They enable <b>dimensionality reduction<\/b> while maintaining topological organization. Their intuitive graphical representation aids the visualization and interpretation of complex datasets. They are commonly used for <b>clustering<\/b>, even without prior knowledge of the data classes.<\/p>\n\n\n\n<p>However, SOMs have certain drawbacks. They do not adapt well to purely categorical or mixed data (except with proper encoding), which makes it difficult to define a coherent representation space for such data. Their training time may be lengthy, and their effectiveness hinges on accurate parameter tuning.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-applications-of-som\">Applications of SOM<\/h2>\n\n\n\n<p>SOMs find application in diverse fields to organize and analyze data. 
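Putting the training steps together, a toy end-to-end loop might look like the following. This is an illustrative sketch with arbitrary map size and decay schedules, not a production implementation; it trains a small map on two well-separated 2-D groups of points:

```python
import numpy as np

def train_som(data, rows=5, cols=5, iters=500, alpha0=0.5, sigma0=2.0, seed=0):
    """Train a small SOM (illustrative sketch; parameter values are arbitrary)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows, cols, data.shape[1]))
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    for t in range(iters):
        frac = t / iters
        alpha = alpha0 * (1 - frac)          # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5    # shrinking neighborhood
        x = data[rng.integers(len(data))]    # random input sample
        # Best Matching Unit: the neuron closest to x
        d = np.sum((weights - x) ** 2, axis=-1)
        b = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighborhood update around the BMU
        h = np.exp(-((rr - b[0]) ** 2 + (cc - b[1]) ** 2) / (2 * sigma ** 2))
        weights += alpha * h[:, :, None] * (x - weights)
    return weights

# Two well-separated groups of points should map to different neurons
data = np.vstack([np.random.default_rng(1).normal(0.1, 0.02, (20, 2)),
                  np.random.default_rng(2).normal(0.9, 0.02, (20, 2))])
w = train_som(data)
```

After training, querying the map with a point from each group yields two different BMUs, which is the behavior clustering applications rely on.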
For instance, in <a href=\"https:\/\/liora.io\/en\/all-about-big-data-marketing\">the marketing sector<\/a>, they assist in grouping customers based on purchasing behavior to optimize <b>business strategies<\/b>.<\/p>\n\n\n\n<p>For dimensionality reduction, they aid in the <b>mapping of high-dimensional data<\/b>, facilitating a better understanding of internal data relationships.<\/p>\n\n\n\n<p>In <b>anomaly detection<\/b>, they are utilized to identify fraudulent transactions by pinpointing data points that deviate from predefined clusters.<\/p>\n\n\n\n<p>For data visualization, they enhance understanding of populations and relationships between different parameters. By converting a complex dataset into a <b>2D representation<\/b>, they allow for rapid identification of <b>trends<\/b> and <b>invisible patterns<\/b> in raw data tables.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1000\" height=\"571\" src=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2.webp\" alt=\"\" class=\"wp-image-203126\" srcset=\"https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2.webp 1000w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-300x171.webp 300w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-768x439.webp 768w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-440x251.webp 440w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-771x440.webp 771w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-785x448.webp 785w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-210x120.webp 210w, https:\/\/liora.io\/app\/uploads\/sites\/9\/2025\/07\/self-organizing-maps-datascientest-2-112x64.webp 112w\" sizes=\"(max-width: 
1000px) 100vw, 1000px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-conclusion\">Conclusion<\/h2>\n\n\n\n<p>SOMs are a powerful tool for unsupervised learning in <b>cluster analysis<\/b>, <b>dimensionality reduction<\/b>, and <b>data visualization<\/b>. They do have limitations regarding training time and adaptation to mixed data. They are employed in fields such as finance, marketing, healthcare, and image analysis. Their capacity to unveil hidden structures in data makes them a valuable choice for <b>exploring unlabeled data<\/b>.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex is-content-justification-center wp-container-core-buttons-is-layout-a89b3969\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/liora.io\/en\/courses\/\">See our Data Science training courses<\/a><\/div>\n<\/div>\n\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Origin of SOMs\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Self\u2011organizing maps were introduced in the 1980s by the Finnish researcher Teuvo Kohonen and are also referred to as Kohonen maps, inspired by biological brain mechanisms that organize and classify information.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How Does a SOM Work?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"The learning process of a Self\u2011Organizing Map relies on multiple steps that transform complex data into an organized and readable representation, starting with initialization of weights and progressing through selection of input samples and adjustment of neuron weights.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Advantages and Disadvantages of SOM\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"SOMs offer advantages such as dimensionality reduction and intuitive visualization of complex datasets, but they have drawbacks including long training time and challenges with mixed data types.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Applications of SOM\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"SOMs find application in diverse fields to organize and analyze data, including clustering customers for business strategies, dimensionality reduction, anomaly detection, and data visualization.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Conclusion\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"SOMs are a powerful tool for unsupervised learning in clustering, dimensionality reduction, and data visualization, with the capacity to reveal hidden structures in data across fields like finance, marketing, healthcare, and image analysis.\"\n      }\n    }\n  ]\n}\n<\/script>\n\n","protected":false},"excerpt":{"rendered":"<p>Self-Organizing Maps, or SOM, represent a form of artificial neural network (ANN) employed for unsupervised learning. 
They facilitate the reduction of data dimensionality while retaining their topological structure, thus offering a robust tool for clustering and data exploration.<\/p>\n","protected":false},"author":85,"featured_media":196412,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"editor_notices":[],"footnotes":""},"categories":[2433],"class_list":["post-196410","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/196410","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/users\/85"}],"replies":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/comments?post=196410"}],"version-history":[{"count":5,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/196410\/revisions"}],"predecessor-version":[{"id":206530,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/posts\/196410\/revisions\/206530"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media\/196412"}],"wp:attachment":[{"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/media?parent=196410"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/liora.io\/en\/wp-json\/wp\/v2\/categories?post=196410"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}