Can You Use Continuous Variables in Kmeans

This article was published as a part of the Data Science Blogathon

KModes clustering

Clustering is an unsupervised learning method whose task is to divide the population or data points into a number of groups, such that data points in the same group are more similar to each other than to data points in other groups. In short, it groups objects based on the similarity and dissimilarity between them.

KModes clustering is one of the unsupervised Machine Learning algorithms that is used to cluster categorical variables.

You might be wondering: why KModes, when we already have KMeans?

KMeans uses a mathematical distance measure to cluster continuous data: the smaller the distance, the more similar the data points, and centroids are updated by taking the mean of each cluster.
But for categorical data points, we cannot compute a meaningful distance. So we turn to the KModes algorithm, which uses the dissimilarities (total mismatches) between data points: the fewer the mismatches, the more similar the data points. It uses modes instead of means.
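
To make the mismatch-counting idea concrete, here is a minimal sketch of the simple matching dissimilarity that KModes relies on. The records and their values are made up for illustration:

def matching_dissimilarity(a, b):
    """Count the attributes on which two categorical records differ."""
    return sum(x != y for x, y in zip(a, b))

# Two hypothetical records of (hair color, eye color, skin color)
p1 = ("blonde", "amber", "fair")
p2 = ("brunette", "gray", "brown")

print(matching_dissimilarity(p1, p1))  # 0 -> identical records
print(matching_dissimilarity(p1, p2))  # 3 -> all three attributes differ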

In this blog, we will learn:

  • How does the KModes algorithm work?
  • Implementation of KModes in Python

How does the KModes algorithm work?

Unlike hierarchical clustering methods, we need to specify the number of clusters K upfront.

  1. Pick K observations at random and use them as leaders/clusters
  2. Calculate the dissimilarities and assign each observation to its closest cluster
  3. Define new modes for the clusters
  4. Repeat steps 2–3 until no re-assignment is required

I hope you have the basic idea of the KModes algorithm by now, so let us quickly work through an example step by step.

Example: Imagine we have a dataset that has information about the hair color, eye color, and skin color of persons. We aim to group them based on the available information (maybe we want to suggest some styling ideas).

Hair color, eye color, and skin color are all categorical variables. Below 👇 is what our dataset looks like.

[Image: our sample dataset]
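
Since the table itself survives only as an image, here is a hypothetical stand-in DataFrame that the code sketches later in this walkthrough will reuse. Every value in it is made up for illustration; only the structure (8 persons, 3 categorical features) matches the original.

import pandas as pd

# Hypothetical stand-in for the pictured dataset; all values are illustrative
obs = pd.DataFrame(
    {
        "hair_color": ["blonde", "brunette", "red", "black", "brunette", "black", "red", "black"],
        "eye_color": ["amber", "gray", "green", "hazel", "amber", "gray", "green", "hazel"],
        "skin_color": ["fair", "brown", "brown", "brown", "fair", "brown", "fair", "brown"],
    },
    index=[f"P{i}" for i in range(1, 9)],
)
print(obs)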

Alright, we have the sample data now. Let us proceed by defining the number of clusters (K) = 3.

Step 1: Pick K observations at random and use them as leaders/clusters

I am choosing P1, P7, and P8 as the leaders/clusters.

[Image: leaders and observations]
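
In code, step 1 amounts to nothing more than picking rows. A minimal sketch using the hypothetical obs frame from above, with the same picks as the walkthrough:

# Step 1: use P1, P7, P8 as the initial leaders
# (a truly random pick would be obs.sample(n=3) instead)
leaders = obs.loc[["P1", "P7", "P8"]]
print(leaders)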

Step 2: Calculate the dissimilarities (no. of mismatches) and assign each observation to its closest cluster

Iteratively compare each observation to each cluster leader, attribute by attribute. Matching values contribute 0 to the dissimilarity; mismatching values contribute 1.

[Image: calculating dissimilarities]

Comparing leader/cluster P1 to the observation P1 gives 0 dissimilarities.

[Image: comparing leader P1 with observation P2]

Comparing leader/cluster P1 to the observation P2 gives 3 (1 + 1 + 1) dissimilarities.

Likewise, calculate all the dissimilarities and put them in a matrix as shown below, then assign each observation to its closest cluster (the cluster with the least dissimilarity).
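
If you would rather let code do the counting, the matrix and the assignments fall out of a few pandas operations. A sketch, continuing with the hypothetical obs and leaders objects defined above:

# Rows: observations, columns: leaders; entries: number of mismatched attributes
dissim = pd.DataFrame({l: (obs != leaders.loc[l]).sum(axis=1) for l in leaders.index})
print(dissim)

# Step 2: closest leader for each observation (ties resolved by taking the first)
print(dissim.idxmin(axis=1))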

[Image: dissimilarity matrix]

After step 2, the observations P1, P2, P5 are assigned to Cluster 1; P3, P7 are assigned to Cluster 2; and P4, P6, P8 are assigned to Cluster 3.

Note: If an observation has the same dissimilarity with all the clusters, assign it to any cluster at random. In our case, the observation P2 has 3 dissimilarities with all the leaders, so I randomly assigned it to Cluster 1.

Step 3: Define new modes for the clusters

Mode is simply the most observed value.

Mark the observations according to the cluster they belong to. Observations of Cluster 1 are marked in yellow, Cluster 2 in brick red, and Cluster 3 in purple.

[Image: looking for modes]

Considering one cluster at a time, look for the mode of each feature and update the leaders accordingly.

Explanation: The Cluster 1 observations (P1, P2, P5) have brunette as the most observed hair color, amber as the most observed eye color, and fair as the most observed skin color.
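
In pandas, this per-feature mode update is essentially a one-liner. A sketch, reusing the hypothetical obs frame and dissim matrix from the earlier snippets:

# Cluster assignment of each observation from step 2
labels = dissim.idxmin(axis=1)

# For each cluster, the most frequent value of every feature becomes the new leader.
# .mode() can return several values on a tie; .iloc[0] simply picks one of them,
# which mirrors the "choose randomly on ties" rule in the note below.
new_leaders = obs.groupby(labels).agg(lambda col: col.mode().iloc[0])
print(new_leaders)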

Note: If the values occur equally often, pick the mode at random. In our case, the observations of Cluster 2 (P3, P7) have one occurrence each of brown and fair skin color; I randomly chose brown as the mode.

Below are our new leaders after the update.

[Image: obtained new leaders]

Repeat steps 2–3

After obtaining the new leaders, again calculate the dissimilarities between the observations and the newly obtained leaders.

[Image: repeating the comparisons with the new leaders]

Comparing Cluster 1 to the observation P1 gives 1 dissimilarity.

[Image: comparing Cluster 1 with the observations]

Comparing Cluster 1 to the observation P2 gives 2 dissimilarities.

Likewise, calculate all the dissimilarities and put them in a matrix. Assign each observation to its closest cluster.

[Image: updated dissimilarity matrix]

The observations P1, P2, P5 are assigned to Cluster 1; P3, P7 are assigned to Cluster 2; and P4, P6, P8 are assigned to Cluster 3.

We stop here, as there is no change in the assignment of observations.
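
Putting the steps into a loop, the whole walkthrough can be reproduced with a short from-scratch sketch. This is a toy version for intuition only; it reuses the hypothetical obs frame from above and ignores edge cases such as empty clusters. The kmodes library used in the next section is the practical choice.

import numpy as np
import pandas as pd

def kmodes_toy(df, k, max_iter=100, seed=42):
    """Toy KModes: random leaders, mismatch counting, mode updates."""
    rng = np.random.default_rng(seed)
    # Step 1: pick k observations at random as the initial leaders
    leaders = df.iloc[rng.choice(len(df), size=k, replace=False)].reset_index(drop=True)
    labels = None
    for _ in range(max_iter):
        # Step 2: assign each observation to the leader with the fewest mismatches
        dissim = pd.DataFrame({c: (df != leaders.loc[c]).sum(axis=1) for c in leaders.index})
        new_labels = dissim.idxmin(axis=1)
        if labels is not None and new_labels.equals(labels):
            break  # Step 4: stop when no re-assignment happens
        labels = new_labels
        # Step 3: the per-feature mode of each cluster becomes its new leader
        leaders = df.groupby(labels).agg(lambda col: col.mode().iloc[0])
    return labels

print(kmodes_toy(obs, k=3))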

Implementation of KModes in Python:

Begin by importing the necessary libraries:

# importing necessary libraries
import pandas as pd
import numpy as np
# !pip install kmodes
from kmodes.kmodes import KModes
import matplotlib.pyplot as plt
%matplotlib inline

Creating a toy dataset

Python Code:
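
The embedded code widget did not survive extraction here, so below is a plausible reconstruction of this step, simply reusing the hypothetical stand-in values from the walkthrough. The real values are only visible in the image that follows.

# Hypothetical stand-in for the toy dataset; the real values are in the image below
data = pd.DataFrame(
    {
        "hair_color": ["blonde", "brunette", "red", "black", "brunette", "black", "red", "black"],
        "eye_color": ["amber", "gray", "green", "hazel", "amber", "gray", "green", "hazel"],
        "skin_color": ["fair", "brown", "brown", "brown", "fair", "brown", "fair", "brown"],
    },
    index=[f"P{i}" for i in range(1, 9)],
)
data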

[Image: our toy dataset]

Scree Plot or Elbow curve to find optimal K value

For KModes, plot the cost for a range of K values. The cost is the sum of the dissimilarities of all data points from the modes of their assigned clusters.

Select the K where you observe an elbow-like bend with a lower cost value.

# Elbow curve to find optimal K
cost = []
K = range(1, 5)
for num_clusters in list(K):
    kmode = KModes(n_clusters=num_clusters, init="random", n_init=5, verbose=1)
    kmode.fit_predict(data)
    cost.append(kmode.cost_)

plt.plot(K, cost, 'bx-')
plt.xlabel('No. of clusters')
plt.ylabel('Cost')
plt.title('Elbow Method For Optimal k')
plt.show()
[Image: elbow curve]

We can see a bend at K = 3 in the above graph, indicating that 3 is the optimal number of clusters.

Build a model with 3 clusters

# Building the model with 3 clusters
kmode = KModes(n_clusters=3, init="random", n_init=5, verbose=1)
clusters = kmode.fit_predict(data)
clusters
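
The fitted model also exposes what it learned. cluster_centroids_, labels_, and cost_ are standard attributes of the kmodes package and are handy for a quick inspection:

# Inspect the learned modes (one row of modes per cluster) and the fit summary
print(kmode.cluster_centroids_)  # mode of each feature per cluster
print(kmode.labels_)             # same assignments as returned by fit_predict
print(kmode.cost_)               # total mismatches at convergence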

Finally, insert the predicted cluster values in our original dataset.

data.insert(0, "Cluster", clusters, True)
data
[Image: dataset after inserting the predicted cluster values]

Inference from the model predictions: P1, P2, and P5 are merged as one cluster; P3 and P7 are merged; and P4, P6, and P8 are merged.
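
As a quick sanity check on what each cluster represents, you can also profile the clusters by their per-feature modes. A sketch, using the Cluster column inserted above:

# Most frequent value of every feature within each cluster
profile = data.groupby("Cluster").agg(lambda col: col.mode().iloc[0])
print(profile)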

The results of our theoretical approach are in line with the model predictions. 🙌

End Notes:

By the end of this article, we are familiar with the working and implementation of the KModes clustering algorithm. In the upcoming article, we will learn about the K-Prototypes algorithm.


I hope this blog helps you understand the KModes clustering algorithm. Please give it a clap 👏. Happy learning!! 😊

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.


Source: https://www.analyticsvidhya.com/blog/2021/06/kmodes-clustering-algorithm-for-categorical-data/
