Clustering for everyday life — 2 of 2


In my previous post, I wrote about clustering and the k-means algorithm. In this post, I want to use those concepts and TensorFlow to write a simple example (and to help myself plan my next trip to Gotham City).

For this implementation we need Python (I use 3.7, but 2.x is fine too) and some packages:

  • matplotlib
  • TensorFlow
  • numpy
  • pandas

Installing those packages is simple:

  • for Python 2.x:
    • pip install <package name>
  • for Python 3.x:
    • pip3 install <package name>

In any case, you can follow the installation instructions in the documentation of each package.

So far so good? Ok, let’s dig into the code.

First of all, I defined the parameters of my problem:

  • number of points: 1000
  • number of clusters: 4
  • number of computational steps: 100

In this particular example, I used as a training set 1000 GPS positions generated randomly [lines 27 to 36] around the position 45.7, 9.1. If you have a file with real coordinates, you can load them and use those instead. Lines 34 to 36 display the training set:

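A minimal sketch of this first part (assuming TensorFlow 1.x; the 0.1-degree spread around the center is an assumption of mine, tune it to taste):

```python
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt

# Parameters of the problem
num_points = 1000    # number of points
num_clusters = 4     # number of clusters
num_steps = 100      # number of computational steps

# 1000 GPS positions generated randomly around (45.7, 9.1);
# the 0.1-degree standard deviation is an assumed value.
points = np.column_stack((
    np.random.normal(45.7, 0.1, num_points),   # latitudes
    np.random.normal(9.1, 0.1, num_points)))   # longitudes

# Display the training set
df = pd.DataFrame({'latitude': points[:, 0], 'longitude': points[:, 1]})
plt.scatter(df['longitude'], df['latitude'], s=5)
plt.show()
```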

In line 42, the vector values are converted into a constant usable by TensorFlow:
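Assuming the NumPy array from the sketch above is named points, that conversion is something like:

```python
# Wrap the NumPy training set in a TensorFlow constant
vectors = tf.constant(points)
```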

After randomly building the training set, we need the centroids [lines 44 to 50], converted into a variable that will be manipulated by TensorFlow. This is the key of the k-means algorithm: we need an initial set of centroids to start the iterations.
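A possible sketch of this step, picking num_clusters random points of the training set as initial centroids:

```python
# Shuffle the points and take the first num_clusters of them as
# initial centroids; a variable can be updated at every step.
centroids = tf.Variable(tf.slice(tf.random_shuffle(vectors),
                                 [0, 0], [num_clusters, -1]))

# Add one dimension to each tensor so that all the point-centroid
# differences can be computed in a single broadcast operation:
# expanded_vectors:   (1, num_points, 2)
# expanded_centroids: (num_clusters, 1, 2)
expanded_vectors = tf.expand_dims(vectors, 0)
expanded_centroids = tf.expand_dims(centroids, 1)
```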

The cost function for k-means is the distance between a point and its centroid, and the algorithm tries to minimize this cost function. As I wrote in my previous post, the distance between two GPS points can’t be calculated with the Euclidean distance, so it is necessary to introduce a more precise method; one such method is the spherical law of cosines. For this example, I used an approximation of the spherical law of cosines [lines 53 to 63]. This approximation works very well for city-scale distances and is more computationally efficient than the full formula. To know more about this approximation and its error, read this interesting post.
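One common approximation of this kind is the equirectangular projection; assuming that is the one intended here, the distance computation and the point-to-centroid assignment could be sketched like this:

```python
# Equirectangular approximation, with angles in radians:
#   x = delta_lon * cos(mean_lat)
#   y = delta_lat
#   d = R * sqrt(x^2 + y^2)
deg2rad = np.pi / 180.0
rad_vectors = expanded_vectors * deg2rad      # (1, num_points, 2)
rad_centroids = expanded_centroids * deg2rad  # (num_clusters, 1, 2)

delta = rad_vectors - rad_centroids           # (num_clusters, num_points, 2)
mean_lat = (rad_vectors[:, :, 0] + rad_centroids[:, :, 0]) / 2.0
x = delta[:, :, 1] * tf.cos(mean_lat)
y = delta[:, :, 0]
distances = 6371.0 * tf.sqrt(tf.square(x) + tf.square(y))  # km (Earth radius ~6371 km)

# Assign each point to its closest centroid
assignments = tf.argmin(distances, 0)
```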

And finally, the centroids are updated [line 65]:
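A sketch of the update step, moving each centroid to the mean position of the points assigned to it:

```python
# For each cluster, gather the points assigned to it and compute
# their mean; note that an empty cluster would produce NaN with
# this simple sketch.
means = tf.concat(
    [tf.reduce_mean(
        tf.gather(vectors,
                  tf.reshape(tf.where(tf.equal(assignments, c)), [-1])),
        axis=0, keepdims=True)
     for c in range(num_clusters)], axis=0)

update_centroids = tf.assign(centroids, means)
```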

Lines 68 to 75 initialize all the variables, instantiate the evaluation graph, run the algorithm, and visualize the results:
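A sketch of this last part, with the classic TensorFlow 1.x session loop:

```python
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for step in range(num_steps):
        _, centroid_values, assignment_values = sess.run(
            [update_centroids, centroids, assignments])

# Visualize the clustered points and the final centroids
plt.scatter(points[:, 1], points[:, 0], c=assignment_values, s=5)
plt.scatter(centroid_values[:, 1], centroid_values[:, 0],
            marker='x', s=150, c='red')
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.show()
```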


Conclusion:

My last two posts focused on an algorithm for clustering problems: k-means. This algorithm makes some assumptions about the data:

  • the variance of the distribution of each attribute is spherical
  • all variables have the same variance
  • the prior probability of each cluster is the same

If one of those assumptions is violated, the algorithm fails.

A possible drawback of this algorithm is the need to define, a priori, the number of clusters. If you have no idea what your clusters look like, you can choose another clustering algorithm like DBSCAN or OPTICS (those algorithms work on a density model instead of a centroid model), or you can introduce a post-processing step in k-means that aggregates (or splits) two or more centroids and then relaunches the entire algorithm on the same training set with the new set of centroids.

From the computational point of view, the k-means algorithm is linear in the number of data objects, while other clustering algorithms have quadratic complexity. This can be an important point to keep in mind.

Clustering for everyday life — 1 of 2


Let’s consider this scenario: I love walking, so when I visit a city I want to walk as much as possible, but I also want to optimize my time to see as many attractions as possible. Now I want to plan my next trip to Gotham City to visit some of Batman’s places. I found 1000 places where Batman appeared, and I have, at most, 4 days. I need to group those 1000 places into 4 buckets, each with its points close to a center where I can leave my car, so that I can plan each day of my trip. How can I do this?

This kind of problem can be classified as a clustering problem. But what is clustering? Clustering, or cluster analysis, is the task of grouping a set of data into a selection of homogeneous or similar items, where the concept of homogeneity or similarity has to be defined for the problem at hand. So, to solve this kind of problem, it is necessary to:

  • define the “resemblance” measure between elements (the concept of similarity)
  • find the subsets of elements that are “similar” according to the chosen measure

The algorithm determines which elements form a cluster and what degree of similarity unites them within it. Referring to my previous post, clustering is a problem that can be solved with algorithms that belong to the unsupervised methods, because the algorithm doesn’t have any information about the structure and characteristics of the clusters.

In particular, for this problem I’ll use the k-means algorithm: k-means is an algorithm that finds k groups (where k is defined a priori) in a given dataset. Each group is described by a centroid that represents its “center”. The concept of center always refers to the concept of distance that we have chosen for the specific problem.

For our problem, the concept of distance is simple, because it is the real distance between two points defined by a latitude and a longitude. For this reason, the Euclidean distance can’t be used, and it is necessary to introduce the spherical law of cosines to compute the correct distance between two geographical points.
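For reference, a minimal sketch of the spherical law of cosines in Python (the coordinates in the example are approximate and only for illustration):

```python
import math

EARTH_RADIUS_KM = 6371.0

def spherical_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS points,
    computed with the spherical law of cosines."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    delta_lambda = math.radians(lon2 - lon1)
    # Clamp to [-1, 1] to guard against floating-point overshoot
    cos_angle = (math.sin(phi1) * math.sin(phi2) +
                 math.cos(phi1) * math.cos(phi2) * math.cos(delta_lambda))
    return EARTH_RADIUS_KM * math.acos(max(-1.0, min(1.0, cos_angle)))

# Approximate coordinates of Milan and Bergamo: roughly 46 km
print(spherical_distance(45.464, 9.190, 45.698, 9.677))
```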

But how does the k-means algorithm work? It follows an iterative procedure:

  • choose k initial centroids (for example, k random points of the dataset)
  • assign each point to the closest centroid
  • recompute each centroid as the mean of the points assigned to it
  • repeat the last two steps until the centroids stop moving (or a maximum number of steps is reached)

Flow chart of the k-means algorithm

The popularity of this algorithm comes from its:

  • convergence speed
  • ease of implementation

On the other hand, the algorithm doesn’t guarantee reaching the global optimum: the quality of the final solution strongly depends on the initial set of centroids. Since the algorithm is extremely fast, it’s possible to run it several times and choose the best solution.

This algorithm starts with the definition of k clusters, where k is defined by the user. But how does the user know if k is the correct number? And how do they know if the clusters are “good” clusters? One possible metric to measure the quality of the clusters is the SSE (sum of squared errors), where the error is the distance from the cluster centroid to the current point. Because this error is squared, more emphasis is placed on points far from the centroid.
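As a sketch (the function name is mine, and it reuses the spherical_distance function defined above), the SSE of a clustering could be computed like this:

```python
def sse(points, centroids, assignments):
    """Sum of squared errors: for each point, the squared distance
    to the centroid of the cluster it is assigned to."""
    total = 0.0
    for (lat, lon), c in zip(points, assignments):
        d = spherical_distance(lat, lon, centroids[c][0], centroids[c][1])
        total += d ** 2
    return total
```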

In the next post, I’ll show a possible way to solve this problem in TensorFlow.