
Bayesian Mixture Models.


Python package to perform Dirichlet process mixture model clustering.
The Dirichlet process prior on the mixing distribution allows the number of classes to be inferred directly from the data, thus avoiding model-selection issues. Gaussian mixture models and Skew-t mixture models are available. Skew-t distributions provide robustness to outliers and to non-elliptical cluster shapes. Theoretically, Gaussian distributions are nested within Skew-t distributions, but in practice this extra flexibility comes at the cost of a more approximate inference. Inference is performed with a collapsed Gibbs sampling scheme.
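To see why a Dirichlet process prior can spread mass over many cluster counts, here is a minimal, package-independent sketch of the Chinese restaurant process that underlies it. With concentration `alpha`, the prior expected number of clusters grows roughly as `alpha * log(n)`; the function and names below are illustrative only, not part of this package.

```python
import numpy as np

def crp_sample(n, alpha, rng):
    """Draw one partition of n items from a Chinese restaurant process."""
    labels = np.empty(n, dtype=np.int64)
    labels[0] = 0
    n_clusters = 1
    for i in range(1, n):
        # Item i joins an existing cluster proportionally to its size,
        # or opens a new cluster with weight alpha.
        counts = np.bincount(labels[:i], minlength=n_clusters)
        probs = np.append(counts, alpha) / (i + alpha)
        labels[i] = rng.choice(n_clusters + 1, p=probs)
        if labels[i] == n_clusters:
            n_clusters += 1
    return labels

rng = np.random.default_rng(0)
draws = [crp_sample(500, alpha=1.0, rng=rng).max() + 1 for _ in range(50)]
# The average number of clusters is close to log(500) ~ 6.2 for alpha = 1
print(np.mean(draws))
```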

Installation

  • From GitHub:

    pip install git+https://github.com/chariff/BayesianMixtures.git
    

Dependencies

BayesianMixtures requires:

  • Python (>= 3.5)
  • NumPy (>= 1.18.5)
  • SciPy (>= 1.4.1)
  • scikit-learn (>= 0.23.2)

Brief guide to using BayesianMixtures.

Check out the package docstrings for more information.

1. Fitting an infinite Gaussian mixture model and making predictions.

from BayesianMixtures.gaussian_mixture import BayesianGaussianMixture
from sklearn.datasets import make_blobs
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from itertools import cycle
color_cycle = cycle(plt.rcParams['axes.prop_cycle'].by_key()['color'])

Simulated data.

n_samples = 3000
n_components = 4
dim = 2

data_samples, labels = make_blobs(n_samples=n_samples, n_features=2,
                                  centers=4, cluster_std=1, random_state=0)

Select the maximum number of MCMC (Markov chain Monte Carlo) iterations.

# Maximum number of MCMC iterations
max_iter = 1000
# Burn-in iterations
burn_in = 500

Instantiate a BayesianGaussianMixture object with default parameters.

cls = BayesianGaussianMixture(max_iter=max_iter, burn_in=burn_in,
                              max_components=100, verbose=2, verbose_interval=500,
                              n_components_init=100, random_state=2026,
                              init_params='kmeans')

Perform inference.

p = cls.fit(data_samples)

Maximum a posteriori (MAP) predicted partition.

map_predict_partition = p.map_predict(data_samples)

Maximum a posteriori (MAP) clustering probabilities.

map_probas = p.map_predict_proba(data_samples)
with plt.style.context('bmh'):
    fig, axes = plt.subplots(1, 2, figsize=(10, 5))
    ax1, ax2 = axes.ravel()
    alpha = .7
    # Scatter plot of the actual partition
    for label in set(labels):
        ax1.scatter(data_samples[labels == label, 0],
                    data_samples[labels == label, 1],
                    c=next(color_cycle), alpha=alpha, zorder=2)
    ax1.set_title('Actual')
    ax1.set_xlabel('feature 1')
    ax1.set_ylabel('feature 2')

    # Scatter plot of the MAP partition
    for label in set(map_predict_partition):
        ax2.scatter(data_samples[map_predict_partition == label, 0],
                    data_samples[map_predict_partition == label, 1],
                    c=next(color_cycle), alpha=alpha, zorder=2)
    ax2.set_title('Predicted (MAP)')
    ax2.set_xlabel('feature 1')
    ax2.set_ylabel('feature 2')
    plt.show()
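The MAP probabilities can also be used to flag ambiguous observations. A minimal, self-contained sketch with a hypothetical probability matrix, assuming map_predict_proba returns an (n_samples, n_clusters) array whose rows sum to one:

```python
import numpy as np

# Hypothetical probabilities for three points over three clusters
# (stand-in for the output of map_predict_proba).
map_probas = np.array([[0.95, 0.04, 0.01],
                       [0.40, 0.35, 0.25],
                       [0.05, 0.90, 0.05]])

# An observation is "uncertain" when no cluster dominates its row.
max_proba = map_probas.max(axis=1)
uncertain = np.flatnonzero(max_proba < 0.6)
print(uncertain)  # -> [1]
```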

MAP partition

Log posterior trace.

with plt.style.context('bmh'):
    fig, axes = plt.subplots(1, 2, figsize=(10, 5))
    ax1, ax2 = axes.ravel()
    ax1.plot(p.logposteriors, linewidth=1, c='black')
    ax1.set_title('trace')
    ax1.set_xlabel('MCMC iterations')
    ax1.set_ylabel('Log posterior evaluation')

    ax2.plot(p.logposteriors[burn_in:], linewidth=.6, c='black')
    ax2.set_title('sampling trace')
    ax2.set_xlabel('MCMC iterations')

    plt.show()

Log posterior trace

The sampling trace looks stationary with low autocorrelation. Note that it took roughly 500 iterations for the Markov chain to reach the target distribution. One could run parallel chains to check whether they all converge to the same posterior distribution.
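Parallel chains can be compared with the Gelman-Rubin potential scale reduction factor: values close to 1 suggest the chains are sampling the same distribution. A minimal sketch on synthetic traces (this diagnostic is not part of the package; in practice each chain would be a post-burn-in `p.logposteriors` array):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for equal-length chains.
    chains: array of shape (n_chains, n_iters)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)         # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    var_hat = (n - 1) / n * w + b / n       # pooled posterior variance estimate
    return np.sqrt(var_hat / w)

# Four well-mixed synthetic chains: R-hat should be close to 1.
rng = np.random.default_rng(1)
chains = rng.normal(0.0, 1.0, size=(4, 500))
print(gelman_rubin(chains))
```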

In the following section we will show an example of how to use the sampled partitions to assess the clustering uncertainty.

The following function calculates an average of the co-clustering matrices from the sampled partitions to obtain the posterior co-clustering probabilities.

from numba import jit

@jit(nopython=True)
def coclustering(partitions):
    n_mcmc_samples, n_samples = partitions.shape
    co_clustering = np.zeros(shape=(n_samples, n_samples))
    for partition in partitions:
        for i in range(n_samples):
            label_obs = partition[i]
            for j in range(n_samples):
                if partition[j] == label_obs:
                    co_clustering[i, j] += 1
    co_clustering /= n_mcmc_samples
    return co_clustering
# explored partitions in the posterior mcmc draws
partitions = p.partitions
# compute the average co-clustering matrix
# BEWARE of your sample size when running this function.
coclust = coclustering(partitions)
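If numba is not available, the inner double loop can be replaced by one vectorized boolean comparison per draw. A sketch of this alternative (same output, same O(n_samples**2) memory caveat as above):

```python
import numpy as np

def coclustering_vectorized(partitions):
    """Average co-clustering matrix using one (n, n) boolean
    comparison per MCMC draw instead of a triple loop."""
    partitions = np.asarray(partitions)
    n_mcmc_samples, n_samples = partitions.shape
    co = np.zeros((n_samples, n_samples))
    for partition in partitions:
        # True exactly where items i and j share a label in this draw.
        co += partition[:, None] == partition[None, :]
    return co / n_mcmc_samples

parts = np.array([[0, 0, 1, 1],
                  [0, 0, 0, 1]])
print(coclustering_vectorized(parts))
```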

Heatmap of the posterior co-clustering probabilities.

# Plot co-clustering matrix.
import seaborn as sns
g = sns.clustermap(coclust, metric='euclidean', method='average', cmap='viridis', 
                   col_cluster=True, row_cluster=True,  dendrogram_ratio=0.1, 
                   xticklabels=False, yticklabels=False, figsize=(7, 7))
ax = g.ax_heatmap
ax.set_title("Average co-clustering matrix")
ax.set_xlabel("Observations")
ax.set_ylabel("Observations")

g.ax_cbar.set_position((0.01, 0.4, .05, .2))
g.ax_row_dendrogram.set_visible(False)
g.ax_col_dendrogram.set_visible(False)
plt.show()

Co-clustering probabilities

To obtain a point estimate of the clustering, one could minimize a loss between the co-clustering matrix of each sampled partition and the posterior co-clustering probabilities, then report the minimizing partition.
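A minimal sketch of this idea: among the sampled partitions, pick the one whose co-clustering matrix is closest, in squared (Binder-style) loss, to the posterior co-clustering probabilities. The helper name below is ours, not part of the package.

```python
import numpy as np

def binder_point_estimate(partitions, coclust_probs):
    """Return the sampled partition whose co-clustering matrix minimizes
    the squared Frobenius loss to the posterior co-clustering probabilities."""
    best_idx, best_loss = 0, np.inf
    for idx, partition in enumerate(partitions):
        co = (partition[:, None] == partition[None, :]).astype(float)
        loss = np.sum((co - coclust_probs) ** 2)
        if loss < best_loss:
            best_idx, best_loss = idx, loss
    return partitions[best_idx]

# Toy posterior draws over four observations.
parts = np.array([[0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 1, 1, 1]])
probs = np.mean([(q[:, None] == q[None, :]).astype(float) for q in parts], axis=0)
print(binder_point_estimate(parts, probs))  # -> [0 0 1 1]
```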

