Session Program

 

  • 12 July 2017
  • 08:00 AM - 10:00 AM
  • Room: Giardino
  • Chairs: Marie-Jeanne Lesot, Christophe Marsala, Nikhil R. Pal and Arthur Guillon

Soft Subspace Clustering

Abstract - This paper proposes two methods: a cluster identification method for 3-way dissimilarity data among objects over times (or subjects), and a cluster scaling method for dissimilarity data among objects. Both methods are based on the comparative quantification model, which quantifies the relationship between a pair of clusters, or between a cluster and a basis spanning the subspace that constructs a scale. The merits of these methods are that the obtained clusters are "comparable" over times (or subjects), and that an "adaptable scale" is supplied for the observed dissimilarity between pairs of objects, reducing the dimensionality of the observed data and explaining the dissimilarity relationships among objects in a lower-dimensional subspace. Numerical examples investigating educational effectiveness with cognitive 3-way dissimilarity data from students demonstrate the improved performance of the proposed methods.
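The comparative quantification model itself is not reproduced in this abstract. As a point of reference for the stated goal, embedding a dissimilarity matrix in a lower-dimensional subspace, the following is a minimal sketch of classical multidimensional scaling, a standard baseline and not the authors' method:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed an (n x n) dissimilarity matrix D into `dim` dimensions.
    Standard Torgerson MDS; a baseline, not the paper's scaling method."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigh returns ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]    # keep the top `dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```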
Abstract - Fuzzy co-clustering is an extension of FCM-type clustering in which the within-cluster-error measure of FCM is replaced by an aggregation degree of two types of fuzzy memberships, with the goal of estimating object-item pairwise clusters from their co-occurrence information. This paper proposes a noise rejection scheme for FCM-type co-clustering models, constructed on the basis of the probabilistic co-clustering concept. Noise FCM was originally achieved by introducing an additional noise cluster into FCM, where the noise cluster was assumed to have a uniform prototype distribution; a similar concept was implemented in probabilistic concept-based co-clustering for robust estimation. The main contribution of this paper is to demonstrate that the uniform distribution concept can also be useful in FCM-type co-clustering models, even though their objective functions are not designed on a probabilistic basis.
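The noise-cluster idea referenced here is easiest to see in the original Noise FCM setting, sketched below on plain vector data; the fuzzifier `m` and noise distance `delta` are illustrative parameters, and the paper's contribution applies the same idea to co-clustering objectives rather than prototype distances:

```python
import numpy as np

def noise_fcm(X, c, m=2.0, delta=1.0, n_iter=100, seed=0):
    """FCM with a Dave-style noise cluster: a virtual cluster sitting at
    constant distance `delta` from every point absorbs outliers."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]          # initial prototypes
    for _ in range(n_iter):
        d2 = np.maximum(((X[:, None] - V[None]) ** 2).sum(-1), 1e-12)  # (n, c)
        inv = d2 ** (-1.0 / (m - 1.0))
        inv_noise = delta ** (-2.0 / (m - 1.0))          # noise-cluster term
        U = inv / (inv.sum(axis=1, keepdims=True) + inv_noise)
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]           # prototype update
    u_noise = 1.0 - U.sum(axis=1)                        # noise memberships
    return U, u_noise, V
```

Points far from all prototypes end up with most of their membership in the noise cluster, so they barely influence the prototype updates.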
Abstract - Fuzzy co-clustering is a basic technique for analyzing co-cluster structures in co-occurrence information among objects and items. When, besides the co-occurrence information among objects and items, an intrinsic relation among items and other ingredients is also available, more useful co-cluster structures can be expected in the resulting three-mode co-occurrence relation. In this paper, the conventional fuzzy clustering for categorical multivariate data (FCCM) algorithm is extended by utilizing three types of fuzzy memberships, for objects, items, and ingredients, where the aggregation degree of the three elements in each co-cluster is maximized through iterative updating of the memberships. The characteristic features of the proposed method are demonstrated through several numerical experiments, including a school lunch calendar analysis.
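A minimal sketch of the three-membership idea, assuming entropy-regularized, FCCM-style alternating updates over a three-mode co-occurrence tensor `R[i, j, k]`; the penalty weights `lam_*` and the normalization constraints are assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(A, axis):
    A = A - A.max(axis=axis, keepdims=True)   # numerical stability
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

def three_mode_fccm(R, C, lam_u=1.0, lam_w=1.0, lam_z=1.0, n_iter=50, seed=0):
    """Alternately update object (u), item (w) and ingredient (z) memberships
    to raise the aggregation sum_c sum_ijk u[c,i] w[c,j] z[c,k] R[i,j,k]."""
    rng = np.random.default_rng(seed)
    n, p, q = R.shape
    u = softmax(rng.normal(size=(C, n)), axis=0)   # sum_c u[c,i] = 1
    w = softmax(rng.normal(size=(C, p)), axis=1)   # sum_j w[c,j] = 1
    z = softmax(rng.normal(size=(C, q)), axis=1)   # sum_k z[c,k] = 1
    for _ in range(n_iter):
        u = softmax(np.einsum('cj,ck,ijk->ci', w, z, R) / lam_u, axis=0)
        w = softmax(np.einsum('ci,ck,ijk->cj', u, z, R) / lam_w, axis=1)
        z = softmax(np.einsum('ci,cj,ijk->ck', u, w, R) / lam_z, axis=1)
    return u, w, z
```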
Abstract - This paper studies a well-established fuzzy subspace clustering paradigm and identifies a discontinuity in the produced solutions, which assigns neighboring points to different clusters and fails to identify the expected subspaces in these situations. To alleviate this drawback, a regularization term is proposed, inspired by graph clustering tasks such as spectral clustering. A new cost function is introduced, and a new algorithm based on alternating optimization, called Weighted Laplacian Fuzzy Clustering, is proposed and experimentally studied.
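The regularization referenced above is sketched below as the standard graph-smoothness penalty tr(UᵀLU) on a kNN graph, the usual term that spectral-clustering-inspired methods add to keep neighboring points in the same cluster; the kNN construction and weighting are assumptions, and the paper's exact cost function may differ:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_penalty(X, U, n_neighbors=10):
    """Graph-smoothness term tr(U^T L U) = 0.5 * sum_kl W_kl ||u_k - u_l||^2,
    where U holds one membership row per point and L is the unnormalized
    Laplacian of a kNN graph on X. Large values flag neighbor points that
    receive very different memberships."""
    W = kneighbors_graph(X, n_neighbors, mode='connectivity',
                         include_self=False)
    W = 0.5 * (W + W.T)                                  # symmetrize
    deg = np.asarray(W.sum(axis=1)).ravel()
    L = np.diag(deg) - W.toarray()                       # L = D - W
    return np.trace(U.T @ L @ U)
```

Adding this penalty to the subspace clustering cost and minimizing by alternating optimization is the general shape of the approach described in the abstract.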
Abstract - Finding the appropriate number of clusters in the absence of prior information is a hard and sensitive problem in clustering and data analysis. In this paper, we present a new cluster validity index (CVI), called HF, able to find the optimal number of clusters present in given data. The HF index is based on the membership partition and can be seen as a generalisation of the Wu-and-Li (WL) and Tang (T) indices. Its particularity is, on the one hand, the integration of a generalised ad-hoc punishing term and, on the other hand, the use of the median distance between centroids, multiplied by the average number of data per cluster, to compute the separation. These contributions avoid the monotony from which the majority of CVIs suffer and yield a precise evaluation. The optimal number of clusters Copt corresponds to the minimum of the HF index. To ensure an effective choice of the optimal number of clusters, we propose an algorithm based on the HF and WL indices. The performance of the proposed index and algorithm is demonstrated through different experiments on image clustering using the Fuzzy C-Means (FCM) algorithm. The HF index's ability to appropriately determine the number of clusters is compared with those of the WL, T, and Xie-Beni (XB) indices under different initialisations.
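The HF formula itself is not given in this abstract. As a stand-in, the sketch below shows the generic selection loop with the cited Xie-Beni index: run FCM for each candidate c and keep the c that minimizes the index, which is exactly how Copt is selected with HF:

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means, just to feed the validity index below."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        d2 = np.maximum(((X[:, None] - V[None]) ** 2).sum(-1), 1e-12)
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        V = ((U ** m).T @ X) / (U ** m).sum(axis=0)[:, None]
    return U, V

def xie_beni(X, U, V, m=2.0):
    """XB = compactness / (n * min squared centroid gap); lower is better."""
    d2 = ((X[:, None] - V[None]) ** 2).sum(-1)
    compact = ((U ** m) * d2).sum()
    gaps = [((V[i] - V[j]) ** 2).sum() for i in range(len(V))
            for j in range(len(V)) if i != j]
    return compact / (len(X) * min(gaps))

X = np.random.default_rng(1).normal(size=(300, 2))   # toy data
best_c = min(range(2, 10), key=lambda c: xie_beni(X, *fcm(X, c)))
```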
Abstract - In comparison with flat clustering methods such as K-means, hierarchical clustering and co-clustering methods are more advantageous: hierarchical clustering is capable of revealing the internal connections of clusters, and co-clustering can yield clusters of both data instances and features. Interested in organizing co-clusters in hierarchies and in discovering cluster hierarchies inside co-clusters, in this paper we propose SHCoClust, a scalable similarity-based hierarchical co-clustering method. Besides possessing the above-mentioned advantages in unison, SHCoClust is able to employ kernel functions, thanks to its use of inner products. Furthermore, since all similarities lie between 0 and 1, the input of SHCoClust can be sparsified by threshold values, so that less memory and less time are required for storage and computation. This grants SHCoClust scalability, i.e., the ability to process relatively large datasets with reduced and limited computing resources. Our experiments demonstrate that SHCoClust significantly outperforms conventional hierarchical clustering methods. In addition, when the input similarity matrices obtained by linear and Gaussian kernels are sparsified, SHCoClust preserves clustering quality even with largely sparsified input. Consequently, up to 86% time gain and on average 75% memory gain are achieved.
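A minimal sketch of the sparsification idea on a plain (not co-clustered) similarity matrix: Gaussian-kernel similarities in (0, 1] are thresholded to zero, then hierarchical clustering runs on 1 - similarity. The `gamma` and `threshold` values are illustrative, and SHCoClust's inner-product co-clustering structure is not reproduced here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def sparsified_kernel_hac(X, gamma=1.0, threshold=0.1, n_clusters=3):
    """Threshold a Gaussian-kernel similarity matrix, then cluster
    hierarchically (average linkage) on the induced distances."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    S = np.exp(-gamma * d2)        # similarities in (0, 1]
    S[S < threshold] = 0.0         # sparsify; at scale, store as scipy.sparse
    D = 1.0 - S                    # zeroed pairs become maximally distant
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='average')
    return fcluster(Z, n_clusters, criterion='maxclust')
```

The memory and time gains reported in the abstract come from storing and traversing only the surviving (above-threshold) similarities.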