Mining Mixed Data Bases Using Machine Learning Algorithms.

MCPR (2022)

Abstract
We discuss numerical algorithms (based on metric spaces) to explore databases (DBs) even when such DBs are not metric. Three issues are treated: 1. determination of the minimum equivalent sample, 2. the encoding of categorical variables, and 3. data analysis. We illustrate the methodology with an experimental mixed DB consisting of 29,267 tuples and 15 variables, of which 9 are categorical and 6 are numerical. Firstly, we show that information preservation is possible with a (possibly) much smaller sample. Secondly, we approximate the best possible encoding of the 9 categorical variables by applying a statistical algorithm which extracts the code after testing an appropriate number of alternatives for each instance of the variables. To do this we solve two technical issues, namely: a) how to determine that the attributes are already normal, and b) how to find the best regressive function of an encoded attribute as a function of another. Thirdly, with the transformed DB (now purely numerical) it is possible to find the regressive approximation errors of any attribute relative to another. Hence, we find those attributes which are closer to one another within a predefined threshold (85%). We argue that such variables define a cluster. Now we may use algorithms such as Multi-Layer Perceptron Networks (MLPN) and/or Kohonen maps (SOM). The classification of a target attribute "salary" is achieved with an MLPN, which is shown to yield better results than most traditional conceptual analysis algorithms. Later, by training a SOM for the inferred clusters, we disclose the characteristics of every cluster. Finally, we restore the DB's original values and obtain visual representations of the variables in each cluster, thus ending the mining process.
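The attribute-clustering step outlined above (pairwise regression fits over the encoded numerical attributes, then grouping attributes whose mutual fit reaches the 85% threshold) can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn, simple linear regressors, and an already-encoded numerical DataFrame; the names `pairwise_fit`, `threshold_clusters`, and `THRESHOLD` are hypothetical and not taken from the paper.

```python
# Minimal sketch of regression-based attribute grouping, assuming the DB has
# already been reduced to a numerical (encoded) pandas DataFrame.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

THRESHOLD = 0.85  # attributes whose mutual fit reaches 85% share a cluster (assumption)

def pairwise_fit(df: pd.DataFrame) -> pd.DataFrame:
    """R^2 of regressing each attribute on every other attribute."""
    cols = df.columns
    scores = pd.DataFrame(np.eye(len(cols)), index=cols, columns=cols)
    for x in cols:
        for y in cols:
            if x == y:
                continue
            model = LinearRegression().fit(df[[x]], df[y])
            scores.loc[x, y] = model.score(df[[x]], df[y])
    return scores

def threshold_clusters(scores: pd.DataFrame, tau: float = THRESHOLD):
    """Greedy grouping: attributes whose mutual fit exceeds tau share a cluster."""
    clusters, assigned = [], set()
    for col in scores.columns:
        if col in assigned:
            continue
        members = [c for c in scores.columns
                   if c not in assigned
                   and min(scores.loc[col, c], scores.loc[c, col]) >= tau]
        clusters.append(members)
        assigned.update(members)
    return clusters
```

The subsequent steps described in the abstract, i.e. classifying the "salary" attribute with an MLPN and characterizing the inferred clusters with a SOM, would then operate on these groups using standard MLP and SOM implementations.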
Keywords
Entropy, Categorical variables, Central limit theorem, Ascent algorithm, Self-organized maps