Indexed In
  • Academic Journals Database
  • Open J Gate
  • Genamics JournalSeek
  • JournalTOCs
  • ResearchBible
  • Ulrich's Periodicals Directory
  • Electronic Journals Library
  • RefSeek
  • Hamdard University
  • EBSCO A-Z
  • OCLC-WorldCat
  • Scholarsteer
  • SWB online catalog
  • Virtual Biology Library (vifabio)
  • Publons
  • MIAR
  • Geneva Foundation for Medical Education and Research
  • Euro Pub
  • Google Scholar

Abstract

Information Theory Based Feature Selection for Multi-Relational Naive Bayesian Classifier

Vimalkumar B Vaghela, Kalpesh H Vandra and Nilesh K Modi

 Today, data are commonly stored in relational structures. The usual approach to mining such data is to join several relations into a single relation via foreign-key links, a process known as flattening. Flattening can cause problems such as long processing times, data redundancy, and statistical skew. The critical issue is therefore how to mine data directly across numerous relations; the approach that addresses this issue is multi-relational data mining (MRDM). A further issue is that irrelevant or redundant attributes in a relation may contribute nothing to classification accuracy. Feature selection is thus an essential data pre-processing step in multi-relational data mining: by filtering irrelevant or redundant features out of the relations, we improve classification accuracy, achieve good time performance, and improve the comprehensibility of the models. We propose an entropy-based feature selection method for the multi-relational naïve Bayesian classifier. The method uses the InfoDist and Pearson's correlation parameters to filter irrelevant and redundant features out of the multi-relational database and thereby enhance classification accuracy. We evaluated our algorithm on the PKDD financial dataset and achieved better accuracy than existing feature selection methods.
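The filtering scheme the abstract describes can be illustrated with a minimal sketch: rank features by an entropy-based relevance score, then discard features that correlate too strongly with an already-selected one. Note that the paper's InfoDist measure is not defined in the abstract; this sketch substitutes standard information gain as the entropy-based criterion, and the thresholds and data are purely illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Information gain of a discrete feature with respect to the class."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def pearson(x, y):
    """Pearson correlation coefficient between two numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(features, labels, gain_threshold=0.1, corr_threshold=0.9):
    """Keep features whose information gain exceeds gain_threshold (relevance),
    then drop any feature too correlated with one already kept (redundancy)."""
    ranked = sorted(features, key=lambda name: info_gain(features[name], labels),
                    reverse=True)
    kept = []
    for name in ranked:
        if info_gain(features[name], labels) < gain_threshold:
            continue  # irrelevant: carries too little class information
        if all(abs(pearson(features[name], features[k])) < corr_threshold
               for k in kept):
            kept.append(name)  # not redundant with any selected feature
    return kept

# Toy example: "copy" duplicates "relevant", "noise" is uninformative.
features = {"relevant": [0, 0, 1, 1], "copy": [0, 0, 1, 1], "noise": [0, 1, 0, 1]}
labels = [0, 0, 1, 1]
print(select_features(features, labels))  # → ['relevant']
```

In a multi-relational setting this filter would be applied per relation before the naïve Bayesian classifier aggregates evidence across the foreign-key links, avoiding the flattening step criticized above.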