Exploring Representativeness and Informativeness for Active Learning
Published: 2017-03-17 16:12:21
Authors: Du, B (Du, Bo); Wang, ZM (Wang, Zengmao); Zhang, LF (Zhang, Lefei); Zhang, LP (Zhang, Liangpei); Liu, W (Liu, Wei); Shen, JL (Shen, Jialie); Tao, DC (Tao, Dacheng)
Source: IEEE TRANSACTIONS ON CYBERNETICS, Volume: 47, Issue: 1, Pages: 14-26, DOI: 10.1109/TCYB.2015.2496974, Published: JAN 2017
Abstract: How can we find a general way to choose the most suitable samples for training a classifier, even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role in constructing a refined training set to improve classification performance in a variety of applications, such as text analysis, image recognition, and social network modeling. Although combining the representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well only under certain data structures. Can we then find a way to fuse the two active sampling criteria without any assumption on the data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are carefully designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertainty measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures, and a modified best-versus-second-best strategy to construct the uncertainty measure. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over state-of-the-art active learning algorithms.
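To make the abstract's query criteria concrete, the sketch below illustrates the general idea in a hedged form: an RBF kernel scores each candidate's representativeness of the unlabeled pool and its diversity from the labeled set, while a best-versus-second-best (BvSB) margin scores informativeness. This is not the authors' exact formulation; the scoring functions, the `gamma` bandwidth, and the `alpha` trade-off weight are all illustrative assumptions.

```python
import numpy as np

def rbf_similarity(a, b, gamma=0.5):
    # Radial basis function similarity; gamma is a hypothetical bandwidth choice.
    return np.exp(-gamma * np.sum((a - b) ** 2))

def bvsb_uncertainty(probs):
    # Best-versus-second-best margin: a small gap between the two most
    # probable classes signals an informative (uncertain) sample.
    p = np.sort(probs)[::-1]
    return 1.0 - (p[0] - p[1])

def score_candidates(unlabeled, labeled, probs, alpha=0.5):
    # Illustrative fusion (not the paper's exact rule): combine
    # representativeness of the unlabeled pool, diversity from the
    # labeled set, and BvSB informativeness into one ranking score.
    scores = []
    for i, x in enumerate(unlabeled):
        rep = np.mean([rbf_similarity(x, u) for u in unlabeled])  # representativeness
        div = 1.0 - max(rbf_similarity(x, l) for l in labeled)    # diversity
        inf = bvsb_uncertainty(probs[i])                          # informativeness
        scores.append(alpha * (rep + div) + (1 - alpha) * inf)
    return int(np.argmax(scores))  # index of the next sample to query

# Toy usage: the candidate near other unlabeled points, far from the
# labeled set, and with an uncertain prediction is selected.
unlabeled = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.1]])
labeled = np.array([[5.0, 5.0]])
probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.6, 0.4]])
print(score_candidates(unlabeled, labeled, probs))  # -> 0
```

Any similarity measure could replace the RBF kernel here, which mirrors the framework's claim that the triple measures are agnostic to the choice of similarity function.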