Query expansion using clustering of pseudo-relevant documents with query-sensitive similarity

Document Type: Research Paper

Authors

Abstract

Query expansion, as one of the query adaptation approaches, improves the retrieval effectiveness of information retrieval systems. Pseudo-relevance feedback (PRF) is a query expansion approach that assumes the top-ranked documents are relevant to the query concept and selects expansion terms from them. However, irrelevant documents may appear among the top-ranked documents. Many approaches, based on clustering or classification of documents, have been proposed for selecting relevant documents and discarding irrelevant ones; the key issue in query expansion is to draw expansion terms from relevant documents only. In this paper, we propose clustering the pseudo-relevant documents using a query-sensitive similarity measure, which is effective at grouping similar documents together. Query-sensitive similarity has yielded better results in document retrieval than term-based similarity, which motivates its use here. The clusters are ranked by their inner similarity, and a number of top-ranked clusters are selected for query expansion. Expansion terms are then extracted from the documents of the selected clusters using the Term Frequency-Inverse Document Frequency (TF-IDF) scoring function. Experiments conducted on the Medline (MED) dataset show that retrieval with queries expanded from the selected clusters outperforms both basic retrieval with the vector space model (VSM) and standard pseudo-relevance feedback, raising retrieval effectiveness.
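For illustration only, the pipeline described above can be read as five steps: initial VSM retrieval, query-sensitive similarity computation over the feedback documents, clustering, cluster ranking by inner similarity, and TF-IDF-based term selection. The Python sketch below shows one way those steps could fit together; it is not the paper's implementation. In particular, the query-sensitive similarity variant (document-document cosine weighted by the documents' mean similarity to the query), the average-link agglomerative clustering, and all parameter values are assumptions made for the example.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumptions: query-sensitive similarity = doc-doc cosine weighted by the
# documents' mean cosine to the query; clustering = average-link agglomerative
# clustering on the induced distances; cluster ranking = mean pairwise (inner)
# similarity; expansion terms = top TF-IDF terms from the selected clusters.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AgglomerativeClustering


def expand_query(query, corpus, n_feedback=10, n_clusters=3,
                 n_top_clusters=1, n_terms=5):
    vec = TfidfVectorizer(stop_words="english")
    doc_mat = vec.fit_transform(corpus)
    q_vec = vec.transform([query])

    # 1) Initial VSM retrieval: keep the top-ranked (pseudo-relevant) documents.
    q_scores = cosine_similarity(q_vec, doc_mat).ravel()
    top_idx = np.argsort(q_scores)[::-1][:n_feedback]
    fb_mat = doc_mat[top_idx]
    fb_q = q_scores[top_idx]

    # 2) Query-sensitive similarity between feedback documents:
    #    doc-doc cosine weighted by the pair's mean similarity to the query.
    dd_sim = cosine_similarity(fb_mat)
    qs_sim = dd_sim * (fb_q[:, None] + fb_q[None, :]) / 2.0

    # 3) Cluster the feedback documents using the query-sensitive similarity.
    distance = 1.0 - qs_sim / (qs_sim.max() + 1e-12)
    np.fill_diagonal(distance, 0.0)
    labels = AgglomerativeClustering(
        n_clusters=min(n_clusters, len(top_idx)),
        metric="precomputed", linkage="average").fit_predict(distance)

    # 4) Rank clusters by inner (mean pairwise) query-sensitive similarity.
    def inner_sim(members):
        if len(members) < 2:                      # singleton: use its query weight
            return qs_sim[members[0], members[0]]
        sub = qs_sim[np.ix_(members, members)]
        return (sub.sum() - np.trace(sub)) / (len(members) * (len(members) - 1))

    clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
    clusters.sort(key=inner_sim, reverse=True)

    # 5) Extract expansion terms by TF-IDF weight from the top-ranked clusters.
    chosen = np.concatenate(clusters[:n_top_clusters])
    term_scores = np.asarray(fb_mat[chosen].sum(axis=0)).ravel()
    terms = np.array(vec.get_feature_names_out())[np.argsort(term_scores)[::-1]]
    expansion = [t for t in terms[:n_terms + len(query.split())]
                 if t not in query.lower().split()][:n_terms]
    return query + " " + " ".join(expansion)
```

Under these assumptions, a call such as expand_query("heart disease treatment", corpus) would return the original query appended with the highest-weighted TF-IDF terms drawn only from the best-scoring cluster of pseudo-relevant documents, rather than from all top-ranked documents as in standard PRF.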

Keywords

