Mohammad Hasheminejad

Academic rank: Assistant Professor
Address: University of Jiroft
Education: Ph.D. in Electrical Engineering (Telecommunication)
Phone: (034)-43347061
Faculty:

Research

Title
Instance-Based Sparse Classifier Fusion for Speaker Verification
Type Article
Keywords
Speaker Recognition, Speaker Verification, Ensemble Classification, Classifier Fusion, IBSparse
Researchers Mohammad Hasheminejad, Hassan Farsi

Abstract

This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient way to improve the performance of a classification system: it exploits the complementary strengths of a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim by means of a matching score. This score measures the resemblance between the input utterance and the pre-enrolled target speakers. Since a speech signal carries a variety of information, state-of-the-art speaker verification systems use a set of complementary classifiers to reach a reliable verification decision. Such a system receives several scores as input and makes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated by logistic regression. Further research has extended this approach by adding different regularization terms to the logistic regression formulation. However, this type of ensemble classification overlooks two points: the correlation among the base classifiers and the fact that some base classifiers are better suited than others to a given test instance. We address both problems with an instance-based method for classifier ensemble selection and weight determination. Extensive experiments on the NIST 2004 Speaker Recognition Evaluation (SRE) corpus, reported in terms of EER, minDCF, and minCLLR, show the effectiveness of the proposed method.
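
For illustration only, the sketch below shows the kind of score-level fusion baseline the abstract describes, in which matching scores from several base classifiers are combined with weights estimated by logistic regression. It is not the proposed IBSparse method; the synthetic data, number of base classifiers, and the 0.5 decision threshold are all hypothetical choices for the example.

```python
# Minimal sketch of logistic-regression score fusion for speaker verification
# (the weighted-linear-combination baseline described in the abstract).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical development data: each row holds the matching scores that three
# base classifiers assign to one verification trial; labels are 1 (target) / 0 (impostor).
rng = np.random.default_rng(0)
dev_scores = np.vstack([rng.normal(1.0, 1.0, (200, 3)),    # target trials
                        rng.normal(-1.0, 1.0, (200, 3))])  # impostor trials
dev_labels = np.concatenate([np.ones(200), np.zeros(200)])

# Estimate one fusion weight per base classifier, plus a bias, via logistic regression.
fuser = LogisticRegression()
fuser.fit(dev_scores, dev_labels)

# Fuse the scores of a new trial into a single verification score and take a
# binary accept/reject decision at an illustrative threshold of 0.5.
trial_scores = np.array([[0.8, 1.2, -0.1]])
fused_probability = fuser.predict_proba(trial_scores)[0, 1]
decision = "accept" if fused_probability >= 0.5 else "reject"
print(f"fused score = {fused_probability:.3f} -> {decision}")
```

In practice, the operating threshold would be tuned on development data against the same metrics the paper reports (EER, minDCF, minCLLR) rather than fixed at 0.5.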