Please use this identifier to cite or link to this item:
http://148.72.244.84/xmlui/handle/xmlui/5910
Title: A Novel Approach of Speech Recognition Using System Identification
Authors: Ibrahim Sadoon Fatah
Keywords: System Identification; ARX; FIR Filter; Wiener Filter; Speech Recognition; Signal Processing
Issue Date: 2019
Publisher: University of Diyala – College of Engineering
Citation: https://djes.info/index.php/djes
Abstract: With the growth of artificial intelligence and the use of voice commands, the need for high-accuracy speech recognition is increasing. Much research has been carried out in this area using different methods and approaches. In this research, two algorithms are introduced: autoregressive (ARX) system identification and the FIR Wiener filter. The objective of this research is to show the robustness of system identification for speech recognition. Both algorithms were implemented and tested in MATLAB; full sentences were recorded from different subjects under two conditions, a clear and a noisy background. Each sentence was recorded twice per subject; the first recording was used for testing and the second for validation. The results show that both algorithms give an accurate prediction when the data come from the same subject with a clear background. The advantage of system identification over the Wiener filter becomes evident when noisy signals are used. A further advantage of using system identification for speech recognition is that it can distinguish the same sentence spoken by different subjects, whereas the Wiener filter in some cases passes them as coming from the same subject. This could be a serious issue if the algorithm is used for security purposes.
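The abstract does not give the paper's exact formulation, but the idea of comparing recordings through identified model coefficients can be sketched as follows. This is a minimal, hypothetical Python illustration (the paper itself used MATLAB): an output-only autoregressive fit by least squares, with `fit_ar` and `match_score` as assumed helper names, where a small coefficient distance between two recordings suggests they come from the same subject.

```python
import numpy as np

def fit_ar(signal, order=10):
    """Estimate AR coefficients of a signal by least squares.

    Solves the regression y[n] ~ sum_k a_k * y[n-k]; this is the
    output-only special case of ARX identification, used here
    purely as an illustrative sketch.
    """
    y = np.asarray(signal, dtype=float)
    # Regression matrix: column k holds the samples at lag k+1.
    X = np.column_stack(
        [y[order - 1 - k : len(y) - 1 - k] for k in range(order)]
    )
    target = y[order:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coeffs

def match_score(sig_a, sig_b, order=10):
    """Euclidean distance between the AR coefficient vectors of two
    recordings; a smaller score means the recordings are more alike."""
    return float(np.linalg.norm(fit_ar(sig_a, order) - fit_ar(sig_b, order)))
```

In a recognition setting one would fit coefficients to the validation recording of each subject and accept a test recording whose `match_score` against a stored model falls below a chosen threshold; the threshold and model order are tuning choices not specified by the abstract.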
URI: http://148.72.244.84:8080/xmlui/handle/xmlui/5910
ISSN: 1999-8716
Appears in Collections: مجلة ديالى للعلوم الهندسية / Diyala Journal of Engineering Sciences (DJES)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.