Hi, this looks really interesting. I am working with a team trying to improve optical character recognition on old, "noisy" text images that humans can read but OCR cannot. We have had some success using a neural net on training data with OCRopy (https://github.com/digiah/oldOCR; my GitHub username is rcrath). But I have always thought that treating the text as a data stream, taking a noise sample of the garbage text that OCR produces when it fails, and subtracting that noise from the signal might reduce the garbage in an OCR file enough to make other approaches better focused. I realize that is far from what you are doing, but do you think it would be feasible for us to try to adapt your code?
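The noise-sample-and-subtract idea above could be sketched loosely for text, by analogy with audio noise profiling: build a profile from a sample of known OCR garbage and filter out lines that resemble it. This is only a minimal illustration; the function names, the bigram representation, the sample strings, and the 0.01 threshold are all assumptions, not anything from the linked repository.

```python
from collections import Counter

def char_bigrams(text):
    """Character bigrams of a string, as a frequency Counter."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def noise_score(line, noise_profile, total):
    """Average noise-profile frequency of the line's distinct bigrams.
    High scores mean the line resembles the known OCR garbage."""
    grams = char_bigrams(line)
    if not grams:
        return 0.0
    return sum(noise_profile[g] for g in grams) / (len(grams) * total)

# "Noise sample": hypothetical text OCR produced on an unreadable region.
garbage_sample = "~'#; l1|!i ,,^ %$ rn.. ;;'~ |1l!"
noise_profile = char_bigrams(garbage_sample)
total = sum(noise_profile.values())

ocr_lines = [
    "The quick brown fox jumps over the lazy dog.",
    "~'#; |1l! ,,^ %$",
]
# "Subtract the noise": drop lines that look like the noise sample.
cleaned = [ln for ln in ocr_lines
           if noise_score(ln, noise_profile, total) < 0.01]
```

A real pipeline would presumably work at a finer granularity than whole lines and learn the threshold from labeled pages, but the profile-then-subtract structure is the same.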
Re: Fab!
Date: 2017-10-02 12:27 am (UTC)

One thing that strikes my ear in the samples, most obviously in the street-noise one, is that the algorithm acts more like a gate than noise removal: the horns and traffic are still clearly audible during the speech sections.
I would love to see this adapted to guitar noise suppression!
Thanks for this work.