
Similarity-based Self-Learning for Image Denoising

Image denoising is nowadays commonly tackled with deep convolutional neural networks. However, these networks usually require many pairs of images for training, and they do not exploit the self-similarities and repeated patterns present within an image. A recent study proposes a novel denoising method that requires only a single noisy image for training.

Noisy image. Image credit: Fernando Garcia via Flickr, CC BY 2.0

The method looks for similar patches in the image, registers them, and forms an image-specific training dataset. To reduce the demand for computing resources, the algorithm generates a number of similar images and then randomly selects a pair of them at each training step. After training, a denoised image is produced from the original noisy image. Experiments demonstrate that the suggested approach removes noise better than other current image denoising methods while preserving structural subtleties.
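The patch-search step described above can be sketched in plain NumPy. This is a hedged illustration, not the authors' implementation: the function name `similar_images` and all parameter values are hypothetical, and for simplicity each pixel is replaced by the center of one of its most similar nearby patches, with the patch centered at the pixel itself excluded.

```python
import numpy as np

def similar_images(noisy, num_images=4, patch=3, search=7):
    """Build `num_images` 'similar' versions of a single noisy image:
    each pixel is replaced by the center pixel of one of its most
    similar patches found in a local search window (excluding the
    patch centered at the pixel itself). Hypothetical sketch only."""
    h, w = noisy.shape
    p, s = patch // 2, search // 2
    pad = np.pad(noisy, p + s, mode="reflect")
    out = np.empty((num_images, h, w), dtype=noisy.dtype)
    for i in range(h):
        for j in range(w):
            ci, cj = i + p + s, j + p + s  # pixel position in padded coords
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            cands = []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    if di == 0 and dj == 0:
                        continue  # skip the patch centered at the pixel
                    q = pad[ci + di - p:ci + di + p + 1,
                            cj + dj - p:cj + dj + p + 1]
                    d = float(np.sum((ref - q) ** 2))  # patch distance
                    cands.append((d, pad[ci + di, cj + dj]))
            cands.sort(key=lambda t: t[0])
            # the k-th "similar image" takes the center pixel of the
            # k-th closest patch
            for k in range(num_images):
                out[k, i, j] = cands[k][1]
    return out
```

During training, one would then repeatedly sample a random pair from these generated images and use one as input and the other as target, which avoids re-running the expensive patch search at every step.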

The key idea behind denoising methods is to perform a mean/averaging operation, either locally or non-locally. An observation on classic denoising methods is that non-local mean (NLM) outcomes are typically superior to locally denoised results. Despite achieving the best performance in image denoising, supervised deep denoising methods require paired noise-clean data, which are often unavailable. To address this challenge, Noise2Noise methods build on the fact that paired noise-clean images can be replaced by paired noise-noise images, which are easier to collect. However, in many scenarios even the collection of paired noise-noise images is impractical. To bypass labeled images, Noise2Void methods predict masked pixels from their surroundings in a single noisy image only. Unfortunately, neither Noise2Noise nor Noise2Void methods utilize self-similarities in an image as NLM methods do, even though self-similarities/symmetries play a critical role in modern sciences. Here we propose Noise2Sim, an NLM-inspired self-learning method for image denoising. Specifically, Noise2Sim leverages self-similarities of image patches and learns to map between the center pixels of similar patches for self-consistent image denoising. Our statistical analysis shows that Noise2Sim tends to be equivalent to Noise2Noise under mild conditions. To accelerate the process of finding similar image patches, we design an efficient two-step procedure to provide data for Noise2Sim training, which can be iteratively conducted if needed. Extensive experiments demonstrate the superiority of Noise2Sim over Noise2Noise and Noise2Void on common benchmark datasets.
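The abstract's point that noise-noise pairs can stand in for noise-clean pairs rests on the noise having zero mean: under an L2 loss, the optimal prediction for a noisy target coincides with the clean value. A toy numerical sanity check (hypothetical values, not the paper's statistical analysis) makes this concrete for a single constant "pixel":

```python
import numpy as np

# Toy check: with zero-mean noise, the constant c minimizing
# sum((c - target)^2) over noisy targets is their sample mean,
# which converges to the clean value -- the reason noisy targets
# (and, per the paper, similar-patch targets) can replace clean ones.
rng = np.random.default_rng(0)
clean = 0.7                                       # hypothetical true pixel value
noisy_targets = clean + rng.normal(0.0, 0.1, 200_000)
c = noisy_targets.mean()                          # least-squares constant fit
assert abs(c - clean) < 1e-2
```

The same argument applies pixel-wise when a network is trained with an L2 loss between the centers of similar patches, provided the noise realizations at those pixels are independent and zero-mean.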

Link: https://arxiv.org/abs/2011.03384
