d461: Recurrent Convolutional Neural Networks (RCNN) for Image Compression

RCNN Image Compression: “Target-Quality Image Compression with Recurrent, Convolutional Neural Networks” https://arxiv.org/abs/1705.06687

By: Michele Covell, Nick Johnston, David Minnen, Sung Jin Hwang, Joel Shor, Saurabh Singh, Damien Vincent, George Toderici

Abstract

We introduce a stop-code tolerant (SCT) approach to training recurrent convolutional neural networks for lossy image compression. Our methods introduce a multi-pass training method to combine the training goals of high-quality reconstructions in areas around stop-code masking as well as in highly-detailed areas. These methods lead to lower true bitrates for a given recursion count, both pre- and post-entropy coding, even using unstructured LZ77 code compression. The pre-LZ77 gains are achieved by trimming stop codes. The post-LZ77 gains are due to the highly unequal distributions of 0/1 codes from the SCT architectures. With these code compressions, the SCT architecture maintains or exceeds the image quality at all compression rates, compared to JPEG and to RNN auto-encoders across the Kodak dataset. In addition, the SCT coding results in lower variance in image quality across the extent of the image, a characteristic that has been shown to be important in human ratings of image quality.
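The post-LZ77 claim is easy to sanity-check: a 0/1 stream skewed toward one symbol compresses well even under a generic LZ77-family codec, while a balanced stream does not. Below is a minimal sketch using zlib's DEFLATE (LZ77 plus Huffman coding) as a stand-in for the paper's unstructured LZ77 coder; the 10% bias level is an assumed illustration, not a figure from the paper.

import zlib
import numpy as np

def packed_ratio(bits):
    # Pack the 0/1 stream into bytes, then compress with DEFLATE
    # (LZ77 + Huffman) as a stand-in for an unstructured LZ77 coder.
    raw = np.packbits(bits).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
biased = (rng.random(800_000) < 0.1).astype(np.uint8)   # ~10% ones, like a skewed SCT code stream
balanced = rng.integers(0, 2, 800_000, dtype=np.uint8)  # ~50% ones
print(f"biased:   {packed_ratio(biased):.2f}")   # well below 1.0
print(f"balanced: {packed_ratio(balanced):.2f}") # ~1.0: effectively incompressible

On a typical run the biased stream compresses to a fraction of its packed size while the balanced stream does not shrink at all, mirroring the paper's point that SCT's skewed code distributions are what make the post-entropy-coding gains possible.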

Conclusion

This work has introduced a training method that allows neural-network-based compression systems to adaptively vary the number of symbols transmitted for different parts of the compressed image, based on the local underlying content. The method allows the networks to remain fully convolutional, thereby avoiding the blocking artifacts that often accompany block-based approaches at low bit rates. The authors show promising results with this approach on a recursive autoencoder structure.
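A minimal sketch of the per-location trimming idea, assuming a code tensor of shape (steps, H, W, bits-per-step) and the convention that an all-zeros symbol vector acts as the stop code (the paper's actual stop-code definition and masking schedule are more involved): once a location emits the stop code, its later recursion steps are simply not transmitted, which is where the pre-entropy-coding savings come from.

import numpy as np

# Hypothetical layout: codes[t, h, w, b] holds the b-th binary symbol emitted
# at recursion step t for spatial location (h, w). An all-zeros symbol vector
# is *assumed* here to act as the stop code.
def trim_stop_codes(codes):
    T, H, W, B = codes.shape
    stopped = np.zeros((H, W), dtype=bool)
    kept = []
    for t in range(T):
        step = codes[t]
        # Transmit only locations that have not stopped yet.
        kept.append(step[~stopped].ravel())
        # A location stops once it emits the all-zeros code.
        stopped |= np.all(step == 0, axis=-1)
    return np.concatenate(kept)

rng = np.random.default_rng(0)
# Bits biased toward zero so stop codes actually occur in this toy example.
codes = (rng.random((8, 16, 16, 4)) < 0.3).astype(np.uint8)
bits = trim_stop_codes(codes)
print(f"{bits.size} of {codes.size} bits transmitted")

A real decoder would rebuild the `stopped` mask incrementally from the bits it has already received, so no side information about the trimmed layout needs to be sent.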