
Open Source meets Super Resolution, part 1


Marcus Edel
September 21, 2020


Despite their great upscaling performance, deep learning backed Super-Resolution methods cannot be easily applied to real-world applications due to their heavy computational requirements. At Collabora we have addressed this issue by introducing an accurate and light-weight deep network for video super-resolution, running on a completely open source software stack using Panfrost, the free and open-source graphics driver for Mali GPUs. Here's an overview of Super Resolution, its purpose for image and video upscaling, and how our model came about.

Internet streaming has experienced tremendous growth in the past few years, and continues to advance at a rapid pace. Streaming now accounts for over 60% of internet traffic and is expected to quadruple over the next five years.

Video delivery quality depends critically on available network bandwidth. Due to bandwidth limitations, most video sources are compressed, resulting in image artifacts, noise, and blur. Quality is also degraded by routine image upscaling, which is required to match the very high pixel density of newer mobile devices.

The upscaling community has provided us with many fundamental advances in video and image upscaling, such as the classic Nearest-Neighbor, Linear and Lanczos resampling methods. However, no fundamentally new methods have been introduced in over 20 years. Also, traditional algorithm-based upscaling methods lack fine detail and cannot remove defects and compression artifacts.

All of this is changing thanks to the Deep Learning revolution. We now have a whole new class of techniques for state-of-the-art upscaling, called Deep Learning Super Resolution (DLSR).


Super Resolution

An image's resolution may be reduced due to lower spatial resolution (for example to reduce bandwidth) or due to image quality degradation such as blurring.

Super-resolution (SR) is a technique for constructing a high-resolution (HR) image from a collection of observed low-resolution (LR) images. SR increases high frequency components and removes compression artifacts.

The HR and LR images are related via the equation:

LR = degradation(HR).

By applying the degradation function, we obtain the LR image from the HR image. If we know the degradation function in advance, we can apply its inverse to the LR image to recover the HR image. Unfortunately we usually do not know the degradation function beforehand. The problem is thus ill-posed, and the quality of the SR result is limited.
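
To make the relationship concrete, here is a minimal sketch of one possible degradation function in Python. The blur radius, the bicubic downsampling and the file name are illustrative assumptions only, since the true degradation of real footage is unknown in advance:

```python
from PIL import Image, ImageFilter

def degrade(hr_image: Image.Image, scale: int = 4) -> Image.Image:
    """Toy degradation: blur the HR image, then downsample by `scale`.
    Real-world degradations (sensor noise, compression artifacts) are unknown in advance."""
    blurred = hr_image.filter(ImageFilter.GaussianBlur(radius=1.5))
    w, h = blurred.size
    return blurred.resize((w // scale, h // scale), Image.BICUBIC)

hr = Image.open("frame.png")      # hypothetical high-resolution frame
lr = degrade(hr, scale=4)         # LR = degradation(HR)
```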

DLSR solves this problem by learning image prior information from HR and/or LR example images, thereby improving the quality of the LR to HR transformation.

The key to DLSR success is the recent rapid development of deep convolutional neural networks (CNNs). Recent years have witnessed dramatic improvements in the design and training of the CNN models used for Super-Resolution.

Upscaling

Upscaling can be achieved using different techniques, such as the aforementioned Nearest-Neighbor, Linear and Lanczos resampling methods. The group of images below demonstrates these different options.

First, the lower resolution input image to be upscaled:

(Photo by Jon Tyson on Unsplash)

Then, the various methods can be applied. Click on the image below to get a closer look at each result, as well as the original image before it was downscaled.

  1. The input image is upscaled by Nearest-Neighbour interpolation.
  2. The input image is upscaled by Bi-linear interpolation (the most commonly used method).
  3. The input image is upscaled by Lanczos' interpolation (one of the best standard methods).
  4. The input image is upscaled and improved by our Deep Learning Super Resolution model.
  5. The target image or ground truth, which was downscaled to create the lower resolution input.

The objective is to improve the quality of the LR image to approach the quality of the target, known as the ground truth. In this case, the ground truth is the original image which was downscaled to create the low-resolution image.

Deep Learning Super Resolution

The standard approach to Super-Resolution using Deep Learning or Convolutional Neural Networks (CNNs) is fully supervised: a low-resolution image is processed by a network comprising convolutional and up-sampling layers to produce a high-resolution image. This generated HR image is then matched against the original HR image using an appropriate loss function. This approach is commonly known as the "paired setting", as it uses pairs of LR and corresponding HR images for training.
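
As a rough illustration of this paired setting (a minimal sketch, not our actual network architecture), a tiny PyTorch model with a few convolutions, a pixel-shuffle up-sampling step and an L1 loss might look like this:

```python
import torch
import torch.nn as nn

class TinySRNet(nn.Module):
    """Illustrative SR network: a few convolutions plus x4 pixel-shuffle up-sampling."""
    def __init__(self, scale: int = 4, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a x4 larger image
        )

    def forward(self, lr):
        return self.upsample(self.body(lr))

model = TinySRNet()
loss_fn = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

lr_batch = torch.rand(8, 3, 64, 64)     # low-resolution crops (toy data)
hr_batch = torch.rand(8, 3, 256, 256)   # matching ground-truth crops

optimizer.zero_grad()
sr_batch = model(lr_batch)              # generated HR images
loss = loss_fn(sr_batch, hr_batch)      # match against the original HR images
loss.backward()
optimizer.step()
```

Real SR networks are of course much deeper, and the loss is usually a combination of pixel-wise, perceptual and adversarial terms.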

More recently, following the introduction of generative adversarial networks (GANs), these have become one of the most widely used machine-learning architectures for Super-Resolution.

In generative adversarial networks, two networks train and compete against each other, resulting in mutual learning. The first network, called the generator, produces high-resolution images and tries to fool the second network, the discriminator, into accepting these as true high-quality images. The discriminator predicts whether an input is a real high-quality image (similar to the training set) or a fake, badly upscaled one.

The technical details are considerably more complex, but they follow these general principles.
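
To give a flavour of the adversarial setup (again just a sketch with a hypothetical discriminator, reusing the tiny generator, loss and tensors from the previous example), the two competing losses look roughly like this:

```python
import torch
import torch.nn as nn

# Hypothetical discriminator: predicts whether an HR image is real or generated.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)
adv_loss = nn.BCEWithLogitsLoss()

real = torch.ones(8, 1)   # labels for genuine high-quality images
fake = torch.zeros(8, 1)  # labels for generated (upscaled) images

# Discriminator step: learn to tell ground-truth HR frames from upscaled ones.
d_loss = (adv_loss(discriminator(hr_batch), real)
          + adv_loss(discriminator(sr_batch.detach()), fake))

# Generator step: try to fool the discriminator into accepting the upscaled image,
# usually combined with a pixel-wise term such as the L1 loss from above.
g_loss = adv_loss(discriminator(sr_batch), real) + loss_fn(sr_batch, hr_batch)
```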

Examples

The following shows different examples of X4 upsampling using our trained Deep Learning Super Resolution model. You can click on each image to view its original size. We also list the output for Nearest Neighbour, Bi-linear and Lanczos' interpolation for comparison.

1. Food

The model adds details to the vegetables, the plates and the background. Input, Nearest Neighbour, Bi-linear, Lanczos, Original.

Lower resolution input.
  
X4 upscaled image using Super Resolution model.

2. Hotel

The model adds details to the sky and the signs. The hotel sign is not 100% accurate, but compared with the other upscaling methods it is a huge improvement. Input, Nearest Neighbour, Bi-linear, Lanczos, Original.

Lower resolution input.
  
X4 upscaled image using Super Resolution model.

3. Woman

The model was able to add even fine details to the hair and cleared up the overall image. Input, Nearest Neighbour, Bi-linear, Lanczos, Original.

Lower resolution input.
  
X4 upscaled image using Super Resolution model.

4. Man

Due to the complex lighting, the output is not as sharp as in the previous examples. Still, the model was able to bring back details to the shirt and face. Input, Nearest Neighbour, Bi-linear, Lanczos, Original.

Lower resolution input.
  
X4 upscaled image using Super Resolution model.

5. Animation (Big Buck Bunny)

Since the model was trained on animation videos as well, it works on various types of content. However, in our experiments a model trained on a specific content type showed even better results. Input, Nearest Neighbour, Bi-linear, Lanczos, Original.

Lower resolution input.
  
X4 upscaled image using Super Resolution model.

6. Animation (Sintel)

Another animation example. Compared with the other upscaling methods, our Super-Resolution model was able to add details to the background and to objects in the foreground. Input, Nearest Neighbour, Bi-linear, Lanczos, Original.

Lower resolution input.
  
X4 upscaled image using Super Resolution model.


For more examples: https://medel.pages.collabora.com/super-resolution-examples/.

Dataset

Super Resolution is one of the areas where we can fortunately rely on an almost infinite supply of data (high-quality images and videos) which we can use to create a training set. By down-sampling the high-quality images we can create low resolution and high-resolution image pairs needed to train our model.

The low-resolution image starts as a copy of the ground truth image at half the dimensions. It is then upscaled using a bi-linear transformation so that its dimensions match the target image and it is ready to serve as input for our model.
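
As a minimal sketch of this pair creation (using Pillow; the file path is a hypothetical placeholder and the real pipeline is more involved):

```python
from PIL import Image

def make_training_pair(path: str):
    """Create a (model input, ground truth) pair: downscale the ground truth to half
    its dimensions, then upscale it again with a bi-linear transformation so the
    input matches the target dimensions."""
    hr = Image.open(path).convert("RGB")                 # ground truth
    w, h = hr.size
    lr = hr.resize((w // 2, h // 2), Image.BILINEAR)     # half-size copy
    model_input = lr.resize((w, h), Image.BILINEAR)      # bi-linearly upscaled input
    return model_input, hr

lr_input, target = make_training_pair("extracted_frame.png")   # hypothetical path
```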

To make the model robust against different forms of image degradation and to better generalize, the dataset can be further augmented (see the sketch after this list) by:

  • Randomly reducing the quality of the image within bounds
  • Taking random crops
  • Flipping the image horizontally
  • Adjusting the lighting of the image
  • Adding perspective warping
  • Randomly adding noise
  • Randomly punching small holes into the image
  • Randomly adding overlaid text or symbols
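
Here is a minimal sketch of a few of these augmentations (random crop, horizontal flip, lighting adjustment, hole punching) using Pillow; the probabilities and parameters are illustrative only:

```python
import random
from PIL import Image, ImageDraw, ImageEnhance

def augment(img: Image.Image) -> Image.Image:
    """Apply a few of the augmentations listed above; probabilities and parameters
    are illustrative only."""
    # Take a random crop at 80% of the original size.
    w, h = img.size
    cw, ch = int(w * 0.8), int(h * 0.8)
    x, y = random.randint(0, w - cw), random.randint(0, h - ch)
    img = img.crop((x, y, x + cw, y + ch))

    # Flip the image horizontally.
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)

    # Adjust the lighting.
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))

    # Randomly punch a small hole into the image.
    if random.random() < 0.3:
        draw = ImageDraw.Draw(img)
        hx, hy = random.randint(0, cw - 16), random.randint(0, ch - 16)
        draw.rectangle([hx, hy, hx + 16, hy + 16], fill=(0, 0, 0))

    return img
```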

There are also several datasets available which can be used for training, such as Diverse 2K (DIV2K), which contains 800 2K-resolution images, as well as the Flickr2K and OutdoorSceneTraining (OST) datasets.

In our case we trained the model on images extracted from videos released under the Creative Commons license, such as Sintel, Elephants Dream, Spring and Arduino the Documentary.

Quality

One big question we need to answer is how to quantitatively evaluate the performance of our model.

Simply comparing video resolution doesn't reveal much about quality. In fact, it may be completely misleading. A 1080p movie of 500MB may look worse than a 720p movie at 500MB, because the former's bitrate may be too low, introducing various kinds of compression artifacts.

The same goes for comparing bitrates at similar frame sizes, as different encoders can deliver better quality at lower bitrates, or vice-versa. For example, a 720p 500MB video produced with XviD will look worse than a 500MB video produced with x264, because the latter is much more efficient.

To solve the problem, over the past decade several methods have been introduced, commonly classified as either full-reference, reduced-reference, or no-reference based on the amount of information they assess from a reference image of ostensibly pristine quality.

Video quality has traditionally been measured using either PSNR (peak signal-to-noise ratio) or SSIM (Structural Similarity Index Measure). However, PSNR doesn’t take human perception into account; it simply relates the peak signal value to the mean squared error between the original clean signal and the compressed signal. SSIM does consider human perception, but was originally developed to analyze static images and doesn’t allow for human perception over time, although more recent versions of SSIM have started to address this issue.
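
For reference, here is a minimal sketch of how both metrics can be computed for a pair of frames, using NumPy for PSNR and scikit-image for SSIM; the toy grayscale frames are only placeholders:

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: relates the peak value to the mean squared error."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 8-bit grayscale frames standing in for a reference and an upscaled result.
reference = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
distorted = np.clip(reference + np.random.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)

print("PSNR:", psnr(reference, distorted))
print("SSIM:", structural_similarity(reference, distorted, data_range=255))
```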

With the rapid development of machine learning, important data-driven models have begun to emerge. One such is Netflix’s Video Multi-method Assessment Fusion (VMAF). VMAF combines multiple quality features to train a Support Vector Regressor to predict subjective judgments of video quality.

At Collabora, we use a combination of SSIM and VMAF to train and test our Deep Learning Super-Resolution models. SSIM is fast to calculate and serves as a basic indicator for how the model is performing. VMAF, on the other hand, delivers more accurate results, which are usually missed by traditional methods.

Performance

Despite their great upscaling performance, deep learning backed Super-Resolution methods cannot be easily applied to real-world applications due to their heavy computational requirements. At Collabora we have addressed this issue by introducing an accurate and light-weight deep network for video super-resolution.

To achieve a good tradeoff between computational complexity and reproduction quality, we implemented a cascading mechanism on top of a standard network architecture, producing a light-weight solution. We also used a multi-tile approach in which we divide a large input into smaller tiles to better utilize memory bandwidth and overcome size constraints posed by certain frameworks and devices. Multi-tile significantly improves inference speed. This approach can be extended from single image SR to video SR where video frames are treated as a group of multiple tiles.
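
As a rough sketch of the multi-tile idea (not our production code; it assumes the input dimensions are multiples of the tile size and ignores the tile overlap needed to hide seams), the inference loop might look like this:

```python
import torch

def upscale_tiled(model, lr: torch.Tensor, tile: int = 128, scale: int = 4) -> torch.Tensor:
    """Run `model` tile by tile over an LR tensor of shape (1, C, H, W).
    Assumes H and W are multiples of `tile`; production code would also overlap
    tiles slightly to hide seams between them."""
    _, c, h, w = lr.shape
    out = torch.zeros(1, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = lr[:, :, y:y + tile, x:x + tile]
            with torch.no_grad():
                sr_patch = model(patch)
            out[:, :, y * scale:(y + tile) * scale,
                      x * scale:(x + tile) * scale] = sr_patch
    return out
```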

We designed our solution on top of the open-source Panfrost graphics driver, allowing us to offload compute to the GPU.

Coming up in Part 2 of this series, we'll take a deep dive into how our model works, and how you can use free, open source software to achieve a higher level of compression than existing video compression methods. Stay tuned!

Update (Sept. 24):

By popular demand, the code to train your own model and to reproduce the results from the blog-post can be found here: https://gitlab.collabora.com/medel/super-resolution.

Due to licensing issues (a large number of images used have a research license attached to them), we can't release the pre-trained model for the second stage of the Super-Resolution method at this point. However, we are currently re-training the model to solve the issue, and will be making the updated model checkpoint available soon!

Comments (2)

  1. Giorgio B.:
    Sep 22, 2020 at 10:09 AM

    Hi, what happened to the hotel image? Did the point of view also change? In the low-resolution image the hotel leans to the left, in the high-resolution one it leans to the right.


    1. Marcus Edel:
      Sep 22, 2020 at 08:10 PM

      Hello, great question, in this case, it's actually an optical illusion. I created a simple example page that allows us to compare the two images - https://medel.pages.collabora.com/super-resolution-examples/compare.html


