Korean academics claim breakthrough in Face Super-Resolution

How much information can hide in just 256 pixels? Turns out, quite a lot

by Max Smolaks 2 August 2019

A team of researchers from the Korea Advanced Institute of Science and Technology (KAIST) says it has developed an algorithm that can reconstruct detailed portraits of human faces from images measuring just 16 by 16 pixels.

The algorithm is the latest in a long line of experiments in Face Super-Resolution, also known as “face hallucination,” a subfield of Super-Resolution concerned with reconstructing realistic face images from tiny, low-resolution inputs.

The KAIST team claims its take on Face Super-Resolution is more accurate than any of the current methods, thanks to a combination of progressive training (which is not normally employed for SR) and clever use of facial landmark heatmaps that predict where certain parts of a face are usually found.
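A landmark heatmap is simply an image-sized map whose values peak where a facial feature, such as an eye corner or the tip of the nose, is expected to sit. As a rough illustration only (the coordinates and Gaussian spread below are made up for the example, and this is not the network the authors use), a single landmark can be encoded like this:

```python
import numpy as np

def landmark_heatmap(h, w, cx, cy, sigma=2.0):
    """Build a 2D Gaussian heatmap peaking at a facial landmark (cx, cy).

    Values near 1.0 mark pixels where the landmark (e.g. an eye corner)
    is expected to be; values fall off towards 0 everywhere else.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# Example: a 128x128 heatmap highlighting a hypothetical left-eye landmark.
eye_map = landmark_heatmap(128, 128, cx=44, cy=52)
```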

“We propose a novel face SR method that generates photo-realistic 8x super-resolved face images with fully retained facial details,” the KAIST team said in a paper published on arXiv. “To that end, we adopt a progressive training method, which allows stable training by splitting the network into successive steps, each producing output with a progressively higher resolution.

“We also propose a novel facial attention loss and apply it at each step to focus on restoring facial attributes in greater details by multiplying the pixel difference and heatmap values.

“Lastly, we propose a compressed version of the state-of-the-art face alignment network (FAN) for landmark heatmap extraction. With the proposed FAN, we can extract the heatmaps suitable for face SR and also reduce the overall training time.”
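The progressive training the paper describes can be pictured as a generator that upscales the 16-by-16 input in successive 2x stages, each stage with its own output layer so it can be trained before the next one is switched on. The PyTorch sketch below is only an illustration of that idea under those assumptions; the layer sizes and structure are placeholders, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ProgressiveSRGenerator(nn.Module):
    """Illustrative sketch of progressive face SR: a 16x16 input is upscaled
    in three 2x steps (32, 64, 128), each step producing its own RGB output
    so earlier stages can be trained before later ones are enabled.
    Layer sizes are placeholders, not the network from the paper."""

    def __init__(self, channels=64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.steps = nn.ModuleList([
            nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(3)  # 16 -> 32 -> 64 -> 128
        ])
        # One "to RGB" head per resolution, so each step has a trainable output.
        self.to_rgb = nn.ModuleList([nn.Conv2d(channels, 3, 1) for _ in range(3)])

    def forward(self, lr_image, active_steps=3):
        x = self.stem(lr_image)
        out = None
        for i in range(active_steps):   # train with 1, then 2, then 3 steps
            x = self.steps[i](x)
            out = self.to_rgb[i](x)     # intermediate SR output at this scale
        return out

# A 16x16 input, trained first at 32x32, later at 64x64, finally at 128x128.
sr = ProgressiveSRGenerator()(torch.randn(1, 3, 16, 16), active_steps=3)
```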
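The facial attention loss, as quoted above, weights the per-pixel reconstruction error by the landmark heatmap values, so mistakes around the eyes, nose and mouth cost more than mistakes in flat skin or background. A minimal sketch of that idea, assuming heatmaps already produced by an alignment network such as the compressed FAN the authors mention (the exact weighting and normalisation in the paper may differ):

```python
import torch

def facial_attention_loss(sr, hr, heatmaps):
    """Attention-weighted loss in the spirit of the quoted description:
    the per-pixel difference between the super-resolved image `sr` and the
    ground truth `hr` is multiplied by landmark heatmap values, so errors
    around facial landmarks are penalised more heavily."""
    attention = heatmaps.sum(dim=1, keepdim=True)   # merge K landmark maps
    return (attention * (sr - hr).abs()).mean()

# sr, hr: (batch, 3, 128, 128); heatmaps: (batch, K, 128, 128) produced by
# a face alignment network.
```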

The new algorithm has already been repurposed to generate human faces from Twitter and Twitch emoticons, with terrifying results; these include a pizza garnished with human lips instead of pepperoni.

Story via @hackermaderas