Google can now reconstruct images from a pixelated source

It's not perfect, but it's pretty impressive!

The basic idea is that neural networks have learned what lots of images look like, and what those images look like when they're downsized, so they can make "educated" guesses about what a downsized image looked like before it was downsized.
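To make that idea concrete, here's a toy sketch (my own illustration, not Google Brain's actual model, which is a deep pixel-recursive network): we build (high-res, downsized) training pairs from a synthetic image, fit a simple least-squares map from each low-res pixel's 3×3 neighborhood to the 2×2 high-res block it came from, and compare the learned reconstruction against naive nearest-neighbor upscaling.

```python
import numpy as np

# Toy illustration of learned super-resolution: train on
# (high-res, downsized) pairs, then guess high-res detail from low-res
# input. The "model" here is a linear least-squares map from each
# low-res pixel's 3x3 neighborhood to the 2x2 high-res block it covers.

rng = np.random.default_rng(0)

def make_smooth_image(n=64):
    """Generate a smooth synthetic grayscale image."""
    img = np.cumsum(np.cumsum(rng.standard_normal((n, n)), axis=0), axis=1)
    return (img - img.mean()) / img.std()

def downsample(img):
    """2x downsampling by averaging each 2x2 block."""
    n = img.shape[0]
    return img.reshape(n // 2, 2, n // 2, 2).mean(axis=(1, 3))

def neighborhoods(low):
    """3x3 neighborhood features (plus a bias term) per low-res pixel."""
    padded = np.pad(low, 1, mode="edge")
    feats = [padded[i:i + low.shape[0], j:j + low.shape[1]].ravel()
             for i in range(3) for j in range(3)]
    X = np.stack(feats, axis=1)
    return np.hstack([X, np.ones((X.shape[0], 1))])  # bias column

hi = make_smooth_image()
low = downsample(hi)
n = hi.shape[0]

# Targets: the 2x2 high-res block under each low-res pixel.
Y = hi.reshape(n // 2, 2, n // 2, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
X = neighborhoods(low)

# "Training": least-squares fit of the low-res -> high-res patch map.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Reconstruct a high-res guess; compare to nearest-neighbor upscaling.
pred = (X @ W).reshape(n // 2, n // 2, 2, 2)
pred = pred.transpose(0, 2, 1, 3).reshape(n, n)
nn = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

mse_learned = np.mean((pred - hi) ** 2)
mse_nn = np.mean((nn - hi) ** 2)
print(f"learned MSE: {mse_learned:.5f}  nearest-neighbor MSE: {mse_nn:.5f}")
```

The learned map can never do worse than nearest-neighbor on the training data (copying the center pixel is one of the linear maps it can represent), and on smooth images it does noticeably better; the point the commenters circle around below still stands, though, because any detail beyond that comes from the prior, not from the pixels.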

Google Brain super-resolution image tech makes “zoom, enhance!” real
Google Brain creates new image details out of thin air.

  1. First you have to tell the software that the image is a face. Wonderful – we are teaching the computer how to be subjective, letting its preconception determine the way it perceives the world.

  2. +Mark Bothwell maybe it could turn into something beautiful and fulfilling one day, it doesn't have to be bad…

  3. +Rainheart77 yes, you are probably right, but forgive me for worrying where this may lead.

  4. On the right we have a 1990s video game. On the left we have the high-end graphics of today's Minecraft. 🙂 I'm skeptical about this technology. The information has to come from somewhere. For example, if I drop in Minecraft Steve's face, will it render the original picture? You know, the one that doesn't exist.

  5. So, when Archie on CSI is asked if there's any way to clean up a grainy securicam shot, he could say "well, I could run a reverse algorithmic" and mean it?