How AI can restore our forgotten past

Nvidia's deep image reconstruction

AI will wipe out 70% of jobs and possibly end the human species: these are two recurring themes in media coverage of artificial intelligence.

That's scary, but AI also plays an important role in mapping our past and present. Combine machine learning, neural networks and a barrel full of data, and you can achieve amazing results. Today you can restore photos; one day we may be able to use AI to map the past in VR.

Here we look at current AI research that explores a number of promising, and much less threatening, applications for the technology.

Restore images

Tears and creases in historical photos? Scratched-out eyes in a photo of your ex from 10 years ago? AI can fix all that.

Several AI and machine learning projects in the works can take a photo that is noisy, ripped or blurry and make it pristine again, using restoration algorithms that achieve far more than you could with a photo editor, a felt-tip pen and some Tipp-Ex.

Deep Image Prior, a neural network created by a research team from the University of Oxford, and Nvidia's image reconstruction, demonstrated in April this year, show how AI can digitally restore a partially erased image.

Nvidia's deep image reconstruction method gets pretty close to the undistorted original

The Nvidia process involves training the AI on chunks of images taken from ImageNet, Places2 and CelebA-HQ, huge repositories of images of almost every kind of common object.

Just as you might learn to draw by sketching real objects, the image reconstruction algorithms are trained by redrawing pieces of missing image data in these photos, then referring back to the 'full' original photo to see how accurate the attempt was.
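
To make that training recipe concrete, here is a minimal sketch of the idea in PyTorch. It is only an illustration of the mask-and-compare loop, not Nvidia's actual code: random tensors stand in for crops from those image libraries, and a tiny convolutional network stands in for Nvidia's far larger partial-convolution model.

```python
# Minimal sketch of the masked-reconstruction training idea described above.
# Assumptions: random tensors stand in for ImageNet/Places2/CelebA-HQ crops,
# and a tiny conv net stands in for Nvidia's real inpainting network.
import torch
import torch.nn as nn

class TinyInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 colour channels + 1 mask channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, mask):
        # The network sees the damaged image plus a mask marking where the holes are.
        return self.net(torch.cat([image * mask, mask], dim=1))

model = TinyInpainter()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    photos = torch.rand(8, 3, 64, 64)                    # stand-in batch of training photos
    mask = (torch.rand(8, 1, 64, 64) > 0.25).float()     # 1 = keep pixel, 0 = "erased"
    restored = model(photos, mask)
    # Compare the redrawn pixels with the untouched original to score the attempt.
    loss = ((restored - photos) ** 2).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```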

Be picturesque

These algorithms use some of the same techniques as an oil painting restorer. We are talking about 'in-painting': cracks and other damage in a painting are distinguished from deliberate texture or brush strokes, and then filled in to make the painting look as it did when it was first painted.

Restorers use X-rays, which reveal the different layers of paint, to do this manually. AI replicates part of the effect with machine learning.

No one is about to let robots with paintbrushes loose on old masters, but AI could also be used to reveal what a painting looked like hundreds of years ago, without the need to spend tens of thousands of dollars on restoration.

Look past the fireworks up front; it's the pattern recognition behind the scenes that makes Nvidia's AI work like magic

Think of a famous painting. It is probably hanging in a grand art gallery behind a sheet of glass. Perhaps it isn't even the original on display, because the curators know how many grubby-fingered kids wander around during the school holidays.

Even the well-maintained paintings you see in famous art galleries are affected by age. Thick layers of varnish on paintings from the 1500s will have dulled them over the years. Van Gogh was too broke to pay for decent oils, so the reds are gradually fading from many of his works and the yellows are turning brown.

What about an old master in a cellar, or a painting so old that it is more canvas than paint? A poor restoration can turn an old masterpiece into something that looks like it was made by an eight-year-old, like the Ecce Homo infamously botched by an 81-year-old amateur. AI could help avoid this.

Detecting forgeries

The big names in AI have wisely steered clear of suggesting they can improve a painting worth millions. However, artificial intelligence is already being used to make sure we aren't sold a fake version of art history.

In 2017, researchers from Rutgers University in New Jersey published a paper describing software that can tell a forgery from an authentic painting. It claims to be able to do the job better than the professionals.

Line drawings by Pablo Picasso, Henri Matisse and Egon Schiele were analyzed at stroke level, and a stylistic fingerprint was formulated for each artist. The authors claim the system detects fakes with 100% accuracy in most settings.
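
As a rough illustration of the fingerprint idea, and emphatically not the Rutgers team's actual method, here is a sketch that summarizes each drawing as a small vector of invented stroke statistics and trains a classifier on them. A suspect drawing whose statistics sit away from the claimed artist's cluster gets a low probability for that artist.

```python
# Hedged sketch of a per-artist "stylistic fingerprint" classifier.
# The stroke features and their values are entirely made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def stroke_features(n_drawings, centre):
    # Stand-in features: e.g. mean stroke length, curvature, pressure variation, taper.
    return centre + 0.1 * rng.standard_normal((n_drawings, 4))

# Fake fingerprints for three artists; real work would extract these from scanned drawings.
X = np.vstack([stroke_features(50, c) for c in ([0.2, 0.8, 0.5, 0.3],
                                                [0.6, 0.4, 0.7, 0.9],
                                                [0.9, 0.1, 0.2, 0.6])])
y = np.repeat(["Picasso", "Matisse", "Schiele"], 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A forgery's stroke statistics should not match the claimed artist's cluster,
# so the predicted probability for that artist will be low.
suspect = stroke_features(1, [0.5, 0.5, 0.5, 0.5])
print(dict(zip(clf.classes_, clf.predict_proba(suspect)[0])))
```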

This is one way AI can get involved in the art world without letting robot retouchers loose on paintings worth tens of millions.

Training dinosaurs

In uncovering the truth, AI sometimes bursts a few bubbles. Among other things, it tells us that the Tyrannosaurus Rex we know from Jurassic Park bears little resemblance to what the dinosaur actually looked like.

One current theory is that such large dinosaurs were feathered rather than scaly, and an AI model from the University of Manchester now suggests that a T-Rex couldn't have chased down a jeep either.

Hollywood lies to us all the time, but finding out that a Tyrannosaurus Rex couldn't really have chased down a jeep still hurts. Credit: University of Manchester

The researchers mapped the bone and muscle structure of the T-Rex, then used machine learning to work out how quickly the creature could get from point A to point B without breaking any bones.

The findings? A T-Rex was so big and heavy that it could probably only walk, not run. Sprinting after fleeing children and scientists in search of a meal would simply have put too much stress on its body.

Microsoft historical site mapping

The cameras deliver the texture, and the AI helps with the 3D mapping of this Palmyra site

Iconem is a startup that specializes in 'heritage activism': recreating historical sites threatened by war, or simply by time, in 3D, powered by Microsoft AI. It creates photorealistic recreations of places such as the Alamut fortress in Iran and the royal cemetery of El-Kurru in Sudan.

The artificial intelligence element comes in the way the 3D models are constructed. Iconem uses photogrammetry, which makes 3D modeling of objects from 'flat' photos possible.
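
Photogrammetry at its simplest works by matching the same physical points across overlapping photos and triangulating them into 3D. Here is a minimal sketch using OpenCV, assuming two overlapping photos of a site (the file names and camera intrinsics below are made up); Iconem's pipeline uses thousands of drone images and far more sophisticated reconstruction.

```python
# Bare-bones two-view photogrammetry: match features, recover pose, triangulate.
import cv2
import numpy as np

img1 = cv2.imread("site_view_1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("site_view_2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match features that appear in both photos.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the relative camera pose, then triangulate matched points into 3D.
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # assumed intrinsics
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (points_h[:3] / points_h[3]).T   # sparse 3D point cloud of the scene
print(cloud.shape)
```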

The team took 50,000 photos of the ancient city of Palmyra, which was occupied by Islamic State fighters in 2015. They used drones to avoid landmines, capturing digital pictures not only of what was left of the site, but also of the damage caused by the occupying forces.

Isis returned to Palmyra after Iconem had mapped the site and destroyed part of its Roman theater in 2017. There is a vital urgency to this work; it is easy to think of historic sites as frozen in time, sure to exist as they are forever, but Iconem's work shows that this is not the case.

You can view some 3D renderings of Palmyra on YouTube as part of a collaboration with Google Arts & Culture. If Iconem's work isn't a perfect fit for VR, nothing is.

Bring the past to life

VR and AI are better friends than you might think. Google's DeepMind AI lab has designed a neural network that can construct a 3D environment from just a single image. It extrapolates, or 'imagines', the 3D scene based on its recognition of objects and their most likely shapes. The more images it has to work with, the more faithful a replica of the real environment it can make.

In its demonstration, DeepMind's AI builds a 3D maze from a handful of flat images. The AI part is what distinguishes nearby surfaces from distant ones, something we take for granted when recognizing scenes.
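
Here is a loose sketch of how such a system is wired, in the spirit of DeepMind's Generative Query Network rather than its actual architecture: encode a few (image, camera viewpoint) pairs, pool them into a single scene representation, then ask a decoder to render the scene from a viewpoint it has never seen. Everything below, including the tiny networks and fake data, is an assumption made for illustration.

```python
# Loose sketch of the neural scene representation idea (not DeepMind's real model).
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
                                  nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Linear(32 + 7, 64)   # 7 numbers for camera position + orientation

    def forward(self, image, viewpoint):
        return self.fc(torch.cat([self.conv(image), viewpoint], dim=1))

class ViewDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(64 + 7, 16 * 8 * 8)
        self.deconv = nn.Sequential(nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
                                    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, scene, query_viewpoint):
        h = self.fc(torch.cat([scene, query_viewpoint], dim=1)).view(-1, 16, 8, 8)
        return self.deconv(h)

encoder, decoder = SceneEncoder(), ViewDecoder()

# Three observed views of a (fake) scene, each tagged with its camera viewpoint.
images = torch.rand(3, 3, 32, 32)
viewpoints = torch.rand(3, 7)
scene = encoder(images, viewpoints).sum(dim=0, keepdim=True)   # pool into one scene code

# "Imagine" what the scene looks like from a viewpoint the network never saw.
novel_view = decoder(scene, torch.rand(1, 7))
print(novel_view.shape)   # torch.Size([1, 3, 32, 32])
```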

Google's DeepMind AI can map a maze faster than you can

This technology offers a dreamy vision of the future. Imagine being able to recreate the house you grew up in, in VR, using photos from an old photo album. Or crash your parents' wedding as if you were a time traveler.

The DeepMind researchers described this project in detail in a 2018 issue of Science. The 3D maze the AI made had a visual quality roughly matching that of the input images; it looks pretty rough.

Imagine being able to recreate the house you grew up in, or crashing your parents' wedding like you're a time traveler.

However, it doesn't take a big leap to imagine how it could be improved with AI. Let's go back to the idea of recreating your parents' wedding day. Their faces are blurry, just a small part of a scan of an old piece of 35mm film. But there are hundreds of photos of them in the cloud, uploaded over the years, that could be used to sharpen the picture, even though most were taken decades later.

Their wedding car is an Austin Healey, which the AI recognizes and replaces with a high-polygon rendering of the same model. Flat cobblestone textures are swapped for photorealistic ones, and the AI recognizes the church in the background. Not only is there a Google wireframe mesh of the building, the AI can also pull in thousands of photos uploaded near the same location to map the surroundings.

It is the perfect storm of machine learning and big data. And hey presto: a holodeck for your half-forgotten memories of the past. Are we there yet? Of course not, but it makes for a satisfying daydream with some technology behind it.

TechRadar's Next series is brought to you in association with Honor