Intel uses machine learning to make GTA V look incredibly, alarmingly realistic


One of the more impressive aspects of Grand Theft Auto V is how closely the game’s San Andreas resembles real Los Angeles and Southern California, but a new machine learning project from Intel Labs called “Enhancing Photorealism Enhancement” could take that realism in a disturbingly photorealistic direction (via Gizmodo).

Running the game’s footage through the process created by researchers Stephan R. Richter, Hassan Abu Alhaija, and Vladlen Koltun produces a surprising result: a visual style that bears unmistakable similarities to the kinds of photos you’d casually take through your car’s smudged windshield. You have to see it in motion to really appreciate it, but the combination of slightly blurred lighting, smoother pavement, and believably reflective cars sells the illusion that you’re looking at a real street from a real dashboard, even though it’s all virtual.

The Intel researchers suggest some of that photorealism comes from the datasets they fed their neural network. The group offers a more thorough explanation of how the image enhancement actually works in their paper (PDF), but as I understand it, the Cityscapes dataset they used, which is largely made up of photos of German streets, filled in many of the details. The effect is weaker and shot from a different angle, but it almost captures what I imagine a smoother, more interactive version of scrolling through Google Maps’ Street View could be. It doesn’t quite behave like it’s real, but it looks like it’s made up of real things.

The researchers say their improvements go beyond what other photorealistic conversion processes are capable of by also integrating geometric information from GTA V itself. Those “G-buffers,” as the researchers call them, can contain data such as the distance between in-game objects and the camera, as well as surface qualities like the glossiness of cars.
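To make that idea concrete, here’s a minimal sketch of what a G-buffer might look like as a data structure. The field names, shapes, and the stacking step are illustrative assumptions for this article, not the paper’s actual format; the point is simply that per-pixel geometry and material data can be bundled alongside the rendered frame as extra input channels for a neural network.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class GBuffer:
    """Hypothetical per-pixel auxiliary buffers, loosely modeled on the
    kinds of data the researchers describe extracting from the game."""
    albedo: np.ndarray      # base surface color, shape (H, W, 3)
    depth: np.ndarray       # distance from camera to each pixel, (H, W)
    normals: np.ndarray     # surface orientation, (H, W, 3)
    glossiness: np.ndarray  # material shininess (e.g. car paint), (H, W)

    def as_network_input(self) -> np.ndarray:
        """Stack all channels so a convolutional network could consume
        them together with the rendered frame."""
        return np.concatenate(
            [
                self.albedo,
                self.depth[..., None],
                self.normals,
                self.glossiness[..., None],
            ],
            axis=-1,
        )


H, W = 4, 4
gb = GBuffer(
    albedo=np.zeros((H, W, 3)),
    depth=np.ones((H, W)),
    normals=np.zeros((H, W, 3)),
    glossiness=np.zeros((H, W)),
)
print(gb.as_network_input().shape)  # (4, 4, 8)
```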

Although you may not see an official “photorealism update” roll out for GTA V tomorrow, you may have already played a game or watched a video that benefits from a different kind of machine learning: AI upscaling. The process of using machine learning to blow up graphics to higher resolutions isn’t ubiquitous, but it can be seen in Nvidia’s Shield TV and in several mod projects aimed at upgrading the graphics of older games. In those cases, a neural network predicts the missing detail pixels of a lower-resolution game, movie, or TV show to achieve those higher resolutions.
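A rough sketch of the difference: a plain upscaler just stretches the pixels it has, while an AI upscaler adds detail a model predicts should be there. The `predict_detail` hook below is a hypothetical stand-in for such a trained model, which is far beyond a few lines of code.

```python
import numpy as np


def upscale_2x(frame: np.ndarray, predict_detail=None) -> np.ndarray:
    """Double a frame's resolution by repeating pixels, then optionally
    add high-frequency detail from a model. `predict_detail` is a
    placeholder for the trained network an AI upscaler would use."""
    # Plain pixel repetition: what a non-AI upscaler might do.
    up = frame.repeat(2, axis=0).repeat(2, axis=1)
    if predict_detail is not None:
        # An AI upscaler layers in detail the low-res frame never had.
        up = up + predict_detail(up)
    return up


low_res = np.arange(4, dtype=float).reshape(2, 2)
high_res = upscale_2x(low_res)
print(high_res.shape)  # (4, 4)
```

Without a model plugged in, this is just blocky pixel doubling; the prediction step is what separates AI upscaling from ordinary stretching.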

Photorealism probably shouldn’t be the only graphics goal for video games (artistry aside, it looks kinda creepy), but this Intel Labs project shows there’s probably just as much room to grow on the software side as there is in the raw GPU power of new consoles and gaming PCs.