Google's AI test creates stunning photos using Street View imagery
PanArmenian.Net - Saturday 15th July, 2017
Google's latest artificial intelligence experiment is taking in Street View imagery from Google Maps and transforming it into professional-grade photography through post-processing - all without a human touch, The Verge says.
Hui Fang, a software engineer on Google's Machine Perception team, says the project uses machine learning techniques to train a deep neural network to scan thousands of Street View images in California for shots with impressive landscape potential. The software then "mimics the workflow of a professional photographer" to turn that imagery into an aesthetically pleasing panorama.
The research, posted to the pre-print server arXiv earlier this week, is a great example of how AI systems can be trained to perform tasks that don't have a binary right-or-wrong answer and are instead more subjective, as in the fields of art and photography. Doing this kind of aesthetic training with software can be labor-intensive and time-consuming, as it has traditionally required labeled data sets. That means human beings have to manually pick out which lighting effects or saturation filters, for example, result in a more aesthetically pleasing photograph.
Fang and his team used a different method. They were able to train the neural network quickly and efficiently to identify what most would consider superior photographic elements using what's known as a generative adversarial network. This is a relatively new and promising technique in AI research that pits two neural networks against one another and uses the results to improve the overall system.
In other words, Google had one AI "photo editor" attempt to fix professional shots that had been randomly tampered with using an automated system that changed lighting and applied filters. Another model then tried to distinguish between the edited shot and the original professional image. The end result is software that understands generalized qualities of good and bad photographs, which allows it to then be trained to edit raw images to improve them.
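The adversarial setup described above can be sketched in miniature. The following is NOT Google's actual system: here a "photo" is reduced to a single brightness number, the "editor" (generator) learns one correction parameter, and the "critic" (discriminator) is a logistic classifier that tries to tell professional shots from edited ones. All names, numbers, and the weight-decay trick are illustrative assumptions.

```python
# Toy generative-adversarial sketch: an editor learns how much
# brightness to restore to randomly darkened "professional" photos.
import numpy as np

rng = np.random.default_rng(0)

def pro_batch(n):
    """'Professional' shots: brightness clustered around 0.7."""
    return rng.normal(0.7, 0.05, n)

def tamper(photos):
    """Random tampering: darken each shot by 0.2-0.4."""
    return photos - rng.uniform(0.2, 0.4, photos.shape)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.0, 0.0   # discriminator: logistic classifier on brightness
c = 0.0           # generator: a single learned brightness correction
lr_d, lr_g, decay = 0.05, 0.01, 0.99

for _ in range(5000):
    real = pro_batch(64)
    edited = tamper(pro_batch(64)) + c

    # Discriminator step: push real toward 1, edited toward 0
    s_r, s_f = sigmoid(w * real + b), sigmoid(w * edited + b)
    w -= lr_d * (np.mean((s_r - 1) * real) + np.mean(s_f * edited))
    b -= lr_d * (np.mean(s_r - 1) + np.mean(s_f))
    w *= decay  # mild weight decay keeps the two-player game stable

    # Generator step: adjust c so edited shots score as real
    s_f = sigmoid(w * (tamper(pro_batch(64)) + c) + b)
    c -= lr_g * np.mean((s_f - 1) * w)

# c should roughly recover the average darkening (about 0.3), even
# though the editor was never told how much each photo was darkened.
print(round(c, 2))
```

Neither network ever sees a labeled "good/bad" rating; the editor learns the correction purely from the critic's attempts to spot its output, which is what lets this approach sidestep the hand-labeled data sets mentioned above.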
To test whether the AI software was actually producing professional-grade images, Fang and his team used a "Turing-test-like experiment." They asked professional photographers to grade the photos their network produced on a quality scale, while mixing in shots taken by humans. Around two out of every five photos received a score on par with that of a semi-pro or pro, Fang says.
"The Street View panoramas served as a testing bed for our project," Fang says. "Someday this technique might even help you to take better photos in the real world." The team compiled a gallery of photos its network created out of Street View images, and clicking on any one will pull up the section of Google Maps that it captures. Fang concludes with a neat thought experiment about capturing photos in the real world: "Would you make the same decision if you were there holding the camera at that moment?"