Am fascinated by a post on the Google Research Blog about using Machine Learning (ML) to create “professional level photographs.”
From the post:
To explore how ML can learn subjective concepts, we introduce an experimental deep-learning system for artistic content creation. It mimics the workflow of a professional photographer, roaming landscape panoramas from Google Street View and searching for the best composition, then carrying out various post processing operations to create an aesthetically pleasing image.
Their focus seems to be on exposure, color, and saturation. They mention composition, but only as a search criterion: the machine learning algorithm tries to locate the best-composed images in Google’s Street View library.
The images themselves are all near misses, which is impressive as hell since this seems to be the research group’s initial foray into this type of photo production. But when you look at the sample library, it is readily apparent that the photographs are soulless. Which gives me solace: you still need a human looking through the viewfinder to capture anything worth talking about. What stuns me is that Google’s ML could easily create a library of perfectly acceptable, mediocre landscapes that would rival those found on many stock photography sites.