Adobe’s Experimental New Features Promise a Future Where Nothing’s Real
Adobe Max 2019 wrapped up yesterday, and over the past week the company (and host John Mulaney) revealed a bunch of new automated capabilities it's currently developing for its various applications, both on desktop and mobile. These demos are always crowd-pleasers and tantalizing teases of how users might soon be able to further streamline their workflows. But in recent years these sneak peeks have also offered a glimpse of how artificial intelligence promises to radically change the digital tools we use: more often than not, Adobe's latest and greatest demos leverage the company's Sensei deep learning platform to pull off their seemingly magical feats.
Not to be confused with the classic children's toy where plastic pegs are stabbed into a glowing board, Adobe LightRight might be the holy grail for photographers who incessantly tweak and adjust every aspect of their photos in apps like Adobe Lightroom. Using Adobe Sensei, LightRight can radically adjust the lighting in a photo after it's been taken, and not just in terms of overall exposure or brightness. The tool can calculate the three-dimensional geometry of objects in a 2D image and even recreate the proper shadows as the user adjusts the intensity and position of a simulated sun. One day photographers might not even have to bother looking through their camera's viewfinder, as everything wrong with a shot could be easily fixed by an AI after the shutter button is pressed.
As progress on leveraging AI and deep learning to manipulate photos and videos accelerates at an alarming pace, so does the development of tools designed to help humans recognize what's real and what's been faked. Project About Face uses machine learning to analyze photos and look for the telltale signs of digital manipulation, like patterns of pixels that have been copied from other areas, or interpolation that's been used to fill in missing pixels.