During Apple’s keynote last month where it announced the iPhone 11 lineup, the company also unveiled new camera technology called “Deep Fusion,” which captures four frames before you hit the shutter, four more once you do, and one long exposure shot. The 8-core Neural Engine then selects the best frames and combines them into a high-quality HDR photo.
The resulting images are highly detailed, sharper, and more natural-looking. Machine learning running on the Neural Engine analyzes the scene being captured and processes it differently depending on whether it detects sky, foliage, or skin tones. Meanwhile, structure and color tones are derived from ratios computed by the Neural Engine on the A13 processor.
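Apple hasn’t published the details of how Deep Fusion picks and merges frames, but the general idea of burst fusion — rank the captured frames by quality, keep the best ones, and blend them — can be sketched in a few lines. The snippet below is a simplified illustration under our own assumptions (sharpness as the variance of a Laplacian response, a plain average as the blend), not Apple’s actual pipeline:

```python
import numpy as np

def sharpness(frame):
    # Approximate sharpness as the variance of a simple Laplacian
    # response: blurry frames have flatter gradients, so a lower score.
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def fuse(frames, keep=2):
    # Rank the burst frames by sharpness, keep the best few,
    # and blend them with a plain pixel-wise average.
    ranked = sorted(frames, key=sharpness, reverse=True)
    return np.mean(ranked[:keep], axis=0)

# Tiny demo: one frame with a crisp edge vs. two flat "blurry" frames.
sharp = np.zeros((8, 8))
sharp[:, 4:] = 1.0
blurry = np.full((8, 8), 0.5)
result = fuse([blurry, sharp, blurry], keep=1)
```

A real pipeline would align the frames first and blend per-region rather than per-frame, which is where the scene analysis (sky, foliage, skin) described above would come in.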
The feature is coming with iOS 13.2, and those with developer access to the iOS beta can begin testing Deep Fusion now. Those on the public beta will be able to try it shortly. We are eager to test the feature and see what kind of images the iPhone 11 trio can produce with Deep Fusion.
One caveat: humans aren’t still subjects, and neither is the person holding the phone, so it will be interesting to see how well Deep Fusion copes with motion between all those frames.