
Why I’m Considering Upgrading to an iPhone 7 Plus

As the world collectively gasped over Apple's decision to remove the headphone jack from its new smartphones, or gawked at the slick new Jet Black color option, I found myself fixated on a different feature altogether.

Smartphone cameras have come a long way in the last decade and have all but eliminated the point-and-shoot camera market.

While I try to bring my professional camera everywhere I go, especially when traveling or capturing shots of my son, having a high-quality smartphone camera in my pocket at all times adds to my versatility as a photographer. For its size, the iPhone can take pretty remarkable photos of landscapes, flat lays, and properly lit scenes. One thing it absolutely cannot do is produce shallow depth-of-field photographs. Blurry backgrounds and bokeh are still reserved exclusively for large sensors and fast glass.


Apple is now trying to change that and take a chunk out of the DSLR and mirrorless market. The new iPhone 7 Plus comes with dual 12MP cameras: a 28mm-equivalent wide-angle at f/1.8 and a 56mm-equivalent "telephoto" at f/2.8. Taking a photo is no different than before: the camera defaults to the 28mm lens and offers a "2x zoom" without sacrificing optical quality by switching to the 56mm.
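
For a rough sense of how that switchover could work, here's a minimal sketch in Python. It only uses the two focal lengths above; the lens labels and the 2x handoff point are my own assumptions for the illustration, not Apple's actual camera logic:

```python
# Rough sketch of how a dual-lens "2x zoom" could work conceptually.
# The lens names and the switchover threshold are assumptions for this example.

WIDE_FOCAL_MM = 28   # 35mm-equivalent focal length of the wide-angle lens
TELE_FOCAL_MM = 56   # 35mm-equivalent focal length of the "telephoto" lens


def pick_lens(zoom_factor: float) -> tuple[str, float]:
    """Return (lens, digital_crop) for a requested zoom factor.

    Below 2x the wide lens is used with digital cropping; at 2x and beyond
    the telephoto takes over, so 2x itself needs no digital crop at all.
    """
    if zoom_factor < TELE_FOCAL_MM / WIDE_FOCAL_MM:  # below 2x
        return "28mm wide-angle", zoom_factor
    return "56mm telephoto", zoom_factor / (TELE_FOCAL_MM / WIDE_FOCAL_MM)


for z in (1.0, 1.5, 2.0, 3.0):
    lens, crop = pick_lens(z)
    print(f"{z:>3}x zoom -> {lens}, digital crop {crop:.2f}x")
```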

The real magic is in a feature that won't be available at launch but will be included in a future free software update. It uses the two cameras simultaneously to obtain a stereoscopic depth map that can algorithmically determine where the subject is and how far away the rest of the scene sits. The background is then blurred in software, and the overall effect is meant to mimic high-end professional cameras.
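
To make that idea concrete, here's a toy sketch of depth-guided background blur: blend a sharp frame with a blurred copy, weighted by each pixel's distance from the subject. It's only an illustration of the general technique; the blur strength, weighting rule, and function names are my own assumptions, not anything Apple has described:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def fake_portrait(image, depth, subject_depth, blur_sigma=8.0):
    """Blend a sharp image with a blurred copy, weighted by distance from the subject.

    image: H x W x 3 float array in [0, 1]
    depth: H x W array of estimated distances (e.g. from a stereo depth map)
    subject_depth: estimated distance of the subject, read from the depth map
    """
    # Blur each color channel of the whole frame.
    blurred = np.stack(
        [gaussian_filter(image[..., c], blur_sigma) for c in range(3)], axis=-1
    )
    # Weight is 0 at the subject's depth and grows toward 1 far away from it.
    weight = np.clip(np.abs(depth - subject_depth) / (np.ptp(depth) + 1e-9), 0.0, 1.0)
    return image * (1.0 - weight[..., None]) + blurred * weight[..., None]
```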


Apple was very light on details and did not show a working version of this feature, only sample photos (which looked great), which leaves me with more questions. Does this feature only work with human subjects? What about quick-moving babies? Does it only work at the 56mm field of view (2x)? That last one I'm particularly curious about, because I can't figure out how else the feature would work.

My guess is that the 56mm lens captures the subject in focus while the 28mm simultaneously captures an out-of-focus shot, and the depth map is then used as a guide to compile the two into a single photo, pulling pieces from each capture (and trimming the extra space from the wider, soft-focus 28mm shot). This would mean the widest field of view for this feature would theoretically be 56mm (conveniently, an ideal focal length for portraits).
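
If that guess were right, the compositing step might look something like the sketch below. To be clear, this is just my speculation rendered as code: it assumes the 28mm frame has already been cropped and upscaled to the 56mm field of view, and that the depth map has been turned into a subject mask. None of these names or steps come from Apple:

```python
import numpy as np


def composite_portrait(tele_sharp, wide_soft_cropped, subject_mask):
    """Speculative blend of two simultaneous captures.

    tele_sharp:        H x W x 3 in-focus frame from the 56mm lens
    wide_soft_cropped: H x W x 3 out-of-focus 28mm frame, already cropped and
                       upscaled to match the 56mm field of view
    subject_mask:      H x W boolean array derived from the depth map
                       (True where the subject is)
    """
    mask = subject_mask[..., None].astype(float)
    # Subject pixels come from the sharp telephoto capture,
    # background pixels from the soft wide-angle capture.
    return tele_sharp * mask + wide_soft_cropped * (1.0 - mask)
```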


While I likely won’t be pre-ordering the 7 Plus next week (I do have a fully paid off iPhone 6 in excellent condition after all), I’m intrigued to see this feature in the wild and get my hands on it. Having a solid portrait lens in my pocket could be a game changer when I’m in need of quality shots but don’t want to lug around my camera and bag of lenses.

I’m excited to see the future of this technology. Cameras are expensive and lenses are expensive. While I won’t be ditching my camera any time soon, this could provide a great alternative for people looking to play around with photography without investing in a full DSLR system.

What do you think?

Cheers, Ty
