So in case you’ve wondered what Apple means by listing ‘improved recognition of individuals’ as a new feature of the built-in Photos app in iOS 15, let’s take a look at the details.
When you or the person whose picture you’re clicking looks at the camera and poses, it’s not too difficult for the built-in software to identify the individual in the shot. But things get considerably harder for machine learning algorithms when the subject of the photo isn’t looking at the photographer, or is busy interacting with an object or another person.
For such dynamic scenes, Apple’s upgraded recognition software works by locating both the faces and the upper bodies of the people visible in an image. Say a set of photos of three individuals is taken a few minutes apart and one person’s face isn’t clearly visible in one of them; the software then looks for clues such as clothing. By matching against an image where that subject’s face and upper body are both clearly visible, the neural network can identify the person even when their face is occluded or unclear.
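To make that matching logic concrete, here’s a minimal Swift sketch. Everything in it, the `Embedding` type, the `PersonObservation` struct, the `matches` function, and the similarity threshold, is a hypothetical stand-in for illustration; Apple hasn’t published the actual models or APIs behind this feature.

```swift
// Hypothetical embedding vectors; the real feature's model
// outputs and dimensions are not public.
typealias Embedding = [Double]

struct PersonObservation {
    let faceEmbedding: Embedding?      // nil when the face is occluded or unclear
    let upperBodyEmbedding: Embedding  // clothing/torso features
}

// Cosine similarity between two vectors of equal length.
func cosineSimilarity(_ a: Embedding, _ b: Embedding) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    guard magA > 0, magB > 0 else { return 0 }
    return dot / (magA * magB)
}

// Match a new observation against a known person: trust the face
// embedding when one is available, and fall back to upper-body
// (clothing) features only when it isn't.
func matches(_ observation: PersonObservation,
             knownFace: Embedding,
             knownUpperBody: Embedding,
             threshold: Double = 0.8) -> Bool {
    if let face = observation.faceEmbedding {
        return cosineSimilarity(face, knownFace) >= threshold
    }
    return cosineSimilarity(observation.upperBodyEmbedding, knownUpperBody) >= threshold
}

// Example: a photo where the face is hidden still matches via clothing.
let observation = PersonObservation(
    faceEmbedding: nil,
    upperBodyEmbedding: [0.9, 0.1, 0.3]
)
let isSamePerson = matches(observation,
                           knownFace: [0.2, 0.8, 0.5],
                           knownUpperBody: [0.88, 0.12, 0.31])
print(isSamePerson)  // true: upper-body similarity exceeds the threshold
```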
But an item of clothing is a reliable identifying characteristic only within a collection of photos taken close together, since people don’t wear the same clothes every day (uniforms aside). Hence the software clusters a face with an upper body, using the full image as input, only for photos clicked within the same window of time.
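Here’s an equally hypothetical sketch of that time constraint: photos get grouped into “moments”, and clothing-based matching would only be attempted within a group. The `groupIntoMoments` function and the one-hour window are assumptions for illustration, not values Apple has disclosed.

```swift
import Foundation

// A captured photo; only the timestamp matters for this sketch.
struct Photo {
    let id: Int
    let captureDate: Date
}

// Group photos into "moments" so clothing-based matching is only
// attempted among shots taken close together. The one-hour window
// is an assumed value.
func groupIntoMoments(_ photos: [Photo],
                      windowSeconds: TimeInterval = 3600) -> [[Photo]] {
    let sorted = photos.sorted { $0.captureDate < $1.captureDate }
    var moments: [[Photo]] = []
    for photo in sorted {
        if let previous = moments.last?.last,
           photo.captureDate.timeIntervalSince(previous.captureDate) <= windowSeconds {
            moments[moments.count - 1].append(photo)  // same moment
        } else {
            moments.append([photo])                   // start a new moment
        }
    }
    return moments
}

// Example: three photos minutes apart form one moment; a photo taken
// the next day starts another, so its clothing cues aren't reused.
let now = Date()
let photos = [
    Photo(id: 1, captureDate: now),
    Photo(id: 2, captureDate: now.addingTimeInterval(120)),
    Photo(id: 3, captureDate: now.addingTimeInterval(300)),
    Photo(id: 4, captureDate: now.addingTimeInterval(86_400)),
]
print(groupIntoMoments(photos).map { $0.map(\.id) })  // [[1, 2, 3], [4]]
```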
The entire mechanism uses on-device machine learning, so the analysis stays on your iPhone to protect your privacy. Apple also claims to have taken measures to ensure the process doesn’t guzzle memory or battery life.
If you want to sink your teeth into the finer details, check out the related post on the Apple Machine Learning Research website. It’s a long but interesting read.