Apple’s new TrueDepth camera for the iPhone X sure is impressive, but it also sounds familiar. It works by using a projector to cast 30,000 dots on your face, which it then reads with an infrared camera. That sounds a lot like how the Microsoft Kinect works.
The Kinect uses several approaches to get a good 3D image. One is called structured light – projecting a known pattern of dots and using machine learning to reconstruct the 3D scene from how that pattern deforms. This is done with a dot projector and an IR camera, same as on the iPhone X.
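To get a feel for the geometry, here’s a minimal sketch of the triangulation behind structured light. The function name and the calibration numbers (focal length in pixels, projector-to-camera baseline) are illustrative, not actual Kinect or iPhone X values:

```python
# Structured-light triangulation sketch: a dot projected from a known
# position shifts sideways in the IR camera's view depending on depth.
def depth_from_dot_shift(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a dot, from how far it shifted (in pixels) between
    the reference pattern and where the IR camera actually sees it."""
    if disparity_px <= 0:
        raise ValueError("dot not matched or at infinity")
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 580 px focal length, 7.5 cm baseline, 50 px shift
print(depth_from_dot_shift(580, 0.075, 50))  # ~0.87 m
```

The closer the surface, the larger the shift – which is why a single IR frame is enough for a rough depth map.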
But a regular color camera is also used for something called depth from defocus. Basically, it exploits depth of field to guess how far things are – anything farther away or closer than the focus distance gets blurry, and the amount of blur hints at the distance.
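Here’s a rough sketch of that idea with a thin-lens model – all the numbers are made up for illustration, not pulled from either device:

```python
# Depth-from-defocus sketch: the blur-circle diameter on the sensor
# grows as an object moves away from the focus plane (thin-lens model).
def blur_diameter(aperture: float, f: float, s_focus: float, s_object: float) -> float:
    """Blur-circle diameter for an object at s_object when the lens is
    focused at s_focus (all lengths in meters)."""
    return aperture * f * abs(s_object - s_focus) / (s_object * (s_focus - f))

# The catch: a point in front of the focus plane and one behind it can
# produce the exact same blur, so blur alone leaves depth ambiguous.
print(blur_diameter(0.005, 0.004, 1.0, 2 / 3))  # ~10 um, nearer than focus
print(blur_diameter(0.005, 0.004, 1.0, 2.0))    # same ~10 um, other side
```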
Apple doesn’t explicitly say whether it uses the color image sensor, but we think it does – that’s why the flood illuminator is there, to let the color camera see in the dark (the IR camera already has the dot projector).
There are some additional tricks used by the Kinect. Its lens is astigmatic – meaning it has a different focal length horizontally and vertically. This gives it two blur readings per pixel, at the cost of some image quality.
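Here’s a hedged sketch of why two readings help, reusing the blur_diameter function from above – it breaks the near/far tie from the previous example. The two focus distances and the brute-force search are stand-ins, not what the Kinect actually does internally:

```python
def depth_from_two_blurs(blur_h: float, blur_v: float, aperture: float,
                         f: float, focus_h: float, focus_v: float) -> float:
    """Find the depth that best explains both the horizontal and the
    vertical blur readings; a real system would solve this in closed
    form rather than by scanning candidate depths."""
    candidates = [d / 100 for d in range(20, 500)]  # 0.20 m .. 4.99 m
    def mismatch(s: float) -> float:
        return (abs(blur_diameter(aperture, f, focus_h, s) - blur_h) +
                abs(blur_diameter(aperture, f, focus_v, s) - blur_v))
    return min(candidates, key=mismatch)

# With two focus planes, the 0.67 m / 2.0 m ambiguity from above is gone:
bh = blur_diameter(0.005, 0.004, 1.0, 2 / 3)  # "horizontal" focus plane at 1.0 m
bv = blur_diameter(0.005, 0.004, 1.5, 2 / 3)  # "vertical" focus plane at 1.5 m
print(depth_from_two_blurs(bh, bv, 0.005, 0.004, 1.0, 1.5))  # ~0.67 m
```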
Anyway, all this data is passed on to a machine learning system that has been trained on thousands of examples – of body positions in the case of the Kinect and of facial expressions in the case of the iPhone X.
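As a toy illustration of that training step – synthetic data and made-up feature shapes, neither Apple’s nor Microsoft’s actual pipeline – the whole thing boils down to fitting a classifier on per-pixel depth features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-ins for real training data: 16 depth-difference features per
# pixel and a class label (a body part, or an expression bucket).
features = rng.normal(size=(10_000, 16))
labels = rng.integers(0, 4, size=10_000)

# Kinect's body-part recognition famously used randomized decision
# forests, which makes a random forest the natural toy here.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(features, labels)
print(model.predict(features[:3]))
```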
You can check out this slideshow if you want to learn more about how the Kinect implementation works.