I do not see the high model using chords even when I change the far setting of the AR camera. Tell me what to do.
YUKI93, 29 Jun 2020: "What you did there basically explained computational photography in a nutshell. Sorry, but..."
Yes, but even with a large sensor, additional ones are desirable: they can improve quality and open up more possibilities, including refocusing after the picture is taken, better handling of focus in general, and depth information for 3D or bokeh effects, etc.
For example, I am currently making a 3D model of a fictional smartphone using many things I invented. It basically has a large exterior lens that feeds light to two APS-C sensors, which would be by far the biggest sensors ever put in a smartphone. It uses one of the optical systems I imagined: a flat (really thin) element that reflects the light 90° and focuses it so it can travel down a channel (a telephoto), where a really long, narrow telephoto lens configuration allows a really high level of zoom (probably 40x or 50x optical), because even with a giant objective you can shrink the light down to work with really small lenses and grow it back (using my flat system in reverse) to feed the sensors.
And since the telephoto will be really long (half the length of the phone), the focus and zoom lenses can also make it work as a regular, portrait, tele-macro, and telephoto objective, so in theory those two are the only main sensors required.
Of course, the two sensors each receive less light than a single one would, since the light is split between the two telephoto setups, but this lets one capture the in-focus subject while the other captures the background or infinity focus, which provides interesting information to help with refocusing, fixing exposure, etc.
Yet the smartphone still has 6 additional sensors on the back. These include two IR depth sensors that work as a hybrid of Time-of-Flight and structured-light 3D scanning: they record the time the powerful IR light takes to reach back to the sensor and, after the ToF pass, capture a specific light pattern (a cross with scales and a grid around it). And since there are two of those IR sensors, you also get stereoscopic 3D.
I imagined the sensor as a purpose-built NIR-only sensor with an analog memory to record the ToF measurement really fast; then, in a slower analog memory, the IR light pattern (as well as a regular IR image) is built up using exposure (though over a really short period of time) to gather both the pattern at more or less long range and a regular image. As the sensor is infrared-only (and not a regular visible-spectrum one with an IR/visible filter), it would also allow thermal imaging and night vision.
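The ToF half of that hybrid comes down to one formula: distance is half the round trip of the IR pulse at the speed of light. A toy sketch (the function name and numbers are purely illustrative, not any real sensor API):

```python
# Toy illustration of the time-of-flight principle described above:
# the sensor times an IR pulse's round trip, and distance is
# c * t / 2 (the pulse travels out to the subject and back).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the subject from a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A 10 ns round trip is roughly 1.5 m, which shows why the analog
# memory has to be fast: indoor distances mean nanosecond timing.
print(round(tof_distance(10e-9), 2))  # → 1.5
```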
Plus, there are 3 focused IR light beams (intended to be inexpensive but, above all, to work) that project a special pattern (converging along a curved path) used for laser autofocus, since the ToF sensor requires analog-to-digital conversion, which would add too much delay for autofocus data. It would also work as a powerful rangefinder.
Add to that at least one UV light sensor and a monochrome one; the others can be many different sensors for many different purposes. For example, one of my variants has a special RGBW sensor used for precise color correction and HDR; another would be specialized in high exposure to improve night shots and work well for star shots or still-subject night shots; the opposite would be a really fast camera that works as blur compensation for moving subjects, allowing the main sensors a slightly longer exposure.
A fixed optical-zoom camera can also be used to gather better details of the main subject.
But as you can see, the depth sensor has a really central role as a secondary sensor.
Note that in my concept, the camera app would be both a camera and a retouching app. The pictures would basically be losslessly compressed RAW files containing all the sensors' information, allowing almost magical editing compared to a regular RAW image alone. This would also let pictures be taken almost instantly, unlike on current smartphones, which all have a delay. Once you are satisfied with your pic, either the RAW one or the auto-retouched one, you simply press the "render" button (or mass-select) to output the pic with the default settings of your choice, plus a "render as" feature.
You could also literally create auto-retouching presets and output your RAW as any of them, making it the best photo-oriented smartphone by a really, really large margin.
Despite the two APS-C sensors getting less light than a DSLR, a powerful AI combining both of them, plus all the other sensors, could allow such a monster smartphone to output better RAW and auto-retouched pics than most entry- and mid-range DSLR and mirrorless cameras, and maybe even compete with APS-H or full-frame ones if the AI is really good.
So even a smartphone with a sensor as big as the Nokia 808's or the Panasonic Lumix CM1's would gain a massive advantage from sporting additional sensors.
You can see my design here:
In this one, there are dedicated ToF sensors and another IR camera for laser autofocus and IR imaging, but I decided to just seriously improve the IR sensor and give it multiple roles on top of depth sensing.
AnonD-909757, 27 Jun 2020: "See, that's exactly what I was talking about, because YOU don't see any reasons doesn't mean o..."
What you did there basically explained computational photography in a nutshell. Sorry, but I much prefer the hardware to do the job. That is why the 2012 Nokia 808 PureView and the 2013 Nokia Lumia 1020 can still give modern smartphones a run for their money, despite lacking versatility and the latest hardware in terms of chipset and camera stabilization.
YUKI93, 27 Jun 2020: "Well it still is. I don't do AR on my phone, so I see no reason for having a depth sensor. Hec..."
See, that's exactly what I was talking about: just because YOU don't see any reason doesn't mean others won't.
And if you don't know what a depth sensor allows you to do, then you probably don't know how photography on a smartphone works.
Unlike a DSLR and other cameras, which can have bigger sensors, a smartphone's tiny sensor doesn't output really good RAW pictures. And anyway, since the pictures are meant to be usable as soon as they are taken, the phone does multiple things using specific algorithms and AI, including combining information from multiple sensors to enhance the default RAW picture and doing auto retouching, so the user has a pic ready to be posted.
If you look at portrait shots from smartphones, you'll notice there are often spots where the hair blends into the background. This is because a smartphone's tiny lenses have different requirements for handling focus than a DSLR's, so you need software and AI to fix things, which doesn't work flawlessly, hence the blending. Having a depth sensor is one of the ways the phone gets additional information to fix that issue.
You know why depth sensors don't do much?
Because they are too low resolution. Put in a 12MP depth sensor and you'll see wonders paired with a 12MP (real or binned) main sensor. But since people were complaining about depth sensors before they could reach a useful resolution, well, here we are, stuck with puny 2MP depth sensors...
So many things could be done with the additional information from a good depth sensor: dynamic refocus after the picture is taken, parallax compensation, good focus spots, putting everything in the picture in focus from really close to really far, using the main camera as a portrait and macro objective, bokeh effects, pseudo-3D and stereoscopy, holography.
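As a rough sketch of how a per-pixel depth map enables that kind of after-the-fact refocus and bokeh (toy NumPy code; all names and thresholds are made up for illustration):

```python
# Toy sketch: given a per-pixel depth map, blur strength grows with
# distance from the chosen focal plane; 0 = sharp, 1 = fully blurred.
# A real pipeline would feed these weights into a variable-radius blur.
import numpy as np

def bokeh_weights(depth_m: np.ndarray, focus_m: float,
                  depth_of_field_m: float = 0.5) -> np.ndarray:
    """Per-pixel blur strength in [0, 1] from a metric depth map."""
    spread = np.abs(depth_m - focus_m)
    return np.clip(spread / depth_of_field_m - 1.0, 0.0, 1.0)

depth = np.array([[1.0, 1.2],
                  [3.0, 8.0]])          # toy 2x2 depth map, in metres
weights = bokeh_weights(depth, focus_m=1.0)
# Pixels near 1 m stay sharp; the 3 m and 8 m pixels get fully blurred.
# "Dynamic focus after the picture" is just re-running this with a new
# focus_m over the stored RAW and re-blurring.
```

The higher the depth map's resolution, the more precisely the sharp/blurred boundary follows fine detail like hair, which is exactly why a 2MP depth map struggles next to a 12MP image.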
If you have compatible editing software, the RAW data from the sensor could be used for extremely powerful editing abilities, etc.
A combination of Time-of-Flight and structured-light 3D scanning (DLP projecting a grid), where two 12MP depth sensors (which also give stereoscopic advantages) read a particular pattern while also reading pulses, would give incredible results, and it would make 3D facial recognition a default feature.
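For the stereoscopic part, two sensors mounted a known baseline apart recover depth from disparity with a one-line formula; a toy sketch (the focal length and baseline values here are invented for illustration):

```python
# Toy sketch of stereo depth from two side-by-side sensors:
# depth = focal_length_px * baseline_m / disparity_px.
# A larger disparity (pixel shift between the two images) means
# a closer subject.
def stereo_depth_m(focal_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    """Metric depth from the pixel disparity between the two sensors."""
    return focal_px * baseline_m / disparity_px

# e.g. a 1000 px focal length and a 2 cm baseline: a feature shifted
# 10 px between the two sensors sits about 2 m away.
print(stereo_depth_m(1000.0, 0.02, 10.0))  # → 2.0
```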
Literally, this app doesn't work on all iOS and Android devices. Others will launch this application before Google does.
AnonD-909757, 26 Jun 2020: "Remember when people were like: 'DePtH sEnSoR iS uSeLeSs' Feel old yet?"
Well, it still is. I don't do AR on my phone, so I see no reason to have a depth sensor. Heck, even in smartphone photography it isn't much of a help.
Remember when people were like:
"DePtH sEnSoR iS uSeLeSs"
Feel old yet?
Nick.B, 26 Jun 2020: "Huawei will surpass you in the future, Google. Thanks to Trump."
I've already been hearing this for five years, but where are they now? How long must we wait for that 'future' to come? Lol. Future my as*
Huawei will surpass you in the future, Google. Thanks to Trump.
Al-Aqsa Lover, 26 Jun 2020: "I think that's already there for iPad Lidar"
Yeah, but what's impressive is that it only uses 1 camera.
Does anyone know any cool AR games? And which phones support ARCore?