Matt Richardson's "Descriptive Camera" doesn't take pictures, not even blurry ones. But it works just fine. Rather than capturing an image on film, Richardson's invention (a prototype built for a Computational Cameras course at New York University's Interactive Telecommunications Program) prints out a descriptive piece of text about the picture.
How It Works
After an exposure is taken, the image is sent to Amazon's Mechanical Turk, a service that pays people to do tasks that computers cannot solve, like describing in words what is captured in a photograph. The camera uses a BeagleBone, a tiny Linux computer used in hardware prototypes, and once the description has been returned, a thermal printer spits out the resulting text, with a turnaround time (from shutter press to ink on paper) of between 3 and 6 minutes.
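In code terms, the flow described above is just capture, crowdsource, print. Below is a minimal Python sketch of that loop under stated assumptions: `capture_image`, `post_describe_task`, `get_task_result`, and `print_to_thermal_printer` are hypothetical placeholders for the camera, Mechanical Turk, and printer hooks, since Richardson's actual BeagleBone code isn't shown here.

```python
import time

# Hypothetical hardware/service hooks: the article does not publish
# Richardson's BeagleBone code, so these are placeholders for illustration.

def capture_image() -> bytes:
    """Read one frame from the camera and return it as JPEG bytes."""
    raise NotImplementedError("camera-specific")

def post_describe_task(image: bytes) -> str:
    """Post a 'describe this photo' task to Amazon Mechanical Turk and
    return a task ID (in practice, via the MTurk API)."""
    raise NotImplementedError("MTurk API call")

def get_task_result(task_id: str) -> str | None:
    """Return the worker's text description, or None if not finished yet."""
    raise NotImplementedError("MTurk API call")

def print_to_thermal_printer(text: str) -> None:
    """Send the text to the thermal printer (e.g. over a serial port)."""
    raise NotImplementedError("printer-specific")

def on_shutter_press(poll_interval_s: int = 15, timeout_s: int = 6 * 60) -> None:
    """One exposure: capture, crowdsource a description, print it.
    Turnaround in the real device is roughly 3 to 6 minutes."""
    image = capture_image()
    task_id = post_describe_task(image)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        description = get_task_result(task_id)
        if description is not None:
            print_to_thermal_printer(description)
            return
        time.sleep(poll_interval_s)
    print_to_thermal_printer("No description returned in time.")
```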
Why We Want It
Richardson, a photographer and programmer, has had people suggest to him that the Descriptive Camera could be developed into a practical product for the visually impaired. He also sees potential in the idea of a device that can catalog all the images we snap on our digital cameras and smartphones.
For the time being, Richardson is focused on perfecting the camera's design, making it smaller, more compact and practical. [NY Times – Image via Matt Richardson]
The New York Times reports:

Snap a picture outside of a building in New York City and you'll get this printout from the camera: "This is a faded picture of a dilapidated building. It seems to be run down and in need of repairs."