Video cameras to deliver ‘shape & sound maps’ for image tracking

Image Credit: TED

Microsoft’s Future Decoded 2016 event in London this November featured a fascinating presentation by Abe Davis, a postdoctoral researcher at Stanford University (above). Davis gave the audience some insight into the intelligence that Internet of Things (IoT) cameras might soon be delivering.

Nothing is ever completely still

In the world of motion tracking and the IoT, nothing is ever completely still, because real objects are always subject to some kind of force… so micro-movements are detectable by ultra-sensitive cameras.

A video of a person’s wrist may appear to show no movement… but close examination would reveal the skin moving with the human pulse. A ‘baby-cam’ video of an infant sleeping very still may appear motionless… but close examination would show the child’s chest rising and falling as it breathes.

Davis is working with new and emerging technologies to give us ways to capture and analyse the motion of even the most seemingly immobile objects.

The applications for video monitoring in the IoT are immense: objects that appear never to move, right down to an individual screw bolt or piece of equipment housing, can now be analysed for micro-movements with a view to understanding when they may need attention.
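To make the idea concrete, here is a minimal sketch of what such a micro-movement check might look like. It is only an illustration: it assumes the video frames are already available as greyscale NumPy arrays, and it simply measures frame-to-frame change in a region of interest, which is far cruder than the phase-based motion analysis Davis’s research actually uses. The function name and parameters are hypothetical.

```python
import numpy as np

def micro_motion_signal(frames, roi):
    """Return a per-frame motion score for a region of interest (ROI).

    frames: iterable of greyscale frames as 2-D NumPy arrays
    roi:    (row_slice, col_slice) selecting e.g. a bolt or housing
    """
    scores = []
    prev = None
    for frame in frames:
        patch = frame[roi].astype(np.float64)
        if prev is not None:
            # Mean absolute difference between consecutive frames:
            # tiny but non-zero for a "still" object that is vibrating.
            scores.append(np.mean(np.abs(patch - prev)))
        prev = patch
    return np.array(scores)
```

A rising trend or a sudden change in such a score for a normally static component is the sort of signal an IoT monitoring system could flag for inspection.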

“Humans are amazingly adept at detecting some of this movement: our eyes pick up large motions, like the passing of a vehicle or the wave of a hand, while our ears alert us to the smaller, faster motion of sound. Our senses are limited though – we don’t hear shapes, or see tones, and we are constantly surrounded by motion that eludes our perception altogether,” said Davis.

Structural health monitoring (SHM)

The development of technologies that can track these micro-movements (and convert them into a kind of ‘sound map’ soundwave) could fundamentally change the way that we use video, suggests Davis. The applications in IoT engineering could be immense if we apply this to structural health monitoring (SHM), which is now regarded as a discipline in and of itself.
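One simple way to read such a ‘sound map’ for SHM purposes is to look at its frequency content. The sketch below is an assumption-laden simplification: it takes the per-frame motion signal from the earlier example and applies a plain FFT, whereas production SHM systems rely on far more sophisticated modal analysis.

```python
import numpy as np

def vibration_spectrum(signal, frame_rate):
    """Estimate the frequency content of a motion signal.

    signal:     1-D NumPy array of per-frame motion scores
    frame_rate: video frame rate in Hz (the effective sample rate)
    """
    centred = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(centred))
    freqs = np.fft.rfftfreq(len(centred), d=1.0 / frame_rate)
    return freqs, spectrum
```

Shifts in the dominant peaks over time, such as a loosening bolt changing a structure’s resonant frequency, are exactly the kind of cue SHM looks for.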

“Now we have a way to picture the vibrations of an object and we can use this to learn…about the object itself, which provides us with information that is fundamentally different to that we’re used to catching with cameras,” said Davis.

Where we go next with this could be approaches that seamlessly blend the real world with augmented reality.

“This technology adds a new dimension to the way that we imagine the world… much of the greatest impact is in applications that we haven’t even thought of yet. This work will have a lot of impact on the way we work with digital content. The applications for diagnostics and design are immense,” said Davis.

From video comes sound

As mentioned above, Davis will now work to recover sound from silent video, interact with recorded objects and create richer, more dynamic blends of the real and virtual worlds.

Once a video is recorded, it can be translated into a sound file that ‘describes’ the movement of the object itself. That file can be played back whenever required, offering a more structured, digitised form of data analysis than the raw video alone.
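As a rough illustration of that last step, the sketch below writes a motion signal out as a mono WAV file using Python’s standard library. It assumes the signal comes from something like the earlier hypothetical micro_motion_signal() example; note that ordinary frame rates (around 30 fps) only capture very low-frequency vibration, which is why Davis’s ‘visual microphone’ work relied on high-speed footage.

```python
import wave
import numpy as np

def motion_to_wav(signal, frame_rate, path="motion.wav"):
    """Write a per-frame motion signal as a mono 16-bit WAV file.

    frame_rate: video frame rate in Hz; used as the audio sample rate
                so playback speed matches the original recording.
    """
    # Normalise to the 16-bit signed integer range.
    centred = signal - np.mean(signal)
    peak = np.max(np.abs(centred)) or 1.0
    samples = np.int16(centred / peak * 32767)

    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)       # mono
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(int(frame_rate))
        wav.writeframes(samples.tobytes())
```

The resulting file can be played back or fed into standard audio analysis tools, which is precisely the appeal of treating motion data as sound.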

NOTE: Additional reporting by Beth Munns.

Adrian Bridgwater: I am a technology journalist with over two decades of press experience. Primarily I work as a news analysis writer dedicated to a software application development ‘beat’; but, in a fluid media world, I am also an analyst, technology evangelist and content consultant.