A photograph is, by its very nature, a still image, and good ones perform that task very well. As technology has evolved, though, the lines between photo and video have started to blur. We've got live photos, and Instagram supports video and now wants to be Snapchat. Then we have augmented reality: the combination of computer-generated data and real-world imagery in real time. The big problem with augmented reality is that the two don't really mix together fluidly yet. MIT PhD student Abe Davis has figured out a possible way to solve this problem. It's a process he calls Interactive Dynamic Video, and it uses tiny vibrations picked up in video to simulate real-world movement in still images.

It’s a very cool concept. The principle of the whole thing seems fairly straightforward, though I would imagine the implementation is far more complex. A video of anywhere from a few seconds to a few minutes is recorded and processed to detect “vibration modes”. Each of these modes represents a different way that an object can move. Analysing the vibration modes then allows the software to predict how the object will behave in new situations. In short, you can click and drag parts of the object, and it reacts much as it would in reality.
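To make that idea a little more concrete, here's a minimal sketch of the general principle in Python. This is not Davis's actual pipeline; it assumes you've already tracked per-pixel motion in the video (the genuinely hard part), then picks out the dominant vibration frequencies and rings them like damped springs when the user "pokes" the image. All function names and parameters here are illustrative.

import numpy as np

def extract_vibration_modes(displacements, fps, num_modes=3):
    """Pick the dominant vibration modes from per-point displacement data.

    displacements: array of shape (num_frames, num_points, 2) holding the
    x/y motion of tracked points in each frame (obtained elsewhere, e.g.
    via optical flow or phase-based motion analysis).
    Returns the modal frequencies (Hz) and their complex mode shapes.
    """
    num_frames = displacements.shape[0]
    # Frequency-domain view of every tracked point's motion over time.
    spectrum = np.fft.rfft(displacements, axis=0)
    freqs = np.fft.rfftfreq(num_frames, d=1.0 / fps)
    # Rank frequencies by total motion energy across all points.
    energy = np.abs(spectrum).sum(axis=(1, 2))
    energy[0] = 0.0  # ignore the DC (static) component
    top = np.argsort(energy)[-num_modes:][::-1]
    return freqs[top], spectrum[top]

def simulate_response(mode_freqs, mode_shapes, poke, fps, duration=2.0, damping=0.05):
    """Synthesize new motion as a damped sum of the extracted modes.

    poke: (num_points, 2) array describing the user's click-and-drag force.
    Each mode rings in proportion to how well the poke matches its shape.
    Returns per-frame displacements you could warp the still image with.
    """
    t = np.arange(0, duration, 1.0 / fps)
    motion = np.zeros((len(t), *mode_shapes.shape[1:]))
    for f, shape in zip(mode_freqs, mode_shapes):
        # Projecting the poke onto this mode decides how strongly it is excited.
        excitation = np.vdot(shape.ravel(), poke.ravel()).real
        envelope = np.exp(-damping * 2 * np.pi * f * t) * np.cos(2 * np.pi * f * t)
        motion += excitation * envelope[:, None, None] * shape.real
    return motion

The key design point is that once the modes are known, no physics simulation of the object itself is needed; new motion is just a weighted, decaying mix of movements the camera has already seen.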

The process can work with some pretty complicated objects, too. One of the markets this technology targets is the low-budget filmmaker, as it could be an inexpensive way to have CG interact with the real world in your footage. Complex subjects with lots of detail are still difficult and expensive to 3D scan, especially when you can’t take them into a controlled and lit studio environment. “While interactive dynamic video doesn’t allow the same versatility as a 3D scanned digital model, it will get you part of the way there,” says Abe Davis.

This technology means you could have green-screened actors, or completely CG characters, interacting with the environment around them. The same principle holds true for augmented reality, as you can see in one of Abe’s more amusing demonstrations.

It’s a fascinating technology, and I really wish I fully understood how it works. The implications are huge, far beyond the filmmaker or augmented reality examples mentioned above. A more serious and potentially life-saving application is structural health monitoring: the process of checking that structures are still sound. Structures weaken over time, or become damaged through accidents or nature, and those weaknesses aren’t always easy to detect. In the future, they may become obvious after recording just a few minutes of video.
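As a toy illustration of why that works (again, not Davis's method, just the standard idea behind vibration-based monitoring): damage tends to shift a structure's natural frequencies, so comparing today's dominant mode frequencies against a known-good baseline can flag trouble. The threshold and helper name below are made up for the example.

import numpy as np

def flag_frequency_drift(baseline_freqs, current_freqs, tolerance=0.03):
    """Flag modes whose frequency has drifted more than `tolerance` (3% here).

    Both inputs are arrays of the same length, in Hz, ordered by mode,
    e.g. taken from video recorded months apart of the same structure.
    """
    baseline = np.asarray(baseline_freqs, dtype=float)
    current = np.asarray(current_freqs, dtype=float)
    drift = np.abs(current - baseline) / baseline
    return drift > tolerance  # one True/False per mode: True means "inspect this"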

Wherever the technology goes, the possibilities and applications for it seem pretty wide. The only question we have to ask now is how much Facebook will attempt to buy it for. What ideas can you think of where this technology could be useful? Let us know in the comments. [via BBC]