The prototype of a light-field camera made it possible to "disassemble" the video

Google engineers have assembled a light-field camera prototype from 16 GoPro cameras and developed software that decomposes the filmed scene into many planes and reassembles them to produce unusual visual effects. For example, the method can render frames from new viewpoints, stabilize the video, or remove objects from it. The work was presented at the SIGGRAPH 2019 conference.

Typically, a photograph is produced solely from the raw data of the image sensor, perhaps with basic preprocessing such as color correction. In recent years, however, an approach called computational photography has been developing actively: the final image is usually formed from several frames, which makes quite impressive visual effects possible. For example, many smartphones offer a high-dynamic-range mode in which several frames taken at different exposures are combined into a single image with neither blown-out highlights nor heavily darkened areas. Some smartphones also merge multiple frames to obtain a sharper image in daylight or a brighter one at night.
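
The exposure-merging step lends itself to a short illustration. Below is a minimal sketch using OpenCV's Mertens exposure fusion; this is one common way to blend an exposure stack, not necessarily the algorithm any particular smartphone uses, and the file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names: three shots of the same scene at different exposures.
exposures = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation and
# well-exposedness, then blends the stack into a single frame.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float32 image with values around [0, 1]

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```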

Since many smartphone manufacturers began equipping their devices with two or three cameras, it has become possible to capture a scene simultaneously from two or more viewpoints, which opens up considerable possibilities. Last year, for example, developers from Google and the University of California, Berkeley taught a neural network to synthesize images from new viewpoints based on two such frames. In the new work, developers from Google, including an author of last year's paper, showed that the same principle can be used in practice to remove objects and to stabilize video.

The method the researchers presented in 2018 is, in short, that a neural network decomposes a single flat frame into many separate planes arranged by their distance from the camera. Each plane stores the color and transparency of the objects that fall on it at the moment of shooting. The new work differs primarily in its data source: the engineers built a prototype of 16 GoPro cameras mounted on a single rig, which captures video from multiple parallel viewpoints from the outset and simplifies further processing.

Image: Multi-plane representation of a frame
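
To make the representation concrete, here is a minimal sketch of how a stack of such planes turns back into a flat frame: the planes are alpha-composited back to front with the standard "over" operator. The image size, plane count, and random contents are illustrative, not values from the paper.

```python
import numpy as np

H, W, D = 480, 640, 32  # height, width, number of planes (illustrative)
# Each plane is an RGBA image: color plus transparency, index 0 = farthest.
planes = np.random.rand(D, H, W, 4).astype(np.float32)

def composite(planes):
    """Back-to-front 'over' compositing of RGBA planes into one frame."""
    out = np.zeros(planes.shape[1:3] + (3,), dtype=np.float32)
    for rgba in planes:  # far to near
        rgb, alpha = rgba[..., :3], rgba[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

frame = composite(planes)  # the flat frame the camera would see
```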

After shooting, the developers load the sets of planes into the Nuke digital compositing software, where a new video is rendered with a virtual camera that can move relative to the planes. This makes it possible, for example, to stabilize footage shot from the hand while walking, or to zoom in not by simply cropping the edges of the frame but by changing the focal length of the virtual lens. In addition, splitting the scene into layers at known distances from the camera allows a shallow depth-of-field effect, in which everything nearer or farther than a chosen distance is blurred and only objects at that distance remain sharp.
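
A toy version of the virtual-camera idea, reusing the plane stack from the sketch above: translating the camera sideways shifts near planes more than far ones, with parallax proportional to inverse depth. A real renderer warps each plane with a full homography; the integer-pixel np.roll here is purely for brevity.

```python
import numpy as np

def render_novel_view(planes, depths, dx):
    """Shift each plane by parallax ~ dx / depth, then composite far to near."""
    out = np.zeros(planes.shape[1:3] + (3,), dtype=np.float32)
    for rgba, z in zip(planes, depths):
        shifted = np.roll(rgba, int(round(dx / z)), axis=1)  # near moves more
        rgb, alpha = shifted[..., :3], shifted[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

H, W, D = 480, 640, 32
planes = np.random.rand(D, H, W, 4).astype(np.float32)
depths = np.linspace(100.0, 1.0, D)  # far (index 0) to near, arbitrary units
view = render_novel_view(planes, depths, dx=30.0)  # camera moved sideways
```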

Finally, the authors showed how the multi-plane representation of a scene allows objects in it to be manipulated. An editor can delete the planes located at a certain distance and thereby remove the object they contain; in the video published by the authors, the method is used to remove a wire mesh from the footage. The developers also showed that an editor can work not only with one multi-plane video but with several, combining them into one. In this way they turned two static shots, one of a dog running around a yard and the other of a man watering a lawn, into a single video with a camera moving through space, in which the dog runs under the stream from the hose.
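
In this representation, object removal reduces to filtering the plane stack by depth before compositing. A minimal sketch, with the depth range and all names purely illustrative:

```python
import numpy as np

def drop_planes(planes, depths, z_min, z_max):
    """Discard every plane whose distance lies in [z_min, z_max]; whatever
    those planes held (e.g. a fence in the foreground) disappears, and the
    farther planes show through after compositing."""
    keep = [i for i, z in enumerate(depths) if not (z_min <= z <= z_max)]
    return planes[keep], depths[keep]

H, W, D = 480, 640, 32
planes = np.random.rand(D, H, W, 4).astype(np.float32)
depths = np.linspace(100.0, 1.0, D)
# Remove everything between 1 and 3 distance units from the camera.
planes, depths = drop_planes(planes, depths, z_min=1.0, z_max=3.0)
```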

We recently covered work by Russian developers who also taught an algorithm to create video from new viewpoints, but with a different approach: instead of splitting the scene into planes, they build a volumetric point cloud from the original video. A neural network can then create a frame from a new viewpoint by computing a two-dimensional projection of the cloud and "coloring" it.
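
The article does not detail that method, but its "two-dimensional projection" step can be sketched as a plain pinhole projection with a z-buffer; a neural network would then fill the holes and refine the colors. Everything below is an illustrative assumption, not the Russian team's actual code.

```python
import numpy as np

def project_points(points, colors, f, H, W):
    """Pinhole projection of a colored 3-D point cloud onto an H x W image."""
    img = np.zeros((H, W, 3), dtype=np.float32)
    zbuf = np.full((H, W), np.inf)
    for (x, y, z), c in zip(points, colors):
        if z <= 0:
            continue  # point is behind the camera
        u = int(round(f * x / z + W / 2))
        v = int(round(f * y / z + H / 2))
        if 0 <= u < W and 0 <= v < H and z < zbuf[v, u]:
            zbuf[v, u] = z  # keep only the nearest point per pixel
            img[v, u] = c
    return img

pts = np.random.randn(10000, 3) + np.array([0.0, 0.0, 5.0])  # toy cloud
cols = np.random.rand(10000, 3)
frame = project_points(pts, cols, f=500.0, H=480, W=640)
```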
