Volograms launches the Volu app for easy creation of AR and VR content from your phone


Artificial intelligence (AI) and volumetric video company Volograms has launched the public version of its 3D content creation application, Volu. Using deep learning and computer vision, the AI-powered mobile platform enables smartphone users to easily create, share, and play with immersive, dynamic augmented reality (AR) and virtual reality (VR) content.

Volu is an extension of Volograms' mission to make AR/VR content creation more accessible. Although, according to Statista, the number of AR users surpassed 90 million in the United States alone in 2021, with an estimated 2.4 billion mobile AR users worldwide by 2023, the company notes that content authoring tools for AR and VR have so far been too inaccessible, expensive, complicated, or rudimentary to gain mass appeal.

“Augmented and virtual reality will change the way we communicate and our daily lives, much like the advent of the Internet, social media, and smartphones,” said Rafael Pagés, CEO and co-founder of Volograms. “Eventually, it will be ubiquitous. However, like any other technological leap, it must first become more accessible. Putting the power of dynamic 3D content creation in every hand, pocket, or purse carrying a smartphone with our Volu app is the first step. We enable user-generated content for augmented reality by turning standard smartphone cameras into AR-ready cameras.”

Based on feedback from thousands of Volu beta users around the world, Volograms refined the app's key features ahead of its general release. This included improving reliability, speeding up 3D reconstruction, incorporating an auto-capture timer, and adding new creative effects and algorithms that improve the quality of critical details such as facial features. Additionally, the company is working on more advanced sharing functionality for easier co-creation, as well as access for Android devices.

Features of the app include single-view volumetric capture, which enables 3D reconstruction from a single mobile camera viewpoint. Automatic foreground segmentation eliminates the need for a green screen and allows the app to be used in uncontrolled environments, even outdoors. Markerless pose estimation recovers a 3D skeleton to correctly capture human movement without additional equipment or sensors.
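For readers curious what automatic foreground segmentation looks like in practice, here is a minimal sketch that separates a person from an arbitrary background in a single camera frame using the open-source MediaPipe selfie-segmentation model. This is only an illustrative approximation of the general technique, not Volograms' actual pipeline; the model choice, threshold value, and file names are assumptions.

```python
# Illustrative sketch only: person/background segmentation without a green
# screen, using MediaPipe's selfie-segmentation model. It approximates the
# idea of "automatic foreground segmentation"; it is not Volograms' pipeline.
import cv2
import numpy as np
import mediapipe as mp

def extract_foreground(image_bgr, threshold=0.6):
    """Return the input frame with the background blanked out.

    `threshold` (an assumed value) controls how confident the model must be
    that a pixel belongs to the person before that pixel is kept.
    """
    segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    result = segmenter.process(rgb)                       # run the segmentation network
    mask = result.segmentation_mask > threshold           # boolean person mask (H, W)
    foreground = np.where(mask[..., None], image_bgr, 0)  # zero out background pixels
    return foreground

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")  # hypothetical input frame from a phone camera
    cv2.imwrite("foreground.png", extract_foreground(frame))
```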

The app also offers compatibility with advanced sensors, including LiDAR, to provide depth-based perspective correction and generate more accurate results; full cloud-based processing with keyframe-based sequence encoding and compression; and integration with machine learning tools to move processing onto the device and potentially enable the switch to real-time streaming over 5G.
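As a rough illustration of what keyframe-based sequence encoding means for volumetric video, the toy sketch below stores a full copy of the mesh vertex positions every few frames and only quantized deltas in between, which compresses well because motion between consecutive frames is small. The keyframe interval, quantization step, and data shapes are assumptions for illustration, not Volograms' actual codec.

```python
# Toy sketch of keyframe-based encoding for a sequence of mesh vertex
# positions (each frame has shape [num_vertices, 3]). Every `interval`-th
# frame is stored in full; the rest are stored as quantized deltas.
# Parameters are assumed values, not Volograms' actual codec.
import numpy as np

def encode(frames, interval=10, step=1e-3):
    encoded, prev = [], None
    for i, frame in enumerate(frames):
        frame = frame.astype(np.float32)
        if i % interval == 0:
            encoded.append(("key", frame))                # full keyframe
            prev = frame
        else:
            delta = np.round((frame - prev) / step).astype(np.int16)  # quantized residual
            encoded.append(("delta", delta))
            prev = prev + delta.astype(np.float32) * step  # mirror the decoder to avoid drift
    return encoded

def decode(encoded, step=1e-3):
    frames, prev = [], None
    for kind, payload in encoded:
        if kind == "key":
            prev = payload.astype(np.float32)
        else:
            prev = prev + payload.astype(np.float32) * step  # undo quantization
        frames.append(prev)
    return frames

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.standard_normal((1000, 3)).astype(np.float32)
    seq = [base + 0.01 * t for t in range(30)]   # slowly moving vertices
    round_trip = decode(encode(seq))
    print(max(np.abs(a - b).max() for a, b in zip(seq, round_trip)))  # small quantization error
```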

