Authors

Presenter(s)

Sankarshan Dasgupta

Comments

Presentation: 9:00-10:15, Kennedy Union Ballroom

Files

Download Project (1.0 MB)

Description

In recent years, 3D reconstruction has garnered significant attention, driven by the ever-growing demand for immersive experiences and realistic digital environments. With the development of advanced algorithms and substantial processing resources, it is now possible to convert static 2D images into dynamic, navigable 3D spaces. This study explores the nexus between computer vision and spatial cognition, with an emphasis on tackling the difficulties involved in precisely converting the subtleties of a 2D car front-view image into an extensive multi-view 3D representation. The proposed approach examines the way we perceive and interact with digital imagery from inside a car, especially in contexts where depth perspective and spatial awareness are critical. By advancing the synthesis of 3D representations from 2D images, we aim to elevate the capabilities of computer vision systems, enabling them to provide more immersive, realistic, and contextually accurate virtual environments. The goal of this project is to transform how we view and engage with 3D environments. Our research goes beyond a basic application of computer vision and artificial intelligence: by establishing seamless communication between the in-car driver and the virtual world, we aim to blend the experience so that it replicates a real-world scenario. The approach transforms the street view into an interactive 3D experience that mimics conventional in-car experiences. It is defined in the game engine as specific sub-tasks: (i) extract the shape and texture of the scene objects; (ii) implement advanced computer vision algorithms to extract depth information from 2D images, utilizing a depth-sensing mechanism for accurate depth measurement; (iii) develop a real-time rendering engine to create a 3D representation of the scene, ensuring the algorithm is computationally efficient for on-the-fly processing, given the real-time computational constraints of the in-car experience.
To ensure resilience in identifying and reacting to a variety of driving circumstances, it is important to handle real-time processing constraints for immediate responsiveness, as well as variations in surrounding factors that may affect the accuracy of depth sensing and scene reconstruction.
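The core step connecting the sub-tasks above, turning per-pixel depth extracted from a 2D image into a 3D representation, can be sketched as back-projection through a pinhole camera model. The snippet below is a minimal illustration with assumed toy intrinsics (fx, fy, cx, cy) and a synthetic depth map, not the project's actual implementation:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Convert a per-pixel depth map to a 3D point cloud using the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Returns an (H*W, 3) array of XYZ points in camera coordinates."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map (meters) and hypothetical camera intrinsics.
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
points = backproject_depth(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
# points[0] is the 3D point for pixel (u=0, v=0) at depth 1.0.
```

A rendering engine would then triangulate or splat these points each frame; keeping this step vectorized, as above, is one way to stay within the real-time budget the abstract mentions.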

Publication Date

4-17-2024

Project Designation

Graduate Research

Primary Advisor

Ju Shen

Primary Advisor's Department

Computer Science

Keywords

Stander Symposium, College of Arts and Sciences

Dimensional Vision Synthesis: An Aesthetic Transformation of 2D Views into Dynamic 3D Realities for In-Car Experience
