EmpathyBytes VIP

VR Photogrammetry Museum

Spring 2023

This was the project where I improved the most so far. My main takeaway from it was the immense amount of familiarity I gained with many of Blender’s tools compared to where I was at the start. I had started learning Blender by following a couple of tutorials over Christmas break and creating a wreath based on my neighbor’s as a small personal project. The other massive takeaway from this project was my experimentation with many photogrammetry apps and workflows, which opened up possibilities for other exploratory projects, like a large room-scale digital twin of Georgia Tech’s Klaus Atrium, or a digital twin of myself to use as a weird VR avatar. I experimented with Apple’s Object Capture API, Polycam, Reality Scan, Reality Capture, and lastly Luma AI, which I discovered near the end of the semester. It was exciting how much better it was than everything I had tried before because of its use of neural radiance fields (NeRF).

At the very beginning of the semester, the Unity GitHub repo had been left in a poor state. There were problems with the .gitignore file, many of the assets created in previous semesters had not been uploaded properly, and a couple of things in the VR implementation had to be fixed. We fixed these issues and got the project working in VR, both on PC and standalone on the Quest 2, and this became the base we built upon throughout the semester. I specifically fixed a lot of the VR functionality, such as making the play space track to the floor instead of the player’s head, so that the player’s height doesn’t affect the floor level.

Throughout the semester, my main focus wasn’t on the Unity museum itself but on creating models for it to actually grow its content, as I felt that was the area with the most potential for improvement. I introduced photogrammetry to the project, as all previous models had been manually created in Blender. Photogrammetry seemed to be an improvement in most ways: it took less time and expertise per model, so I could produce more high-quality models faster. Overall, by working with the Georgia Tech archivists, I was able to make 10 different highly realistic scanned models.

Halfway through the semester, I decided to take a break from making models and recorded a 40-minute tutorial on the photogrammetry workflow I had been constantly improving, since other members of the Blender team wanted to learn how to do it and I had picked up too much over the semester to explain thoroughly in person. I edited the video, uploaded it to YouTube, linked the many helpful videos I used to learn my workflow, and explained the things I had figured out myself that I found most useful. Unfortunately, I don’t think anyone else was able to make photogrammetry models, as the software I was using was locked to Apple devices, though I did a lot of research trying to find something that worked for them. Hopefully, team members in future semesters will find the tutorial helpful.
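
For future members who do have access to a Mac, Apple’s Object Capture API (one of the tools I experimented with) can be driven from a short command-line tool. Below is a minimal Swift sketch of that using RealityKit’s PhotogrammetrySession; the folder paths and the medium detail level are placeholders, not the exact settings from my workflow.

```swift
import Foundation
import RealityKit

// Minimal Object Capture reconstruction sketch (macOS 12+, RealityKit).
// Paths and detail level below are placeholders, not my actual settings.
let imagesFolder = URL(fileURLWithPath: "/path/to/captured-photos", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/artifact.usdz")

do {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .unordered    // photos don't need to be in capture order
    configuration.featureSensitivity = .normal   // raise to .high for low-texture objects

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)

    // Watch the session's output stream for progress, errors, and completion.
    Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fraction):
                print("Progress: \(Int(fraction * 100))%")
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            case .processingComplete:
                print("Model written to \(outputModel.path)")
                exit(0)
            default:
                break
            }
        }
    }

    // Request a single textured .usdz mesh at medium detail.
    try session.process(requests: [
        .modelFile(url: outputModel, detail: .medium)
    ])
} catch {
    print("Could not start reconstruction: \(error)")
    exit(1)
}

// Keep the command-line tool alive while the session works in the background.
RunLoop.main.run()
```

This is only a sketch of the one request type I cared about; the API also supports bounds, pose, and point-cloud requests, and higher detail levels produce denser meshes at the cost of longer processing.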

Near the end of the semester, I wanted to increase the challenge, so I scanned three articles of clothing on a mannequin. These models were also the first ones I tested with NeRF through the app Luma AI. This worked much better than any other method: it produced much cleaner edges on the models, didn’t glitch out on reflective surfaces, and generated meshes whose poly count I didn’t have to manually reduce. I’m excited to work with it more in the future.

https://github.com/EmpathyBytes/VR-Archive