
I did something cool!

[Image: the live point cloud stream viewed as an AR hologram]
So I downloaded the package linked in the previous post and followed the steps required to make the application work, and lo and behold: the HOLOGRAM!!!! This is surreal! And like I said, anyone can view my AR point cloud stream on their phone as well! I can't wait to share this with the rest of the class.

Adding AR to our project (research stuff)

We have been going back and forth with our professor about the future of the project, and he pointed us in the direction of streaming live point cloud data from Unity into an AR application. First off, I found these two research articles on point cloud data for mixed reality purposes:

http://papers.cumincad.org/data/works/att/ecaade2018_145.pdf
slideshare.net/TomohiroFukuda/point-cloud-stream-on-spatial-mixed-reality-toward-telepresence-in-architectural-field

Both papers are by the same author and cover almost the same thing: visualizing static point cloud data in the real world. In addition, I referred to Erjun's blog to get a sense of how he got his model to appear in AR. But I decided to do my own research, and I came across this wonderful YouTube channel that uses a range of applications to livestream point cloud data and volumetric video as a FREAKIN HOLOGRAM!!!! It completely blew my mind!! Link:  https://www.youtube.com/user/
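To wrap my head around what "streaming a point cloud" even means at the wire level, here's a minimal sketch of the idea. This is NOT the pipeline from the papers or the video above; it just assumes a toy format (x, y, z as float32 plus r, g, b as bytes) and a hypothetical client address, with each frame split into small UDP packets.

```python
# Minimal sketch: stream point cloud frames over UDP.
# All of this is illustrative -- the host/port, point format, and
# packet size are assumptions, not the setup from the linked work.
import socket
import struct

HOST, PORT = "192.168.1.42", 9000  # hypothetical AR client address
POINT_FMT = "<fffBBB"              # x, y, z (float32) + r, g, b (uint8)
POINTS_PER_PACKET = 90             # ~1.3 KB per datagram, fits a typical MTU

def send_frame(sock, points):
    """Send one frame, split into small UDP packets.

    points: iterable of (x, y, z, r, g, b) tuples.
    """
    buf = []
    for p in points:
        buf.append(struct.pack(POINT_FMT, *p))
        if len(buf) == POINTS_PER_PACKET:
            sock.sendto(b"".join(buf), (HOST, PORT))
            buf.clear()
    if buf:  # flush the final partial packet
        sock.sendto(b"".join(buf), (HOST, PORT))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_frame(sock, [(0.0, 1.0, 2.0, 255, 128, 0)])  # a single orange point
```

The receiving side would unpack these datagrams and rebuild the points each frame. The real systems in the links above obviously do much more (compression, reliability, synchronization), but the core send-a-frame loop is the same idea.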

Outline

Disclaimer: At the time of this writing, I have only looked at research papers concerning the uncanny valley and what type of research has been done. More research is needed on real-time rendering systems and facial animation itself.

Differentiating between keyframed and motion-captured facial movement in realistic 3D characters under varying lighting setups and render systems

Abstract: The uncanny valley is one of the major issues in virtual production, movies, and video games. Even major studios cannot escape this phenomenon; some prominent examples include Sonic, Benjamin Button, Grand Moff Tarkin in Rogue One, and many others. This study aims to analyze the factors that can be used to mitigate the uncanny valley effect in hyperrealistic characters: 1) keyframed facial animation vs. mocap facial capture, 2) different lighting setups (debatable; maybe I should remove it?), and 3) real-time render engines like UE4 vs. traditional render software like Arnold.

Research paper for main thesis #2

This week's research papers:

Paper 1: https://search-proquest-com.ezproxy2.library.drexel.edu/docview/2017207715?pq-origsite=summon
Summary: This is a qualitative research paper that discusses how different body channels convey emotions like happiness, sadness, anger, and so on. It confirms the notion that in a full-body shot, the body alone can convey most of the emotion, and facial expressions are not that necessary.

Paper 2: http://uu.diva-portal.org/smash/get/diva2:537265/FULLTEXT02.pdf
Summary: This is a Bachelor's thesis, but even so, it is what gave me the idea for my potential thesis topic. It talks about creating an original character in an existing universe by researching the universe and thinking about character background, concept art, and so on.

Outline: I don't have the basics yet, and the research papers I linked are not related to what I'm about to propose, but this is definitely something to look into. "Avoiding uncanny valley in 3D r

Long-awaited update...

I haven't posted in a while, so I thought I might hop in and catch everyone up on what's going on. Truth be told, I haven't been feeling motivated this past week, and I feel terrible about it. I feel like Steve is doing the heavy lifting, and I want to help in every way I can, but I just don't feel like I'm doing enough. This project was supposed to be fun, but I feel like the slowest step in a chemical reaction, the one that drags down the whole thing and delays the end result. Enough rambling though. Onto the fun stuff.

While Steve was trying to figure things out in Unity, I decided to give the good ol' UE4 a whirl, and I found this plugin that streams Kinect data into the engine: https://www.opaque.media/kinect-4-unreal It didn't come as a surprise that this version wasn't compatible with the engine version I had. And, surprisingly enough, I ran out of space AGAIN, so I couldn't install the older one. Maybe I'll do it next week. In case

Research paper for thesis

Kinect resources I'm looking at:
https://unitylist.com/browse?search=kinect
https://assetstore.unity.com/packages/3d/characters/kinect-v2-examples-with-ms-sdk-and-nuitrack-sdk-18708
https://structure.io/openni

Research paper 1: https://search-proquest-com.ezproxy2.library.drexel.edu/docview/1865901511?pq-origsite=summon
Abstract: In recent years, many character designs made for movies and video games have been carried out using complex computer-based processes. User-friendly software has made it easier to produce high-computation artwork and multiple texture maps. With higher graphic performance provided by rapidly improving hardware, the continuing demand for innovation poses new requirements for the entertainment industry. Science fiction (sci-fi) in video games and movies has limitless capabilities, and can be created to achieve a wide variety of visual goals. One important arg

Alembic export and hardware limitations

[Screenshot: the CSV point cloud data imported and animated in Blender]
Today I spent most of the day working on the Radiohead video to get a better grasp and understanding of what will and won't work. I linked the process of getting the data into Blender in my previous post, and i successfully managed to get it into Blender and animate it. Here's a screenshot: Note: if you do try attempting this, make sure to use Sverchok Node addon. It's basically visual programming. Now there were 1000 csv files, so I naturally imported all of them, being the dumbass I am. I'm not sure how, but I managed to run out of space on my laptop, so I made every change carefully, because as it turns out, Blender does not like to handle these many csv files, and every change added up to the total file size (even deletion!) Anyway, everything went fine and dandy up to this point. In the midst of all this, I realized that importing alembic cache animations in UE is really easy! So I tried to export this data as an alembic file, hoping that it'd work.