Case Studies

Technology Vision and Solutions

Check out our case studies to learn about our clients’ successes:

Quick Links:

Markerless Motion Tracking Facial Reconstruction
First 360 VR Video Player: 210+ Frames per Second
Intel MR. Configurator Tool: Vive Optimization
Applying Optical Engineering to a Motion Graphic Comic: Wild Blue Yonder
Honda Concept Car: AR Visualization
Self-led Virtual Reality Therapy Ecosystem

Markerless Motion Tracking Facial Reconstruction

December 2015
https://www.youtube.com/watch?v=Rc5BBel8nmk

Problem: create software that can track a human's facial movements and translate them onto an animated character. The client specified that the software needed to run on mobile at a minimum of 15fps and recommended a proprietary 48-point identification methodology for each face, in accordance with the standard for facial tracking software at the time. The intention was to use the animated character software as an MVP for a platform that would allow the client's technology team to perform face swaps between stunt doubles and actors in feature films.

Investigation:

  • How many identification points are required for the animation to show realistic emotions? – What is the lowest number of points and facial expressions that still provides the most realistic solution?
  • How do we balance accurate emotional expression against hardware restrictions? – How do we make a product that is more realistic than the competition but still works on lower-end devices and is easier to maintain?
  • Given a mobile environment, how much of an effect would lighting have on facial tracking? – How do we provide a solution that works in all lighting conditions, regardless of the exterior environment, without training?

Results: Underminer found research suggesting that, with 75 facial identification points, emotions can be replicated well enough to pass a Turing test. However, using 75 points would push file sizes to roughly 1.5GB – far too large for a mobile device to run. Optimization testing and analysis would be needed to determine not only the number of facial identification points but also their locations, in order to accurately depict a large range of emotions in various environments.

Solution: the final delivery was facial tracking software that used 56 facial identification points, allowing 156 blend shapes (aka "emotions") to be reflected – up from the 62 blend shapes otherwise available at the time – meaning that "microemotions", such as a tensed jaw or a furrowed brow, could now be detected. Files were roughly 420MB, down from 1.5GB, and ran at 90fps, up from the 15fps specified, allowing Android and iOS users (iPhone 5 and later) to run the software. Furthermore, ULSee's build was 36% more accurate in very bright light and 43% more accurate in dim lighting than competing products.
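
The case study does not describe the implementation, but mapping tracked identification points to blend-shape weights is commonly posed as a least-squares fit. Below is a minimal sketch of that idea using the 56-point / 156-shape figures above; the function names and data are hypothetical, not Underminer's actual code.

```python
import numpy as np

# Hypothetical dimensions matching the case study: 56 tracked facial
# identification points (x, y each) and 156 blend shapes ("emotions").
NUM_POINTS, NUM_SHAPES = 56, 156

def solve_blend_weights(neutral, deltas, observed):
    """Fit blend-shape weights so the blended face matches the tracked points.

    neutral:  (NUM_POINTS * 2,) landmark positions of the neutral face
    deltas:   (NUM_SHAPES, NUM_POINTS * 2) landmark displacement per shape
    observed: (NUM_POINTS * 2,) landmarks reported by the tracker this frame
    """
    # observed ~ neutral + deltas.T @ w  ->  least-squares solve for w,
    # then clamp to [0, 1] since blend-shape weights are bounded.
    w, *_ = np.linalg.lstsq(deltas.T, observed - neutral, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Toy usage with random data standing in for real tracker output.
rng = np.random.default_rng(0)
neutral = rng.normal(size=NUM_POINTS * 2)
deltas = rng.normal(size=(NUM_SHAPES, NUM_POINTS * 2))
observed = neutral + deltas.T @ rng.random(NUM_SHAPES)
print(solve_blend_weights(neutral, deltas, observed).shape)  # (156,)
```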

Ultimately, Underminer's MVP was accepted. ULSee's technology team and Underminer then collaborated to build out the final product, which is now used for facial mapping between stunt doubles and actors in Taiwanese feature films. Underminer also worked with the actors and stunt doubles who would be the platform's subjects to create a training framework for its use.

First 360 VR Video Player: 210+ Frames per Second

May 2016

Problem: create a VR video player, with an easy-to-use interface, that could run smoothly on all major head-mounted displays (HMDs) on the market. At a time when VR content typically ran at 60fps, the hardware manufacturer required 90fps for desktop and 60fps for console/mobile.

Investigation: Underminer focused on three major areas during the investigation phase:

  • Given the tight timeframe, what resources could be leveraged to maximize the quality of the prototype?
    • For open source code, what licensing rules needed to be followed?
  • What were other, non-VR decoders doing?
  • What technical specifications needed to be met to achieve 90fps on all HMDs?

Results: Underminer determined that each of the major HMDs on the market had different technical specifications, and that three separate video players would therefore be needed: one for mobile, one for Vive/Rift, and one for PlayStation VR. Each player would require independent analysis of how to optimize downloading from servers, and each playback function included caching, buffering, and decoding (converting multiple input file types into a single output file type). Furthermore, a number of licensing concerns specific to using open source platforms would need to be addressed.
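
The write-up does not detail the players' internals, but the caching / buffering / decoding split mentioned above is often implemented as a decode-ahead producer-consumer buffer. A minimal sketch, with all names hypothetical:

```python
import queue
import threading

class FrameBuffer:
    """Decode-ahead buffer: a decoder thread fills a bounded queue (the
    cache) while the renderer drains it at display rate."""

    def __init__(self, decode_fn, capacity=8):
        self.frames = queue.Queue(maxsize=capacity)  # bounded => backpressure
        self.decode_fn = decode_fn  # yields decoded frames from input packets
        self.done = False
        threading.Thread(target=self._fill, daemon=True).start()

    def _fill(self):
        for frame in self.decode_fn():
            self.frames.put(frame)  # blocks while the buffer is full
        self.done = True

    def next_frame(self, timeout=0.1):
        try:
            return self.frames.get(timeout=timeout)
        except queue.Empty:
            return None  # a stall: the renderer repeats the last frame

# Toy usage: a fake decoder that yields frame numbers.
buf = FrameBuffer(lambda: iter(range(100)))
shown = 0
while not (buf.done and buf.frames.empty()):
    if buf.next_frame() is not None:
        shown += 1  # a real player would render the frame here
print(shown)  # 100
```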

Solution: no other video players had taken the technical route Underminer designed; each HMD's VR video player was therefore a wholly new development effort. When the initial design proved to have slow download and playback speeds, Underminer took a new path and built decoding functions with unprecedented speed. The final build is limited only by driver speed (roughly 210fps with GPUs of the time) but has a maximum theoretical speed of 1,100fps. This build not only satisfied the technical requirements, but did so in a way that allows hardware and software to upgrade significantly before any changes to the executable are required.

Intel MR. Configurator Tool: Vive Optimization

October 2016

Problem: make the setup and breakdown of mixed reality (MR) demos at trade shows faster and easier.

Investigation:

  • What steps are generally involved in the setup and breakdown of an MR demo?
  • What processes or calculations cause the most difficulty?

Results: a typical MR demo setup took roughly an hour. The major pain points were configuring the camera relative to the user and the objects in the physical environment, and defining the user's field of view.

Solution: Underminer created a configuration methodology that reduced configuration time by 95%. A steampunk-style introductory experience walks users through a series of movements in a fun and engaging way, all while completing the configuration of the environment on the back end by creating a composite image from the camera and virtual reality headset inputs. The final result shows the user both the horizontal and vertical fields of view, whereas most MR experiences show only the horizontal field of view.
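
The compositing step is not detailed in the case study; one common approach for trade-show MR is to layer the rendered virtual scene behind and in front of a chroma-keyed camera feed. A minimal sketch, assuming RGB NumPy arrays and a green-screen setup (the key color and tolerance are hypothetical):

```python
import numpy as np

def chroma_key_mask(camera_rgb, key=(0, 255, 0), tol=80.0):
    """Boolean mask of pixels close to the key color (the green screen)."""
    diff = np.linalg.norm(camera_rgb.astype(float) - np.asarray(key, float), axis=-1)
    return diff < tol  # True where the backdrop shows through

def composite(virtual_bg, camera_rgb, virtual_fg, fg_mask):
    """Layer order: virtual background < live camera subject < virtual foreground."""
    out = camera_rgb.copy()
    backdrop = chroma_key_mask(camera_rgb)
    out[backdrop] = virtual_bg[backdrop]  # virtual scene replaces the green screen
    out[fg_mask] = virtual_fg[fg_mask]    # virtual occluders render in front of the user
    return out

# Toy usage: a 4x4 feed that is entirely green screen.
h, w = 4, 4
cam = np.zeros((h, w, 3), np.uint8)
cam[..., 1] = 255
frame = composite(np.full((h, w, 3), 50, np.uint8), cam,
                  np.zeros((h, w, 3), np.uint8), np.zeros((h, w), bool))
print(frame[0, 0])  # [50 50 50]: the virtual scene shows through
```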

Breakout Article for the MR. Configurator

Applying Optical Engineering to a Motion Graphic Comic: Wild Blue Yonder

March 2015

Problem: create an engaging augmented reality experience using the Intel RealSense camera in the style of a motion graphic comic.

Investigation: in collaboration with an ex-NASA mathematician from Two Bit Circus, Underminer's CTO reverse engineered how human optics process images in relative space and compared that with how cameras perform the same task. The intention was to confirm whether it was possible to develop a visual construct in an AR environment that responds to users and their motions such that users perceive relative space just as they do in real life.

Results: the team identified two elements to build into the AR experience: parallax, the perception that a distant object shifts its relative position in space as the viewer moves, and bounce, the emotional "feel" produced by how tightly the planes in a parallax are mapped to one another; a close mapping may create a sense of urgency, whereas a loose mapping may produce a sense of dreamy calm.
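
Both elements reduce to a small amount of per-frame math: each plane's parallax offset scales the viewer's motion by its depth, and bounce becomes a smoothing coefficient on how tightly the plane chases that target. A minimal sketch (parameter names and values are hypothetical):

```python
def parallax_offset(viewer_x, depth):
    """Nearer planes (small depth) track the viewer closely; distant ones barely move."""
    return viewer_x / (1.0 + depth)

class Plane:
    def __init__(self, depth, bounce):
        self.depth = depth    # distance from the viewer, arbitrary units
        self.bounce = bounce  # 0..1: high = tight mapping (urgent), low = loose (dreamy)
        self.x = 0.0

    def update(self, viewer_x):
        target = parallax_offset(viewer_x, self.depth)
        # Exponential smoothing: "bounce" sets how tightly the plane
        # is mapped to its parallax target each frame.
        self.x += self.bounce * (target - self.x)
        return self.x

# The same viewer motion produces two different "feels".
urgent = Plane(depth=1.0, bounce=0.9)
dreamy = Plane(depth=1.0, bounce=0.1)
for _ in range(3):
    print(round(urgent.update(10.0), 2), round(dreamy.update(10.0), 2))
```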

Solution: the final result contains seven scenes with various parallax paradigms and bounce rates – better seen than explained.

Honda Concept Car: AR Visualization

November 2016

Problem: create a demo of Honda's 2D concept car designs as an augmented reality experience using Tango tablets.

Investigation:

  • How to effectively transform 2D sketches into 3D models?
  • What are the hardware limitations of using Tango tablets?

Results: Underminer worked with the client in an iterative design process to transform the 2D sketches into an interactive 3D model – for example, ensuring the seats in the concept car were sized to securely fit a person, and scaling the objects in the concept car's environment proportionally to the car.
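
The proportional scaling described above boils down to deriving one uniform scale factor from a dimension with a known real-world target and applying it to every object. A toy illustration (all dimensions are hypothetical, not Honda's figures):

```python
# Derive one uniform scale factor from a dimension with a known real-world
# target, then apply it everywhere so the scene stays proportional to the car.
REAL_SEAT_WIDTH_M = 0.55   # hypothetical target: wide enough to seat a person
model_seat_width = 1.8     # the same seat measured in raw model units

scale = REAL_SEAT_WIDTH_M / model_seat_width

def rescale(dimensions):
    """Apply the shared scale factor to an (x, y, z) size tuple."""
    return tuple(round(d * scale, 3) for d in dimensions)

print(rescale((1.8, 1.9, 2.1)))   # the seat itself
print(rescale((15.0, 6.2, 4.4)))  # hypothetical raw bounds of the car body
```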

Solution: the final product was shown at the LA Auto Show.

Self-led Virtual Reality Therapy Ecosystem

September 2016

https://assets-global.htcvive.com/vr_developer_published_assets/app/9eebecc5-e9f7-4e69-8482-b725e83c93d4/video/video_1_v1509096991.mp4

Problem: create an easily accessible program for people to begin treating various mental health conditions in a self-led, self-paced virtual reality environment.

Investigation:

  • What conditions can be safely self-diagnosed and effectively treated in VR?
  • How should users progress through the program?
  • What safeguards need to be in place?
  • What distribution channel should be utilized and what infrastructure, if any, is required?

Results: Underminer partnered with Erik Johnson, a medical professional who had been using VR exposure therapy to treat PTSD in military patients for 10 years, and identified a series of proven methods to incorporate into the final program. Underminer also defined the scope of conditions that can be effectively treated using VR. While government-issued mobile phones were originally identified as the target distribution channel, Intel urged development of a Vive-based demo.

Solution: Underminer opted to pursue a common phobia: fear of heights. The full program was developed for the Vive in six weeks and debuted at the Austin Game Conference. The program contains four gradated levels, each with increased exposure to the stimulus (heights) and increasingly realistic imagery. All levels also include psychoacoustic sounds (tones above and below the range of human hearing that stimulate the neurological system responsible for the fight-or-flight response). A safeguard was put in place in the form of a "Calm World", which allows users to "escape" their fear and enter a safe, calming space within VR.
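
The gradated levels and the "Calm World" safeguard can be pictured as a small state machine: the user advances through the four exposure levels but can escape to the calm space at any time, then resume where they left off. A minimal sketch (all names hypothetical):

```python
LEVELS = ["level_1", "level_2", "level_3", "level_4"]  # increasing exposure
CALM_WORLD = "calm_world"

class TherapySession:
    def __init__(self):
        self.index = 0
        self.state = LEVELS[self.index]

    def advance(self):
        """Move to the next gradated level, or resume after a Calm World visit."""
        if self.state == CALM_WORLD:
            self.state = LEVELS[self.index]  # return to where the user left off
        elif self.index < len(LEVELS) - 1:
            self.index += 1
            self.state = LEVELS[self.index]

    def escape(self):
        """Safeguard: leave the stimulus for the calm space at any time."""
        self.state = CALM_WORLD

session = TherapySession()
session.advance()   # level_2
session.escape()    # calm_world
session.advance()   # back to level_2
print(session.state)
```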

Virtual Engagements was released on Viveport in October 2017.