In the first part, we took a look at how an algorithm identifies keypoints in camera frames. These form the basis for tracking & recognizing the environment.
For Augmented Reality, the device has to know more: its 3D position in the world. It calculates this through the spatial relationship between itself and multiple keypoints. This process is called “Simultaneous Localization and Mapping” – SLAM for short.
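To make this step more tangible, here is a minimal sketch in Python with OpenCV of how a relative camera pose can be recovered from matched keypoints across two frames. This is not the actual ARCore / ARKit / Mixed Reality implementation; the frame file names and the intrinsics matrix K are placeholder assumptions (K normally comes from camera calibration).

```python
import cv2
import numpy as np

# Placeholder inputs: two consecutive camera frames and an assumed
# camera intrinsics matrix K (normally obtained via calibration).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Detect keypoints and compute descriptors (see part 1 of this series).
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Collect the matched 2D point coordinates.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix and recover the relative camera
# rotation R and (scale-less) translation direction t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Rotation:\n", R, "\nTranslation direction:\n", t)
```

A full SLAM system goes further than this two-frame sketch: it fuses inertial sensor data, triangulates matched keypoints into a persistent 3D map, and continuously refines both the map and the device pose.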
Sensors for Perceiving the World
The high-level view: when you first start an AR app using Google ARCore, Apple ARKit or Microsoft Mixed Reality, the system doesn’t know much about the environment. It starts processing data from various sources – mostly the camera. To improve accuracy, the device combines data from other useful sensors like the accelerometer and the gyroscope.
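To illustrate why this fusion helps: the gyroscope is smooth but drifts over time, while the accelerometer is noisy but provides an absolute gravity reference. A classic textbook way to blend two such signals is a complementary filter. The following toy sketch in Python shows the idea for a single tilt angle; the real platforms use far more sophisticated fusion, so treat this purely as an illustration.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer tilt estimate.

    angle      -- previous tilt estimate in radians
    gyro_rate  -- angular velocity from the gyroscope (rad/s)
    accel_x/z  -- accelerometer readings; together they encode tilt
    dt         -- time since the last sample (s)
    alpha      -- trust in the (drifting) gyro vs. the (noisy) accel
    """
    # Integrate the gyro for a smooth short-term estimate...
    gyro_angle = angle + gyro_rate * dt
    # ...and derive an absolute but noisy estimate from gravity.
    accel_angle = math.atan2(accel_x, accel_z)
    # Blend: mostly gyro, slowly corrected toward the accelerometer.
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```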
Creating apps that work well with Augmented Reality requires some background knowledge of the image processing algorithms that work behind the scenes. One of the most fundamental concepts involves anchors. These rely on keypoints and their descriptors, detected in the recording of the real world.
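As a rough illustration of how such an anchor can be re-recognized, here is a hedged Python / OpenCV sketch using the free ORB detector. The platforms' actual detectors and descriptors are proprietary; the file names and the match-distance threshold below are placeholders.

```python
import cv2

# Placeholder inputs: the frame in which the anchor was created,
# and a new camera frame in which we look for it again.
anchor_img = cv2.imread("anchor.png", cv2.IMREAD_GRAYSCALE)
new_frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
# Store the anchor's keypoints and descriptors once...
anchor_kp, anchor_des = orb.detectAndCompute(anchor_img, None)
# ...then search for them again in every new frame.
frame_kp, frame_des = orb.detectAndCompute(new_frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(anchor_des, frame_des)
# Rough heuristic threshold: few good matches would mean the
# anchored spot is currently not visible in the frame.
good = [m for m in matches if m.distance < 40]
print(f"{len(good)} of {len(matches)} matches look reliable")
```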
You’ll learn about the latest trends for developers – including artificial intelligence, progressive web apps and more. Of course, Mixed Reality is also on the agenda!
As a Microsoft MVP for Windows Development, I’ll be covering the Mixed Reality session. You’ll see live demos of getting started with both the new VR headsets and the Microsoft HoloLens. 150 attendees have signed up – so it’ll certainly be a great event!
In addition, Microsoft has released amazing hands-on labs for everyone to follow up and dive deeper into the content presented at the sessions. The Mixed Reality Lab includes controllers, spatial sound and spatial mapping. It’s a great way to get started with some of the most exciting features of MR. Check it out!
In the previous blog posts, we’ve used a simple grayscale threshold to define the model surface for visualizing an MRI / CT / ultrasound scan in 3D. In many cases, you need more control over the 3D model generation, e.g., to only visualize the brain, a tumor or a specific part of the scan.
So far, we’ve created a volume rendering of an MRI / CT / ultrasound scan, which is based on voxels. For 3D printing and highly performant visualization in AR / VR scenarios, we need to create and export a polygon-based model. As a first step, we will use the Grayscale Model Maker and export the 3D model as .stl to further prepare the model.
To create a 3D model, we have two main options in 3D Slicer:
Grayscale Model Maker: directly uses grayscale values from the image data. A threshold defines the surfaces. The model maker also takes care of smoothing the surfaces and reducing the polygon count (see the scripted sketch after this list).
Model Maker: this requires labels or discrete data to build a 3D model, meaning you have to segment the image data.
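If you prefer scripting the first option, a minimal sketch for 3D Slicer’s Python console could look like the following. The node name, threshold value and output path are placeholders, and the parameter names should be double-checked against the Grayscale Model Maker module in your Slicer version.

```python
import slicer

# Grab the loaded scan (adapt the node name to your dataset).
volumeNode = slicer.util.getNode("MRHead")

# Output model node that will hold the generated surface.
modelNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLModelNode", "GrayscaleModel")

# Run the Grayscale Model Maker CLI module: the threshold defines
# the surface; smoothing and decimation reduce noise / polygon count.
params = {
    "InputVolume": volumeNode.GetID(),
    "OutputGeometry": modelNode.GetID(),
    "Threshold": 100.0,  # placeholder grayscale threshold
    "Smooth": 15,
    "Decimate": 0.25,
}
slicer.cli.run(slicer.modules.grayscalemodelmaker, None, params,
               wait_for_completion=True)

# Export the polygon model as .stl for Unity or 3D printing.
slicer.util.saveNode(modelNode, "/tmp/model.stl")
```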
After importing the MRI / CT / Ultrasound data into 3D Slicer in part 1, we’re ready for the first 3D visualization inside the medical software through 3D Volume Rendering. This is an important step to ultimately export the 3D model to Unity for visualization through Google ARCore or Microsoft HoloLens, or for 3D printing.
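For reference, this step can also be scripted in Slicer’s Python console instead of using the GUI. A minimal sketch, where the volume node name is a placeholder for your imported scan:

```python
import slicer

# The scan imported in part 1 (adapt the node name to your data).
volumeNode = slicer.util.getNode("MRHead")

# Create default volume rendering display nodes and show them
# in the 3D view.
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)
```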
Some of the best showcases of Mixed Reality / VR / AR include 3D visualizations of MRI (magnetic resonance imaging), CT (computed tomography) or ultrasound scans. 3D brings tremendous advantages for analyzing the scanned images compared to only viewing 2D slices. Additionally, a good visualization is valuable for patients, who gain a better understanding of their condition when they can easily explore their own body.
Update 13. November 2017: The latest source code of the Mixed Reality Toolkit now combines support for both HoloLens and the Mixed Reality headsets into a single toolkit that works with one Unity version: 2017.2.0p1 MRTP 4. This is a special fork of Unity optimized for the “Mixed Reality Toolkit Preview”. A later version of Unity will hopefully combine all environments into a single release again. Read more about the environment setup in the GitHub pull request.
Update 19. October 2017: In the meantime, Unity 2017.2 final has been released, and the dev branch of the Mixed Reality Toolkit has been merged back into master. You should now be fine using the following versions for HoloLens development: Unity 2017.2.0f3+, Mixed Reality Toolkit (master branch), Visual Studio 2017.4+, Windows 10.0.15063.0 SDK.
Original Article: Lately, the tools required for HoloLens / Mixed Reality development have been undergoing profound changes. All three tools involved in building HoloLens apps are being restructured:
Unity 2017 unifies Virtual / Augmented Reality APIs, making them flexible enough to target all platforms (e.g., phones with ARKit / ARCore, VR, AR). This also involves new and renamed APIs.
HoloToolkit has been renamed to Mixed Reality Toolkit, as Microsoft expands its scope to include the new VR headsets with inside-out tracking that go on sale this fall.
Visual Studio 2017.3 also introduced some major changes under the hood. In parallel, the C# scripting engine used in Unity is slowly being migrated from the old Mono runtime toward support for more recent C# versions.
When working on Unity HoloLens projects in teams, merge conflicts in Unity scenes are sometimes unavoidable. Even though the Unity scene file format is text-based, the automatic merge of a standard Git merge tool doesn’t always correctly recognize the changes from different versions.
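One common mitigation: Unity ships a scene-aware merge driver, UnityYAMLMerge (also known as Smart Merge), which can be registered as a Git merge tool. A sketch of the .gitconfig entry follows; the tool path depends on your Unity version and install location.

```
[merge]
	tool = unityyamlmerge

[mergetool "unityyamlmerge"]
	trustExitCode = false
	cmd = 'C:\Program Files\Unity\Editor\Data\Tools\UnityYAMLMerge.exe' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"
```

With this in place, running git mergetool on a conflicted .unity scene lets UnityYAMLMerge resolve the changes semantically instead of line by line.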