Categories: AR / VR, HoloLens, Image Processing

Basics of AR: SLAM – Simultaneous Localization and Mapping

In the first part, we took a look at how an algorithm identifies keypoints in camera frames. These form the basis for tracking and recognizing the environment.

For Augmented Reality, the device has to know more: its 3D position in the world. It calculates this through the spatial relationship between itself and multiple keypoints. This process is called “Simultaneous Localization and Mapping” – SLAM for short.

Sensors for Perceiving the World

The high-level view: when you first start an AR app using Google ARCore, Apple ARKit or Microsoft Mixed Reality, the system doesn’t know much about the environment. It starts processing data from various sources – mostly the camera. To improve accuracy, the device combines data from other useful sensors like the accelerometer and the gyroscope.
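To get an intuition for this sensor fusion, consider a toy complementary filter: the gyroscope is precise over short intervals but drifts, while the accelerometer is noisy but provides an absolute gravity reference. The sketch below blends the two into a single pitch estimate; the function name, weights and sample values are purely illustrative, and real AR frameworks use far more sophisticated filters.

```python
# Toy complementary filter: fuse gyroscope and accelerometer readings
# into a single pitch estimate. All names and values are illustrative.
def fuse_pitch(prev_pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    # Integrate the gyro rate (smooth, but drifts over time) and pull
    # the estimate toward the accelerometer's absolute gravity reference.
    return alpha * (prev_pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

pitch = 0.0
for gyro_rate, accel_pitch in [(0.02, 0.10), (0.01, 0.12), (0.00, 0.11)]:
    pitch = fuse_pitch(pitch, gyro_rate, accel_pitch, dt=0.02)
print(pitch)
```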

Based on this data, the algorithm has two aims:

  1. Build a map of the environment
  2. Locate the device within that environment
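
The second aim – locating the device – boils down to estimating the camera pose from correspondences between mapped 3D keypoints and their 2D detections in the current frame. Here's a minimal OpenCV sketch of that step; the point data and camera intrinsics are placeholders, and real SLAM systems additionally fuse IMU data and refine the map continuously.

```python
# Minimal pose estimation from 2D-3D correspondences with OpenCV.
# The correspondences and camera intrinsics below are placeholder data.
import numpy as np
import cv2

object_points = np.random.rand(20, 3).astype(np.float32)  # mapped 3D keypoints
image_points = np.random.rand(20, 2).astype(np.float32)   # their pixel positions in this frame
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0,   0,   1]], dtype=np.float32)  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
# rvec / tvec describe the camera's rotation and translation relative to
# the map - i.e., the device pose that the AR framework updates every frame.
```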
Categories: AR / VR, HoloLens, Image Processing

Basics of AR: Anchors, Keypoints & Feature Detection

Creating apps that work well with Augmented Reality requires some background knowledge of the image processing algorithms that work behind the scenes. One of the most fundamental concepts involves anchors. These rely on keypoints and their descriptors, detected in the recording of the real world.

Anchor Virtual Objects to the Real World

AR development APIs hide much of the complexity. As a developer, you simply anchor virtual objects to the world. This ensures that the hologram stays glued to the physical location where you put it.
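
To make the keypoint / descriptor idea concrete, here's a small OpenCV sketch that detects ORB keypoints in two frames and matches their descriptors – the kind of processing that lets an AR framework re-recognize a physical spot and keep an anchor attached to it. The file names are placeholders, and production systems use more elaborate pipelines.

```python
# Detect ORB keypoints in two frames and match their binary descriptors.
# The frame file names are placeholders.
import cv2

img1 = cv2.imread("frame_t0.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps
# only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} keypoints re-identified between the two frames")
```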

Categories: Events, HoloLens

Mixed Reality @ Microsoft Insider Dev Tour

On June 20th, the Microsoft Insider Dev Tour will come to Vienna, Austria. It’s a world-wide event series for developers, organized by Microsoft together with Microsoft Developer MVPs.

You’ll learn about the latest trends for developers – including artificial intelligence, progressive web apps and more. Of course, Mixed Reality is also on the agenda!

As a Microsoft MVP for Windows Development, I'll be presenting the Mixed Reality session. You'll see live demos of getting started with both the VR headsets and the Microsoft HoloLens. 150 attendees have signed up – so it'll certainly be a great event!

In addition, Microsoft has released amazing hands-on labs for everyone to follow up and dive deeper into the content presented at the sessions. The Mixed Reality Lab covers motion controllers, spatial sound and spatial mapping. It's a great way to get started with some of the most exciting features of MR. Check it out!

Categories: AR / VR, HoloLens, Image Processing

Capturing a 3D Point Cloud with Intel RealSense and Converting to a Mesh with MeshLab

When dealing with Augmented and Virtual Reality, one of the most important tasks is capturing real objects and creating 3D models out of them. In this guide, I will demonstrate a quick method using the Intel RealSense camera to capture a point cloud. Next, I'll convert the point cloud to a mesh using MeshLab. This mesh can then be exported to an STL file for 3D printing. Another option is visualization in 3D for AR / VR, where I'll also cover how to preserve the vertex coloring of the original point cloud when transferring it to Unity.
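
For readers who prefer scripting the MeshLab part, here's a rough pymeshlab sketch of the same pipeline: load a point cloud, estimate normals, run screened Poisson reconstruction and export an STL. The filter names follow recent pymeshlab releases and may differ in older versions; the file names are placeholders.

```python
# Point cloud -> mesh -> STL, scripted with pymeshlab (filter names as in
# recent pymeshlab releases; file names are placeholders).
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("realsense_cloud.ply")
ms.compute_normal_for_point_clouds()                    # Poisson needs per-point normals
ms.generate_surface_reconstruction_screened_poisson()   # reconstruct a triangle mesh
ms.save_current_mesh("scanned_object.stl")              # ready for 3D printing
```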

Categories: 3D Printing, AR / VR, Digital Healthcare, HoloLens, Image Processing, Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain

In the previous blog posts, we’ve used a simple grayscale threshold to define the model surface for visualizing an MRI / CT / Ultrasound in 3D. In many cases, you need to have more control over the 3D model generation, e.g., to only visualize the brain, a tumor, or a specific part of the scan.

In this blog post, I'll demonstrate how to segment the brain in an MRI image, but the same method can be used for any segmentation task. For example, you can also build a model of the skull based on a CT scan by following the steps below.
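
As a rough illustration of what such a segmentation does under the hood, the sketch below applies a grayscale threshold to a single MRI slice with scikit-image, removes thin connections to the skull and keeps the largest connected region. The threshold value and file name are assumptions; the interactive Slicer tools are far more capable.

```python
# Simple threshold-based segmentation of one MRI slice with scikit-image.
# Threshold and file name are assumptions for illustration.
import numpy as np
from skimage import io, measure, morphology

slice_img = io.imread("mri_slice.png", as_gray=True)
mask = slice_img > 0.35                                      # grayscale threshold
mask = morphology.binary_opening(mask, morphology.disk(3))   # cut thin links to the skull
labels = measure.label(mask)                                 # connected components

# Keep only the largest connected region, which is typically the brain.
largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
brain_mask = labels == largest
```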

Categories: 3D Printing, AR / VR, Digital Healthcare, HoloLens, Image Processing, Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 3: 3D Model Maker

So far, we’ve created a volume rendering of an MRI / CT / Ultrasound scan. This is based on Voxels. For 3D printing and highly performant visualization in AR / VR scenarios, we need to create and export a polygon-based model. For the first step, we will use the Grayscale Model Maker and export the 3D Model as .stl to further prepare the model.

To create a 3D model, we have two main options in 3D Slicer:

  • Grayscale Model Maker: directly uses grayscale values from the image data. A threshold defines the surfaces. The model maker also takes care of smoothing the surfaces and reducing the polygon count.
  • Model Maker: this requires labels or discrete data to build a 3D model, meaning you have to segment the image data.

As a first step, we will use the Grayscale Model Maker, and later explore the more advanced options offered by segmentation and the Model Maker.
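
Under the hood, the Grayscale Model Maker essentially extracts an isosurface at the chosen threshold. As a back-of-the-envelope illustration, here's how the same idea looks with marching cubes in Python; the volume file, threshold level and use of trimesh for the STL export are assumptions, and Slicer additionally smooths and decimates the result.

```python
# Isosurface extraction at a grayscale threshold, similar in spirit to the
# Grayscale Model Maker. File name and threshold are assumptions.
import nibabel as nib
import trimesh
from skimage import measure

volume = nib.load("ct_scan.nii.gz").get_fdata()
verts, faces, normals, values = measure.marching_cubes(volume, level=300)

mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("model.stl")   # polygon model, ready for further preparation
```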

Categories: 3D Printing, AR / VR, Digital Healthcare, HoloLens, Image Processing, Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 2: 3D Volume Rendering

After importing the MRI / CT / Ultrasound data into 3D Slicer in part 1, we're ready for the first 3D visualization inside the medical software through 3D Volume Rendering. This is a major step toward exporting the 3D model to Unity for visualization through Google ARCore or Microsoft HoloLens, or for 3D printing.

Slices in 3D View

After optimizing brightness and contrast of the image data, the easiest way of showing the data in 3D is to visualize the three visible slices in the 3D view (planes: axial / top / red; sagittal / side / yellow; coronal / frontal / green). This gives a good overview of the position of the slices and their relation to each other.
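
If you want to reproduce this view outside of Slicer, a few lines of matplotlib display the three orthogonal middle slices of a volume; the file name is a placeholder and the axis order depends on your scan's orientation.

```python
# Show the middle axial / sagittal / coronal slices of a volume.
# File name is a placeholder; axis order depends on the scan orientation.
import matplotlib.pyplot as plt
import nibabel as nib

vol = nib.load("mri_scan.nii.gz").get_fdata()
x, y, z = (s // 2 for s in vol.shape)   # middle slice along each axis

fig, axes = plt.subplots(1, 3)
for ax, img, title in zip(axes,
                          [vol[:, :, z], vol[x, :, :], vol[:, y, :]],
                          ["axial", "sagittal", "coronal"]):
    ax.imshow(img.T, cmap="gray", origin="lower")
    ax.set_title(title)
plt.show()
```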

Categories: 3D Printing, AR / VR, Digital Healthcare, HoloLens, Image Processing, Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 1: Importing Data

Some of the best showcases of Mixed Reality / VR / AR include 3D visualizations of MRI (magnetic resonance imaging), CT (computed tomography) or ultrasound scans. 3D brings tremendous advantages for analyzing the scanned images compared to only viewing 2D slices. Additionally, a good visualization is valuable for patients, who can gain a better understanding of the data when they can easily explore their own body.

As part of the 3D information visualization lecture at the FH St. Pölten, I’m giving an overview of the process of converting an MRI / CT / ultrasound scan into a hologram that you can view on the Microsoft HoloLens or with Google ARCore. This blog post series explains the hands-on parts, so that you can easily re-create the same results using freely available tools.
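
As a taste of what the import step does, the following sketch reads a DICOM series into a 3D NumPy volume with pydicom – the programmatic counterpart of Slicer's DICOM import. The directory path is a placeholder.

```python
# Read a DICOM series into a 3D volume with pydicom (path is a placeholder).
from pathlib import Path
import numpy as np
import pydicom

slices = [pydicom.dcmread(f) for f in Path("dicom_series").glob("*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # stack in scan order
volume = np.stack([s.pixel_array for s in slices])
print(volume.shape)   # (slices, rows, columns)
```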

Categories: App Development, AR / VR, Digital Healthcare, HoloLens

How to Combine the Mixed Reality Toolkit, Unity 2017 and Visual Studio 2017

Update, December 20, 2017: A new version of the Mixed Reality Toolkit is now out as an official release. It's recommended to use it together with Unity 2017.2.1f1.

Update, November 13, 2017: The latest source code of the Mixed Reality Toolkit now combines support for both the HoloLens and Mixed Reality headsets into a single toolkit that works with one Unity version: 2017.2.0p1 MRTP 4. This is a special fork of Unity that is optimized for the “Mixed Reality Toolkit Preview”. A later version of Unity will hopefully combine all environments into a single release again. Read more about the environment setup in the GitHub pull request.

Update, October 19, 2017: In the meantime, Unity 2017.2 final has been released, and the dev branch of the Mixed Reality Toolkit has been merged back into master. You should now be fine using the following versions for HoloLens development: Unity 2017.2.0f3+, Mixed Reality Toolkit (master branch), Visual Studio 2017.4+, Windows 10.0.15063.0 SDK.

Original Article: Lately, the tools required for HoloLens / Mixed Reality development have been undergoing profound changes. All three tools involved in building HoloLens apps are being restructured:

  • Unity 2017 unifies Virtual / Augmented Reality APIs, making them flexible enough to target all platforms (e.g., phones with ARKit / ARCore, VR, AR). This also involves new and renamed APIs.
  • HoloToolkit has been renamed to Mixed Reality Toolkit, as Microsoft expands the scope to include the new VR headsets with inside-out tracking going on sale this fall.
  • Visual Studio 2017.3 also introduced some major changes under the hood. At the same time, the C# engine used in Unity is slowly being migrated from the old Mono runtime to more recent versions of C#.

With the latest Unity 2017.2.0b11 release, everything should now be coming together. In this blog post, I’m describing how to use the latest versions of the tools for creating and deploying a HoloLens app.

Categories: AR / VR, HoloLens

Resolving Unity Scene Merge Conflicts with UnityYAMLMerge (Smart Merge) and TortoiseGit

When working on Unity HoloLens projects in teams, merge conflicts in Unity scenes are sometimes unavoidable. Even though the Unity scene file format is text-based, the automatic merge of a standard Git merge tool doesn't always correctly recognize the changes from different versions.

Luckily, Unity comes with a merge tool specialized for scene files: UnityYAMLMerge / Smart Merge. However, it's not straightforward to integrate into a workflow.
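
As a pointer to where that integration starts, Unity's documentation suggests registering UnityYAMLMerge as a Git merge tool roughly like this in your .gitconfig; the executable path varies with your Unity installation.

```
[merge]
    tool = unityyamlmerge

[mergetool "unityyamlmerge"]
    trustExitCode = false
    cmd = 'C:\Program Files\Unity\Editor\Data\Tools\UnityYAMLMerge.exe' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"
```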