Categories: Android App Development, AR / VR, Digital Healthcare

Real-Time Light Estimation with Google ARCore

ARCore has an excellent feature – light estimation. The ARCore SDK estimates the global lighting, which you can use as input for your own shaders to make the virtual objects fit in better with the captured real world. In this article, I’m taking a closer look at how the light estimation works in the current ARCore preview SDK.

Note: this article is based on the ARCore developer preview 1. Some details changed in the developer preview 2, although the general process is still similar.
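To give an idea of how the estimate reaches your own shaders, here is a minimal sketch that forwards the value to a global shader property each frame. It assumes the preview 1 SDK's Frame.LightEstimate.PixelIntensity property and the _GlobalLightEstimation variable read by the SDK's example shaders; treat the exact names as assumptions, since they may differ in later previews.

```csharp
using GoogleARCore;
using UnityEngine;

// Forwards ARCore's per-frame global light estimate to a shader property.
// Custom shaders can read _GlobalLightEstimation and multiply it into
// their final color so virtual objects match the real-world brightness.
public class LightEstimationForwarder : MonoBehaviour
{
    void Update()
    {
        // PixelIntensity is a single averaged brightness value derived
        // from the current camera image.
        Shader.SetGlobalFloat("_GlobalLightEstimation",
            Frame.LightEstimate.PixelIntensity);
    }
}
```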

Categories: Android App Development, AR / VR

Getting Started with Google ARCore, Part 2: Visualizing Planes & Placing Objects

Following the basic project setup of the first part of this article, we now get to the fascinating details of the ARCore SDK. Learn how to find and visualize planes. Additionally, I’ll show how to instantiate objects and how to anchor them to the real world using Unity.

Finding Planes with ARCore

The ARCore example contains a simple script that visualizes planes and point clouds and places the Android mascot. We'll create a shorter version of the script here.
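As a preview of where we're heading, here is a condensed sketch of the plane-finding part, based on the preview 1 API (Frame.GetNewPlanes and the example's TrackedPlaneVisualizer component); these names are tied to that preview and changed in later SDK versions.

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

// Minimal plane finder: each frame, asks ARCore for planes detected
// since the last frame and spawns a visualizer for every new one.
public class PlaneFinder : MonoBehaviour
{
    // Prefab containing the example's TrackedPlaneVisualizer component.
    public GameObject planeVisualizerPrefab;

    private readonly List<TrackedPlane> _newPlanes = new List<TrackedPlane>();

    void Update()
    {
        // Only query planes while ARCore is actively tracking.
        if (Frame.TrackingState != FrameTrackingState.Tracking)
            return;

        // Fills the list with planes ARCore found in the current frame.
        Frame.GetNewPlanes(ref _newPlanes);
        foreach (var plane in _newPlanes)
        {
            var visualizer = Instantiate(planeVisualizerPrefab,
                Vector3.zero, Quaternion.identity, transform);
            visualizer.GetComponent<TrackedPlaneVisualizer>()
                .SetTrackedPlane(plane);
        }
    }
}
```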

Categories: Android App Development, AR / VR

Getting Started with Google ARCore, Part 1: Project Setup & ARCore SDK

ARCore by Google is still in preview and only runs on a select few phones, including the Google Pixel 2. In this article, I'm creating a demo app for ARCore using the ARCore SDK for Unity (Preview 1).

It follows up on the blog post series where I segmented a 3D model of the brain from an MRI image. Instead of repeating those steps, you can download the final model used in this article for free from Google Poly.

ARCore vs Tango

Previously, Google's AR efforts were focused on the Tango platform, which included additional hardware depth sensors for accurate recognition of the environment. Unfortunately, only two commercially available phones are equipped with the necessary hardware to run Tango: the Asus ZenFone AR and the Lenovo Phab 2 Pro.

Categories: Android App Development, AR / VR

Showing a 360° Photo in Google Daydream VR based on Unity, Part 2

In the first part of the article, we captured a 360° photo using a Samsung Gear 360 camera. Now, we'll create a new Unity project for Android. Using the right shader and material, we can assign the cylindrical projection to a Skybox. This makes for a perfect 360° photo viewer in Unity, which can then easily be deployed to a Google Daydream / Cardboard VR headset!

Loading the 360° Photo in Unity

The Skybox in Unity is the easiest way to show a 360° photo in VR. Note that 360° 2D and 3D video will be supported out-of-the-box in the upcoming Unity 2017.3 release, according to the current Unity roadmap.

If you need further pointers for setting up a 360° panorama as a Skybox, the guides from SimplyVR and Tales from the Rift are extremely helpful. The instructions below outline all the steps you need to create your own 360° photo viewer in Unity!
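As a rough sketch of where those steps lead: the photo ends up on a skybox material, which is assigned as the scene's Skybox. The snippet below assumes Unity's built-in "Skybox/Panoramic" shader, which ships with newer Unity versions (the guides above use a comparable custom shader), and a 360° texture already imported into the project.

```csharp
using UnityEngine;

// Assigns an equirectangular 360° photo as the scene's Skybox at runtime.
public class SkyboxPhotoLoader : MonoBehaviour
{
    // The imported 360° photo, assigned in the Inspector.
    public Texture2D panorama;

    void Start()
    {
        // Shader.Find only works if the shader is included in the build;
        // in the editor, the built-in panoramic skybox shader is available.
        var material = new Material(Shader.Find("Skybox/Panoramic"));
        material.SetTexture("_MainTex", panorama);
        RenderSettings.skybox = material;
    }
}
```

In practice, you'll usually create the material once in the editor instead of at runtime; the script simply makes the individual steps explicit.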

Categories: Android App Development, AR / VR

Showing a 360° Photo in Google Daydream VR based on Unity, Part 1

Capturing a 360° photo / video and viewing it in VR is one of the most immersive use cases. Because the user is placed inside a capture of the real world, the virtual experience is as life-like as possible.

In this article, I will show how to capture a 360° photo using the new Samsung Gear 360 camera (2017 version), then load the photo into a Unity project for Google Daydream / Cardboard to view it in VR on an Android phone (in this case, the Google Pixel 2).

Categories: AR / VR, HoloLens, Image Processing

Capturing a 3D Point Cloud with Intel RealSense and Converting to a Mesh with MeshLab

When dealing with Augmented and Virtual Reality, one of the most important tasks is capturing real objects and creating 3D models of them. In this guide, I will demonstrate a quick method using the Intel RealSense camera to capture a point cloud. Next, I'll convert the point cloud to a mesh using MeshLab. This mesh can then be exported to an STL file for 3D printing. Another option is visualization in 3D for AR / VR, where I'll also cover how to preserve the vertex coloring when transferring the original point cloud to Unity.

Categories: 3D Printing, AR / VR, Digital Healthcare

3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain

Is there a better way to 3D print segmented medical data coming from MRI / CT / Ultrasound? In this part, we try splitting it into two halves.

In the first part of this article, the result was that the support structures required by a standard 3D printer significantly reduce the details present on the surface of the printed body part.

Christoph Braun had the idea for another method that reduces the support structures to a minimum: by splitting the object into two halves, each half gets a flat surface that can serve as the base for the 3D print.

Importing and Scaling the STL Model

For processing the 3D object, we'll use OpenSCAD, "The Programmers Solid 3D CAD Modeler". It's a free, open-source tool aimed at developers, with the advantage that the processing steps can easily be automated through scripts.
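To give a feel for that scripted workflow, here is a minimal OpenSCAD sketch for this step; the file name and scale factor are placeholders for your own exported model.

```openscad
// Import the STL exported from 3D Slicer and scale it to the target
// print size. Adjust "brain.stl" and the factor to your own model
// and printer build volume.
scale([0.5, 0.5, 0.5])
    import("brain.stl");
```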

Categories: 3D Printing, AR / VR, Digital Healthcare

3D Printing MRI / CT / Ultrasound Data, Part 1: Support Material

One of the most interesting application areas for the 4-part tutorial where we segmented the brain from an MRI image is printing such 3D models. In that sense, it makes no difference whether the data comes from an MRI (e.g., a brain or tumor), a CT (e.g., the skull) or ultrasound. In this article, we'll look at how to prepare the 3D model for 3D printing.

In the preparation phase, we segmented the model from the original DICOM medical data using 3D Slicer. Afterwards, we reduced the level of detail using the built-in tools in Windows 10.

In this part, we print the MRI brain model in plastic using the Witbox 2 3D printer and deal with support structures. The aim is to make this process accessible for everyone: you don't need specialized and expensive software & hardware; instead, we'll use free and open-source tools as much as possible.

Special thanks to Christoph Braun from the FH St. Pölten, who is the resident 3D printing expert and prepared the steps to produce the amazing results!

Categories: 3D Printing, AR / VR, Digital Healthcare, HoloLens, Image Processing, Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain

In the previous blog posts, we’ve used a simple grayscale threshold to define the model surface for visualizing an MRI / CT / Ultrasound in 3D. In many cases, you need to have more control over the 3D model generation, e.g., to only visualize the brain, a tumor, or a specific part of the scan.

In this blog post, I'll demonstrate how to segment the brain from an MRI image; but the same method can be used for any segmentation. For example, you can also build a model of the skull based on a CT scan by following the steps below.

Categories: 3D Printing, AR / VR, Digital Healthcare, HoloLens, Image Processing, Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 3: 3D Model Maker

So far, we've created a volume rendering of an MRI / CT / Ultrasound scan, which is based on voxels. For 3D printing and high-performance visualization in AR / VR scenarios, we need to create and export a polygon-based model. In the first step, we will use the Grayscale Model Maker and export the 3D model as an .stl file to prepare it further.

To create a 3D model, we have two main options in 3D Slicer:

  • Grayscale Model Maker: directly uses grayscale values from the image data. A threshold defines the surfaces. The model maker also takes care of smoothing the surfaces and reducing the polygon count.
  • Model Maker: this requires labels or discrete data to build a 3D model, meaning you have to segment the image data.

As a first step, we will use the Grayscale Model Maker, and later explore the more advanced options offered by segmentation and the Model Maker.