Using Natural Language Understanding, Part 1: Introduction & Architecture

Vital Signs Checklist Architecture

Over the last few years, cognitive services have become immensely powerful. Natural language understanding is especially interesting: with the latest tools, training a computer to understand real spoken sentences and to extract information from them is a matter of minutes. We as humans no longer need to learn how to speak with a computer; it simply understands us.

I’ll show how to use Microsoft’s Language Understanding Cognitive Service (LUIS). The aim is to build an automated checklist for nurses working in hospitals. Every morning, they record the vital signs of every patient. At the same time, they document the measurements on paper checklists.

With the new app developed in this article, the process is much easier. While checking the vital signs, nurses usually talk to the patients about their assessments. The “Vital Signs Checklist” app extracts the relevant data (e.g., the temperature or the pupillary response) and marks it in a checklist. Nurses no longer have to pick up a pen to manually record the information.

The Final Result: Vital Signs Checklist

In this article, we’ll create a simple app that uses the natural language understanding APIs (“LUIS”) of the Microsoft Cognitive Services on Microsoft Azure. The service extracts the relevant data from freely spoken assessments.
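As a rough sketch of what such a query could look like, the snippet below calls the LUIS v2.0 REST endpoint with a spoken utterance and prints the raw JSON response, which contains the top-scoring intent and the recognized entities. The region, app ID, subscription key and the example utterance are placeholders; the actual intent and entity names depend on how the LUIS model is defined later in this series.

```csharp
// Hypothetical sketch: query a LUIS app over REST and print the JSON result.
// Region, app ID and key are placeholders for your own LUIS deployment.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisQuerySketch
{
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        const string appId = "<your-luis-app-id>";
        const string key = "<your-subscription-key>";
        const string utterance = "The patient's temperature is 38.5 degrees";

        var url = $"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                  $"?subscription-key={key}&verbose=true&q={Uri.EscapeDataString(utterance)}";

        // The response JSON contains "topScoringIntent" and an "entities" array,
        // e.g., a temperature entity extracted from the sentence above.
        var json = await Http.GetStringAsync(url);
        Console.WriteLine(json);
    }
}
```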

LUIS has just moved from preview to general availability. This important milestone brings SLAs and additional availability regions worldwide. So, it’s a great time to start using it!

Real-Time Light Estimation with Google ARCore

ARCore: Light Estimation is an average of the overall image luminosity

ARCore has a great feature – light estimation. The ARCore SDK estimates the global lighting, which you can use as input for your own shaders to make the virtual objects fit in better with the captured real world. In this article, I’m taking a closer look at how the light estimation works in the current ARCore preview SDK.

Note: this article is based on the ARCore developer preview 1. Some details changed in developer preview 2 – although the general process is still similar.
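As a minimal sketch of the idea, the script below mirrors what the environmental-light behaviour in the ARCore Unity SDK preview does: it reads the per-frame pixel intensity estimate and exposes it as a global shader property. The property name _GlobalLightEstimation follows the preview SDK’s sample shaders; both it and the exact API may differ in later SDK versions.

```csharp
// Sketch based on the ARCore Unity SDK developer preview: forward ARCore's
// per-frame light estimate to all shaders that sample _GlobalLightEstimation.
using GoogleARCore;
using UnityEngine;

public class LightEstimationFeeder : MonoBehaviour
{
    void Update()
    {
        // ARCore provides a single intensity value derived from the average
        // luminosity of the current camera image.
        float intensity = Frame.LightEstimate.PixelIntensity;

        // Custom shaders can multiply their albedo with this global value
        // so virtual objects brighten and darken with the real scene.
        Shader.SetGlobalFloat("_GlobalLightEstimation", intensity);
    }
}
```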

Getting Started with Google ARCore, Part 2: Visualizing Planes & Placing Objects

Models of Brains (segmented from an MRI) placed in the real world using Google ARCore

Following the basic project setup from the first part of this article, we now get to the fascinating details of the ARCore SDK. Learn how to find and visualize planes. Additionally, I’ll show how to instantiate objects and anchor them to the real world using Unity.

Finding Planes with ARCore

The ARCore example contains a simple script to visualize planes and point clouds, and to place the Android mascot. We’ll create a shorter version of the script here.
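To give a first impression of where this is heading, here is a condensed sketch loosely following the HelloAR example from the ARCore SDK for Unity preview: collect the newly detected planes each frame and, on a tap, raycast against them and parent the instantiated object to an anchor. The API names follow the preview SDK and may change; the prefab and camera fields are placeholders.

```csharp
// Condensed sketch of plane detection and object placement, modeled on the
// HelloAR sample of the ARCore SDK for Unity (developer preview).
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class PlacementController : MonoBehaviour
{
    public Camera FirstPersonCamera;   // the ARCore-driven camera
    public GameObject PlanePrefab;     // visualizes a detected plane
    public GameObject ModelPrefab;     // e.g., the segmented brain model

    private readonly List<TrackedPlane> newPlanes = new List<TrackedPlane>();

    void Update()
    {
        // 1. Visualize planes that ARCore detected since the last frame.
        Frame.GetNewPlanes(ref newPlanes);
        foreach (var plane in newPlanes)
        {
            var planeObject = Instantiate(PlanePrefab, Vector3.zero, Quaternion.identity, transform);
            // A visualizer script on the prefab would keep its mesh in sync with 'plane'.
        }

        // 2. On a tap, raycast against the detected planes and anchor the model there.
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            TrackableHit hit;
            var filter = TrackableHitFlag.PlaneWithinBounds | TrackableHitFlag.PlaneWithinPolygon;
            if (Session.Raycast(FirstPersonCamera.ScreenPointToRay(Input.GetTouch(0).position), filter, out hit))
            {
                var model = Instantiate(ModelPrefab, hit.Point, Quaternion.identity);
                var anchor = Session.CreateAnchor(hit.Point, Quaternion.identity);
                model.transform.parent = anchor.transform;   // keeps the model fixed in the real world
            }
        }
    }
}
```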

Getting Started with Google ARCore, Part 1: Project Setup & ARCore SDK

ARCore - Plane Detection running on the Google Pixel 2

ARCore by Google is still in preview and runs only on a select few phones, including the Google Pixel 2. In this article, I’m creating a demo app for ARCore using the ARCore SDK for Unity (Preview 1).

It follows up on the blog post series where I segmented a 3D model of the brain from an MRI image. Instead of repeating those steps, you can download the final model used in this article for free from Google Poly.

ARCore vs Tango

Previously, Google’s AR efforts were focused on the Tango platform, which included additional hardware depth sensors for accurate recognition of the environment. Unfortunately, only two commercially available phones are equipped with the necessary hardware to run Tango – the Asus ZenFone AR and the Lenovo Phab 2 Pro.

Showing a 360° Photo in Google Daydream VR based on Unity, Part 2

360° Photo in the Google Daydream VR (2017) headset, Unity app running on the Google Pixel 2

In the first part of the article, we captured a 360° photo using a Samsung Gear 360 camera. Now, we’ll create a new Unity project for Android. Using the right shader and material, we can assign the cylindrical projection to a Skybox. This makes for a perfect 360° photo viewer in Unity, which can then easily be deployed to a Google Daydream / Cardboard VR headset!

Loading the 360° Photo in Unity

The Skybox in Unity is the easiest way to show a 360° photo in VR. Note that 360° 2D and 3D video will be supported out-of-the-box in the upcoming Unity 2017.3 release, according to the current Unity roadmap.

If you need further pointers for setting up a 360° panorama as a Skybox, the guides by SimplyVR and Tales from the Rift are very helpful. The instructions below outline all the steps you need to create your own 360° photo viewer in Unity!
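As a small illustration of the Skybox approach, the script below assigns a prepared skybox material (with the 360° photo as its texture and a suitable panoramic shader) to the scene at runtime. The material and field names are assumptions; the actual shader choice and texture import settings are covered in the steps of the article.

```csharp
// Minimal sketch: assign a material containing the 360° photo as the scene's
// Skybox. The material itself (texture + panoramic shader) is set up in the editor.
using UnityEngine;

public class PanoramaSkybox : MonoBehaviour
{
    public Material SkyboxMaterial;   // material with the stitched 360° photo

    void Start()
    {
        RenderSettings.skybox = SkyboxMaterial;
        DynamicGI.UpdateEnvironment();   // refresh ambient lighting from the new skybox
    }
}
```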

Showing a 360° Photo in Google Daydream VR based on Unity, Part 1

Stitched 360° Photo taken with the Samsung Gear 360 in full resolution

Capturing a 360° photo / video and viewing it in VR is one of the most immersive use cases. Because the user is placed inside a capture of the real world, the virtual experience is as life-like as possible.

In this article, I will show how to capture a 360° photo using the new Samsung Gear 360 camera (2017 version), then load the photo into a Unity project for Google Daydream / Cardboard to view it in VR on an Android phone (in this case, the Google Pixel 2).

Capturing a 3D Point Cloud with Intel RealSense and Converting to a Mesh with MeshLab

Depth and Color images from the RealSense Viewer

When dealing with Augmented and Virtual Reality, one of the most important tasks is capturing real objects and turning them into 3D models. In this guide, I will demonstrate a quick method of capturing a point cloud with the Intel RealSense camera. Next, I’ll convert the point cloud to a mesh using MeshLab. This mesh can then be exported to an STL file for 3D printing. Another option is 3D visualization for AR / VR, where I’ll also cover how to preserve the vertex coloring when transferring the original point cloud to Unity.
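As a hedged sketch of the Unity side, the helper below builds a mesh from point positions and per-vertex colors and renders the vertices as points; displaying the colors additionally requires a shader that reads vertex colors. The class and method names are placeholders of my own, not part of the RealSense or MeshLab tooling.

```csharp
// Sketch: build a Unity mesh from captured points while keeping their colors.
// A vertex-color shader (e.g., a simple unlit one) is needed to display them.
// Note: clouds with more than ~65k vertices additionally need the 32-bit
// index format introduced in Unity 2017.3.
using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class PointCloudMeshBuilder : MonoBehaviour
{
    public void Build(Vector3[] points, Color[] colors)
    {
        var mesh = new Mesh();
        mesh.vertices = points;
        mesh.colors = colors;   // per-vertex colors from the original capture

        // Render every vertex as a point instead of building triangles.
        var indices = new int[points.Length];
        for (int i = 0; i < indices.Length; i++)
        {
            indices[i] = i;
        }
        mesh.SetIndices(indices, MeshTopology.Points, 0);

        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```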

3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain

Combined brain halves, 3D printed without support structures

Is there another way to 3D print segmented medical data coming from MRI / CT / Ultrasound – for example, by splitting it into two halves?

In the first part of this article, the result was that the support structures required by a standard 3D printer significantly reduce the details present on the surface of the printed body part.

Christoph Braun had the idea for another method to reduce the support structures to a minimum: splitting the object into two halves gives each half a flat surface that can be used as the base for the 3D print.

Importing and Scaling the STL Model

For processing the 3D object, we’ll use OpenSCAD – The Programmers Solid 3D CAD Modeller. It’s a free, open-source tool aimed more at developers, with the advantage that its processes can easily be automated.
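To give a rough idea of how this automation looks, the OpenSCAD snippet below imports the segmented STL, scales it to printing size and keeps only one half by intersecting it with a large cube, so the flat cut can rest directly on the print bed. The file name, scale factor and cube dimensions are placeholder values, not the exact ones used later in the article.

```scad
// Hedged sketch: import, scale and cut the model in half along the z axis.
// File name and all dimensions are placeholders.
scale([0.5, 0.5, 0.5])
    intersection() {
        import("brain.stl");
        // Large cube covering only z >= 0, i.e., the upper half of the model
        translate([-200, -200, 0])
            cube([400, 400, 400]);
    }
```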

3D Printing MRI / CT / Ultrasound Data, Part 1: Support Material

Support material for the 3D printed brain in Cura

Following the 4-part tutorial where we segmented the brain from an MRI image, one of the most interesting application areas is 3D printing such models. In that sense, it makes no difference whether the data comes from an MRI (e.g., a brain or a tumor), a CT (e.g., the skull) or ultrasound. In this article, we’ll look at how to prepare the 3D model for 3D printing.

In the preparation phase, we segmented the model from the original DICOM medical data using 3D Slicer. Afterwards, we reduced the level of detail using the built-in tools in Windows 10.

In this part, we print the MRI brain model in plastic on the Witbox 2 3D printer and deal with support structures. The aim is to make this process accessible to everyone – you don’t need specialized and expensive software or hardware; instead, we’ll use free and open-source tools as much as possible.

Special thanks to Christoph Braun from the FH St. Pölten, who is the resident 3D printing expert and prepared the steps to produce the amazing results!

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain

3D Builder: show 3D model of brain segmented from MRI / MRT image

In the previous blog posts, we used a simple grayscale threshold to define the model surface for visualizing an MRI / CT / Ultrasound scan in 3D. In many cases, you need more control over the 3D model generation, e.g., to visualize only the brain, a tumor or a specific part of the scan.

In this blog post, I’ll demonstrate how to segment the brain from an MRI image, but the same method can be used for any segmentation. For example, you can also build a model of the skull based on a CT scan by following the steps below.