Using Natural Language Understanding, Part 2: Node.js Backend & User Interface

User interface of our Vital Signs Checklist app, which uses the LUIS Language Understanding service from Microsoft

The vision: automatic checklists, filled out by simply listening to users explaining what they observe. The sample app is based on a lightweight architecture: HTML5, Node.js and the LUIS service in the cloud.

Such an app would be incredibly useful in a hospital, where nurses need to perform and log countless vital sign checks with patients every day.

In part 1 of this article, I explained the overall architecture of the service. In this part, we get hands-on and start implementing the Node.js-based backend. It will ultimately handle all the central messaging: it communicates both with the client user interface running in a browser and with the Microsoft LUIS language understanding service in the Azure cloud.

Creating the Node Backend

Node.js is a great fit for such a service. It’s easy to set up and uses JavaScript for development. The code runs locally during development, allowing rapid testing, and it’s easy to deploy to a dedicated server or the cloud later.
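
To give an idea of how lean such a backend can be, here is a minimal sketch (not the final code from this article) that sends a recognized sentence to the LUIS REST endpoint using only Node’s built-in https module. The region, app ID and subscription key are placeholders you would replace with your own values.

```javascript
// Minimal sketch: query the LUIS v2.0 REST endpoint with a spoken sentence
// and log the recognized intent and entities. Placeholder values only.
const https = require('https');

const LUIS_REGION = 'westus';               // region where your LUIS app is hosted
const LUIS_APP_ID = '<your-app-id>';        // placeholder
const LUIS_KEY = '<your-subscription-key>'; // placeholder

function analyzeUtterance(utterance) {
  const path = `/luis/v2.0/apps/${LUIS_APP_ID}` +
    `?subscription-key=${LUIS_KEY}&verbose=true&q=${encodeURIComponent(utterance)}`;

  https.get({ host: `${LUIS_REGION}.api.cognitive.microsoft.com`, path }, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      const result = JSON.parse(body);
      // The intent and entities are what the checklist logic later works with.
      console.log(result.topScoringIntent, result.entities);
    });
  }).on('error', (err) => console.error(err));
}

analyzeUtterance('The body temperature of the patient is 36.8 degrees');
```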

I’m using the latest version of Node.js (currently 9.3) and the free Visual Studio Code IDE for editing the script files. Continue reading “Using Natural Language Understanding, Part 2: Node.js Backend & User Interface”

Augmented Reality Christmas Tree with Google ARCore Developer Preview 2 – in 5 Minutes

Christmas Tree with Google ARCore

We don’t have a Christmas tree in our apartment. But in today’s world, this is what Augmented Reality is for, right? Therefore, I decided to create an AR Christmas Tree in 5 minutes. This also gave me an opportunity to check out the new Google ARCore Developer Preview 2.

Christmas Tree 3D Model

First off, you need a 3D model of a Christmas tree. Two of the most accessible sources are Google Poly and Microsoft Remix 3D. Sticking to models created directly by Google and Microsoft, these are the two choices:

Christmas Tree by Poly by Google

Continue reading “Augmented Reality Christmas Tree with Google ARCore Developer Preview 2 – in 5 Minutes”

Using Natural Language Understanding, Part 1: Introduction & Architecture

Vital Signs Checklist Architecture - 5

During the last few years, cognitive services have become immensely powerful. Natural language understanding is especially interesting: using the latest tools, training the computer to understand real spoken sentences and to extract information takes a matter of minutes. We as humans no longer need to learn how to speak with a computer; it simply understands us.

I’ll show how to use the Language Understanding Cognitive Service (LUIS) from Microsoft. The aim is to build an automated checklist for nurses working at hospitals. Every morning, they record the vital signs of every patient. At the same time, they document the measurements on paper checklists.

With the new app developed in this article, the process is much easier. While checking the vital signs, nurses usually talk to the patients about their assessments. The “Vital Signs Checklist” app picks out the relevant data (e.g., the temperature or the pupillary response) and marks it in a checklist. Nurses no longer have to pick up a pen to manually record the information.

The Final Result: Vital Signs Checklist

In this article, we’ll create a simple app that uses the natural language understanding APIs (“LUIS”) of the Microsoft Cognitive Services on Microsoft Azure. The service extracts the relevant data from freely spoken assessments.
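
To give a feel for the data the app works with, here is a hypothetical, shortened example of what a LUIS JSON response could look like for such an assessment. The intent and entity names are made up for illustration; in practice they depend entirely on how you design your LUIS model.

```javascript
// Hypothetical, shortened LUIS response for a spoken vital sign assessment.
// The intent and entity names are illustrative and depend on your LUIS model.
const exampleResponse = {
  query: 'the body temperature is 36.8 degrees',
  topScoringIntent: { intent: 'RecordVitalSign', score: 0.97 },
  entities: [
    { entity: '36.8 degrees', type: 'Temperature', startIndex: 24, endIndex: 35, score: 0.92 }
  ]
};

// The app maps such an entity to the matching checklist field, e.g.:
// checklist.temperature = parseFloat(exampleResponse.entities[0].entity);
```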

LUIS just went from preview to general availability. This important milestone brings SLAs and availability in more regions worldwide. So, it’s a great time to start using it! Continue reading “Using Natural Language Understanding, Part 1: Introduction & Architecture”

Real-Time Light Estimation with Google ARCore

ARCore: Light Estimation is an average of the overall image luminosity

ARCore has a great feature – light estimation. The ARCore SDK estimates the global lighting, which you can use as input for your own shaders to make the virtual objects fit in better with the captured real world. In this article, I’m taking a closer look at how the light estimation works in the current ARCore preview SDK.

Note: this article is based on the ARCore developer preview 1. Some details changed in developer preview 2 – although the general process is still similar. Continue reading “Real-Time Light Estimation with Google ARCore”

Getting Started with Google ARCore, Part 2: Visualizing Planes & Placing Objects

Models of brains (segmented from an MRI) placed in the real world using Google ARCore

Following the basic project setup of the first part of this article, we now get to the fascinating details of the ARCore SDK. Learn how to find and visualize planes. Additionally, I’ll show how to instantiate objects and how to anchor them to the real world using Unity.

Finding Planes with ARCore

The ARCore example contains a simple script to visualize planes and point clouds, and to place the Android mascot. We’ll create a shorter version of the script here. Continue reading “Getting Started with Google ARCore, Part 2: Visualizing Planes & Placing Objects”

Getting Started with Google ARCore, Part 1: Project Setup & ARCore SDK

ARCore - Plane Detection running on the Google Pixel 2

ARCore by Google is still in preview and only runs on a select few phones, including the Google Pixel 2. In this article, I’m creating a demo app for ARCore using the ARCore SDK for Unity (Preview 1).

This article follows up on the blog post series where I segmented a 3D model of the brain from an MRI image. Instead of following those steps, you can download the final model used in this article for free from Google Poly.

ARCore vs Tango

Previously, the AR efforts of Google were focused on the Tango platform. It included additional hardware depth sensors for accurate recognition of the environment. Unfortunately, only two commercially available phones are equipped with the necessary hardware to run Tango – the Asus ZenFone AR and the Lenovo Phab 2 Pro. Continue reading “Getting Started with Google ARCore, Part 1: Project Setup & ARCore SDK”

Showing a 360° Photo in Google Daydream VR based on Unity, Part 2

360° Photo in the Google Daydream VR (2017) headset, Unity app running on the Google Pixel 2

In the first part of the article, we captured a 360° photo using a Samsung Gear 360 camera. Now, we’ll create a new Unity project for Android. Using the right shader and material, we can assign the cylindrical projection to a Skybox. This gives us a perfect 360° photo viewer for Unity, which can then easily be deployed to a Google Daydream / Cardboard VR headset!

Loading the 360° Photo in Unity

The Skybox in Unity is the easiest way to show a 360° photo in VR. Note that 360° 2D and 3D video will be supported out-of-the-box in the upcoming Unity 2017.3 release, according to the current Unity roadmap.

For setting up a 360° panorama as a Skybox, the following guides are very helpful if you need further pointers: SimplyVR, Tales from the Rift. The instructions below outline all the necessary steps you need to create your own 360° photo viewer in Unity! Continue reading “Showing a 360° Photo in Google Daydream VR based on Unity, Part 2”

Showing a 360° Photo in Google Daydream VR based on Unity, Part 1

Stitched 360° Photo taken with the Samsung Gear 360 in full resolution

Capturing a 360° photo / video and viewing it in VR is one of the most immersive use cases. Because the user is placed inside a capture of the real world, the virtual experience is as life-like as possible.

In this article, I will show how to capture a 360° photo using the new Samsung Gear 360 camera (2017 version), then load the photo into a Unity project for Google Daydream / Cardboard to view it in VR on an Android phone (in this case, the Google Pixel 2). Continue reading “Showing a 360° Photo in Google Daydream VR based on Unity, Part 1”

Capturing a 3D Point Cloud with Intel RealSense and Converting to a Mesh with MeshLab

Depth and Color images from the RealSense Viewer

When dealing with Augmented and Virtual Reality, one of the most important tasks is capturing real objects and creating 3D models out of them. In this guide, I will demonstrate a quick method using the Intel RealSense camera to capture a point cloud. Next, I’ll convert the point cloud to a mesh using MeshLab. This mesh can then be exported to an STL file for 3D printing. Another option is visualization in 3D for AR / VR, where I’ll also cover how to preserve the vertex coloring when transferring the original point cloud to Unity. Continue reading “Capturing a 3D Point Cloud with Intel RealSense and Converting to a Mesh with MeshLab”

3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain

Combined brain halves, 3D printed without support structures

Can we improve the 3D print of segmented medical data coming from MRI / CT / Ultrasound by splitting it into two halves?

The first part of this article showed that the support structures required by a standard 3D printer significantly reduce the detail present on the surface of the printed body part.

Christoph Braun had the idea for another method that reduces the support structures to a minimum: by splitting the object into two halves, each half gets a flat surface that can be used as the base for the 3D print.

Importing and Scaling the STL Model

For processing the 3D object, we’ll use OpenSCAD – The Programmers Solid 3D CAD Modeller. It’s a free, open-source tool aimed at developers, with the advantage that the processing steps can easily be automated. Continue reading “3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain”