Categories
App Development Artificial Intelligence Digital Healthcare

Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO

In this last part, we bring the vital signs checklist to life. Artificial Intelligence interprets assessments spoken in natural language, extracts the relevant information, and manages an up-to-date, browser-based checklist. Real-time communication is handled through WebSockets with Socket.IO.
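As a rough sketch of the browser side, the checklist page only needs to listen for events pushed by the backend. The event name “checklistUpdate” and the payload shape below are assumptions for this illustration, not the article’s final code:

```javascript
// Browser-side sketch: the checklist page listens for updates pushed by the server.
// The event name and payload shape are assumptions for illustration purposes.
const socket = io(); // provided by the /socket.io/socket.io.js client script

socket.on('checklistUpdate', (item) => {
  // e.g., item = { name: 'temperature', value: '38.5 °C' }
  const row = document.getElementById(item.name);
  if (row) {
    row.querySelector('.value').textContent = item.value;
    row.classList.add('checked');
  }
});
```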

The example scenario focuses on a vital signs checklist in a hospital. The same concept applies to countless other use cases.

In this article, we’ll query the Microsoft LUIS Language Understanding service from a Node.js backend. The results are communicated to the client through Socket.IO.

Connecting LUIS to Node.js

In the previous article, we verified that our LUIS service works fine. Now, it’s time to connect all components. The aim is to query LUIS from our Node.js backend.
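To give a first impression of what that looks like, here is a minimal sketch that sends an utterance to the LUIS v2 REST endpoint and pushes the result to the browser via Socket.IO. The region, app ID, subscription key, and the event and intent/entity names are placeholders; the real values come from your own LUIS app and client code.

```javascript
// Sketch: query the LUIS v2 REST endpoint from Node.js and forward the result via Socket.IO.
// Region, app ID and subscription key are placeholders from your own LUIS portal.
const https = require('https');
const server = require('http').createServer();
const io = require('socket.io')(server);

function queryLuis(utterance) {
  return new Promise((resolve, reject) => {
    const url =
      'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR_APP_ID' +
      '?subscription-key=YOUR_KEY&q=' + encodeURIComponent(utterance);
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

io.on('connection', (socket) => {
  // The client sends the recognized speech as plain text (assumed event name "assessment").
  socket.on('assessment', async (utterance) => {
    const result = await queryLuis(utterance);
    // result.topScoringIntent, e.g. { intent: 'RecordTemperature', score: 0.97 }
    // result.entities, e.g. [{ entity: '38.5 degrees', type: 'Temperature' }]
    socket.emit('checklistUpdate', result);
  });
});

server.listen(3000);
```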

Categories
App Development Artificial Intelligence Digital Healthcare

Using Natural Language Understanding, Part 3: LUIS Language Understanding Service

Training Artificial Intelligence to perform real-life tasks has traditionally been painful. The latest AI services now offer more accessible user interfaces that require little prior knowledge of machine learning. The Microsoft LUIS service (Language Understanding Intelligent Service) performs an amazing task: interpreting natural language sentences and extracting the relevant parts. You only need to provide 5+ sample sentences per scenario.
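To give an idea of what these sample sentences look like: for a hypothetical “RecordTemperature” scenario, the training utterances could be as simple as the following (the intent name and sentences are made up for illustration):

  • “The temperature is 38.5 degrees”
  • “Her body temperature is normal”
  • “I measured 37 degrees Celsius”
  • “The patient has a slight fever of 38 degrees”
  • “Temperature is at 39.2”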

In this article series, we’re creating a sample app that interprets assessments from vital signs checks in hospitals. It filters out relevant information like the measured temperature or pupillary response. Yet, it’s easy to extend the scenario to any other area.

Language Understanding

After creating the backend service and the client user interface in the first two parts, we now start setting up the actual language understanding service. I’m using the LUIS Language Understanding service from Microsoft, which is part of the Cognitive Services on Microsoft Azure.

Categories
App Development Artificial Intelligence Digital Healthcare

Using Natural Language Understanding, Part 2: Node.js Backend & User Interface

The vision: automatic checklists, filled out by simply listening to users explaining what they observe. The sample app is based on a lightweight architecture: HTML5, Node.js & the LUIS service in the cloud.

Such an app would be incredibly useful in a hospital, where nurses need to perform and log countless vital sign checks with patients every day.

In part 1 of this article series, I explained the overall architecture of the service. In this part, we get hands-on and start implementing the Node.js-based backend. It will ultimately handle all the central messaging: it communicates both with the client user interface running in the browser and with the Microsoft LUIS language understanding service in the Azure cloud.

Creating the Node Backend

Node.js is a great fit for such a service. It’s easy to set up and uses JavaScript for development. The code runs locally during development, which allows rapid testing, and it’s easy to deploy to a dedicated server or the cloud later.

I’m using the latest version of Node.js (currently 9.3) and the free Visual Studio Code IDE for editing the script files.
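As a minimal sketch of such a backend (the folder name and port are assumptions), an Express server with Socket.IO attached is enough to serve the HTML5 client and open the real-time channel:

```javascript
// Minimal backend sketch: Express serves the static client files,
// Socket.IO provides the real-time channel to the browser.
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);

app.use(express.static('public')); // assumed folder containing the HTML5 client

io.on('connection', (socket) => {
  console.log('Client connected');
  // Later: receive spoken assessments here, query LUIS,
  // and emit the results back to the checklist in the browser.
});

server.listen(3000, () => console.log('Server running at http://localhost:3000'));
```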

Categories
App Development Artificial Intelligence Digital Healthcare

Using Natural Language Understanding, Part 1: Introduction & Architecture

Over the last few years, cognitive services have become immensely powerful. Especially interesting is natural language understanding. With the latest tools, training the computer to understand spoken sentences and to extract information takes only a matter of minutes. We as humans no longer need to learn how to speak with a computer; it simply understands us.

I’ll show how to use the Language Understanding Cognitive Service (LUIS) from Microsoft. The aim is to build an automated checklist for nurses working at hospitals. Every morning, they record the vital signs of every patient. At the same time, they document the measurements on paper checklists.

With the new app developed in this article, the process is much easier. While checking the vital signs, nurses usually talk to the patients about their assessments. The “Vital Signs Checklist” app filters out the relevant data (e.g., the temperature or the pupillary response) and marks it in a checklist. Nurses no longer have to pick up a pen to manually record the information.

The Final Result: Vital Signs Checklist

In this article, we’ll create a simple app that uses the natural language understanding APIs (“LUIS”) of the Microsoft Cognitive Services on Microsoft Azure. The service extracts the relevant data from freely spoken assessments.

LUIS just went from preview to general availability. This important milestone brings SLAs and more availability regions worldwide. So, it’s a wonderful time to start using it!

Categories
Android App Development AR / VR Digital Healthcare

Real-Time Light Estimation with Google ARCore

ARCore has an excellent feature – light estimation. The ARCore SDK estimates the global lighting, which you can use as input for your own shaders to make the virtual objects fit in better with the captured real world. In this article, I’m taking a closer look at how the light estimation works in the current ARCore preview SDK.

Note: this article is based on the ARCore developer preview 1. Some details changed in the developer preview 2 – although the generic process is still similar.

Categories
3D Printing AR / VR Digital Healthcare

3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain

Are there other ways to 3D print segmented medical data coming from MRI / CT / Ultrasound? This time, we try splitting the model into two halves.

In the first part of this article, we saw that the support structures required by a standard 3D printer significantly reduce the detail on the surface of the printed body part.

Christoph Braun had the idea for another method to reduce the support structures to a minimum: by splitting the object into two halves, each half gets a flat surface that can be used as the base for the 3D print.

Importing and Scaling the STL Model

For processing the 3D object, we’ll use OpenSCAD – The Programmers Solid 3D CAD Modeler. It’s a free, open-source tool aimed more at developers, with the advantage that the processing steps can easily be automated.

Categories
3D Printing AR / VR Digital Healthcare

3D Printing MRI / CT / Ultrasound Data, Part 1: Support Material

Based on the 4-part tutorial where we segmented the brain from an MRI image, one of the most interesting application areas is printing such 3D models. In that sense, it makes no difference if the data is coming from an MRI (e.g., a brain or tumor), CT (e.g., the skull) or ultrasound. In this article, we’ll look at how to prepare the 3D model for 3D printing.

In the preparation phase, we segmented the model from the original DICOM medical data using 3D Slicer. Afterwards, we reduced the level of detail using the built-in tools in Windows 10.

In this part, we print the MRI brain model in plastic on the Witbox 2 3D printer and deal with support structures. The aim is to make this process accessible for everyone – you don’t need specialized and expensive software & hardware; instead, we’ll use free and open-source tools as much as possible.

Special thanks to Christoph Braun from the FH St. Pölten, who is the resident 3D printing expert and prepared the steps to produce the amazing results!

Categories
3D Printing AR / VR Digital Healthcare HoloLens Image Processing Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain

In the previous blog posts, we’ve used a simple grayscale threshold to define the model surface for visualizing an MRI / CT / Ultrasound in 3D. In many cases, you need to have more control over the 3D model generation, e.g., to only visualize the brain, a tumor, or a specific part of the scan.

In this blog post, I’ll demonstrate how to segment the brain from an MRI image; but the same method can be used for any segmentation. For example, you can also build a model of the skull based on a CT by following the steps below.

Categories
3D Printing AR / VR Digital Healthcare HoloLens Image Processing Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 3: 3D Model Maker

So far, we’ve created a volume rendering of an MRI / CT / Ultrasound scan, which is based on voxels. For 3D printing and highly performant visualization in AR / VR scenarios, we need to create and export a polygon-based model. As a first step, we will use the Grayscale Model Maker and export the 3D model as an .stl file to further prepare it.

To create a 3D model, we have two main options in 3D Slicer:

  • Grayscale Model Maker: directly uses grayscale values from the image data. A threshold defines the surfaces. The model maker also takes care of smoothing the surfaces and reducing the polygon count.
  • Model Maker: this requires labels or discrete data to build a 3D model, meaning you have to segment the image data.

As a first step, we will use the Grayscale Model Maker, and later explore the more advanced options offered by segmentation and the Model Maker.

Categories
3D Printing AR / VR Digital Healthcare HoloLens Image Processing Windows

Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 2: 3D Volume Rendering

After importing the MRI / CT / Ultrasound data into 3D Slicer in part 1, we’re ready for the first 3D visualization inside the medical software through 3D Volume Rendering. This is a major step towards exporting the 3D model to Unity for visualization through Google ARCore or Microsoft HoloLens, or for 3D printing.

Slices in 3D View

After optimizing brightness and contrast of the image data, the easiest way of showing the data in 3D is to visualize the three visible slices (planes: axial / top / red; sagittal / side / yellow; coronal / frontal / green view) in the 3D view. This gives a good overview of the position and the relation of the slices to each other.