In this last part, we bring the vital signs checklist to life. Artificial Intelligence interprets assessments spoken in natural language, extracts the relevant information, and keeps a browser-based checklist up to date. Real-time communication is handled through WebSockets using Socket.IO.
The example scenario focuses on a vital signs checklist in a hospital. The same concept applies to countless other use cases.
In this article, we’ll query the Microsoft LUIS Language Understanding service from a Node.js backend. The results are communicated to the client through Socket.IO.
Connecting LUIS to Node.js
In the previous article, we verified that our LUIS service works fine. Now, it’s time to connect all components. The aim is to query LUIS from our Node.js backend. Continue reading “Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO”
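As a rough sketch of what such a query can look like (this is not the article’s final code; the app ID, subscription key, and region are placeholders you need to replace with the values of your own LUIS subscription), the backend sends the utterance to the LUIS v2 REST endpoint and receives a JSON result:

```javascript
// Sketch: querying LUIS from Node.js. App ID, key and region are placeholders.
const https = require('https');

const luisAppId = '<your-luis-app-id>';
const luisKey = '<your-subscription-key>';
const luisRegion = 'westus';   // region of your LUIS subscription

function queryLuis(utterance, callback) {
  // LUIS v2 REST endpoint: the utterance is passed as the "q" query parameter.
  const url = `https://${luisRegion}.api.cognitive.microsoft.com/luis/v2.0/apps/${luisAppId}` +
              `?subscription-key=${luisKey}&q=${encodeURIComponent(utterance)}`;

  https.get(url, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => callback(null, JSON.parse(body)));
  }).on('error', (err) => callback(err));
}

// Example: interpret a spoken assessment and log the detected intent + entities.
queryLuis('The body temperature is 36.7 degrees', (err, result) => {
  if (err) { return console.error(err); }
  console.log(result.topScoringIntent, result.entities);
});
```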
Training Artificial Intelligence to perform real-life tasks has traditionally been painful. The latest AI services now offer more accessible user interfaces that require little knowledge about machine learning. The Microsoft LUIS service (Language Understanding Intelligent Service) performs an amazing task: interpreting natural language sentences and extracting the relevant parts. You only need to provide 5+ sample sentences per scenario.
In this article series, we’re creating a sample app that interprets assessments from vital signs checks in hospitals. It filters out relevant information like the measured temperature or pupillary response. Yet, it’s easy to extend the scenario to any other area.
After creating the backend service and the client user interface in the first two parts, we now set up the actual language understanding service. I’m using the LUIS Language Understanding service from Microsoft, which is part of the Cognitive Services on Microsoft Azure. Continue reading “Using Natural Language Understanding, Part 3: LUIS Language Understanding Service”
The vision: automatic checklists, filled out simply by listening to users explaining what they observe. The sample app is based on a lightweight architecture: HTML5, Node.js, and the LUIS service in the cloud.
Such an app would be incredibly useful in a hospital, where nurses need to perform and log countless vital sign checks with patients every day.
In part 1 of this article series, I explained the overall architecture of the service. In this part, we get hands-on and start implementing the Node.js-based backend. It will ultimately handle all the central messaging, communicating both with the client user interface running in the browser and with the Microsoft LUIS language understanding service in the Azure cloud.
Creating the Node Backend
I’m using the latest version of Node.js (currently 9.3) and the free Visual Studio Code IDE for editing the script files. Continue reading “Using Natural Language Understanding, Part 2: Node.js Backend & User Interface”
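As a starting point, a minimal backend could look like the sketch below (Express and Socket.IO are used here; the event names 'assessment' and 'checklistUpdate' are purely illustrative, not the app’s final protocol):

```javascript
// Sketch: minimal Node.js backend with Express + Socket.IO.
// Event names ('assessment', 'checklistUpdate') are illustrative only.
const express = require('express');
const http = require('http');
const socketIo = require('socket.io');

const app = express();
app.use(express.static('public'));         // serves the HTML5 client UI

const server = http.createServer(app);
const io = socketIo(server);

io.on('connection', (socket) => {
  console.log('Client connected:', socket.id);

  // The client sends the (transcribed) spoken assessment as plain text.
  socket.on('assessment', (text) => {
    // Later, the backend forwards the text to LUIS and broadcasts
    // the interpreted result to all connected clients.
    io.emit('checklistUpdate', { originalText: text });
  });
});

server.listen(3000, () => console.log('Backend listening on http://localhost:3000'));
```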
During the last few years, cognitive services have become immensely powerful. Especially interesting is natural language understanding. With the latest tools, training a computer to understand real spoken sentences and to extract information from them takes only a matter of minutes. We as humans no longer need to learn how to speak with a computer; it simply understands us.
I’ll show how to use the Language Understanding Cognitive Service (LUIS) from Microsoft. The aim is to build an automated checklist for nurses working at hospitals. Every morning, they record the vital signs of every patient. At the same time, they document the measurements on paper checklists.
With the new app developed in this article, the process is much easier. While checking the vital signs, nurses usually talk to the patients about their assessments. The “Vital Signs Checklist” app filters out the relevant data (e.g., the temperature or the pupillary response) and marks it in a checklist. Nurses no longer have to pick up a pen to manually record the information.
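To illustrate the idea, the browser client could listen for interpreted results over Socket.IO and tick off the matching checklist entry. This is only a sketch of the concept: the 'checklistUpdate' event and the element IDs are hypothetical, not the app’s final markup.

```javascript
// Browser-side sketch: react to interpreted results and update the checklist.
// Requires the Socket.IO client script (e.g. <script src="/socket.io/socket.io.js">).
// The 'checklistUpdate' event and the element IDs are hypothetical.
const socket = io();   // connects back to the Socket.IO server that served the page

socket.on('checklistUpdate', (update) => {
  // e.g. update = { entity: 'temperature', value: '36.7' }
  const item = document.getElementById('check-' + update.entity);
  if (item) {
    item.querySelector('input[type="checkbox"]').checked = true;
    item.querySelector('.value').textContent = update.value;
  }
});
```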
The Final Result: Vital Signs Checklist
In this article, we’ll create a simple app that uses the natural language understanding API (“LUIS”) of the Microsoft Cognitive Services on Azure. The service extracts the relevant data from freely spoken assessments.
LUIS just went from preview to general availability. This important milestone brings SLAs and more availability regions worldwide. So, it’s a great time to start using it! Continue reading “Using Natural Language Understanding, Part 1: Introduction & Architecture”
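To give an impression of what “extracting the relevant data” means in practice, a LUIS response for a spoken assessment is a JSON document roughly along these lines. The intent and entity names depend entirely on how you model your own LUIS app; all values below are purely illustrative:

```json
{
  "query": "the body temperature is 36.7 degrees",
  "topScoringIntent": { "intent": "RecordVitalSign", "score": 0.97 },
  "entities": [
    {
      "entity": "36.7",
      "type": "Temperature",
      "startIndex": 24,
      "endIndex": 27,
      "score": 0.85
    }
  ]
}
```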
ARCore has a great feature – light estimation. The ARCore SDK estimates the global lighting, which you can use as input for your own shaders to make the virtual objects fit in better with the captured real world. In this article, I’m taking a closer look at how light estimation works in the current ARCore preview SDK.
Note: this article is based on the ARCore developer preview 1. Some details changed in the developer preview 2 – although the generic process is still similar. Continue reading “Real-Time Light Estimation with Google ARCore”
Are there other ways to 3D print segmented medical data from MRI / CT / Ultrasound scans? Yes: by splitting the model into two halves.
In the first part of this article, we found that the support structures required by a standard 3D printer significantly reduce the detail on the surface of the printed body part.
Christoph Braun had the idea for another method to reduce the support structures to a minimum: by splitting the object into two halves, each half has a flat surface that can be used as the base for the 3D print.
Importing and Scaling the STL Model
For processing the 3D object, we’ll use OpenSCAD – The Programmers Solid 3D CAD Modeller. It’s a free, open-source tool aimed at developers, with the advantage that the processing steps can easily be automated. Continue reading “3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain”
Building on the four-part tutorial where we segmented the brain from an MRI image, one of the most interesting application areas is printing such 3D models. It makes no difference whether the data comes from an MRI (e.g., a brain or tumor), a CT (e.g., the skull), or ultrasound. In this article, we’ll look at how to prepare the 3D model for 3D printing.
In the preparation phase, we segmented the model from the original DICOM medical data using 3D Slicer. Afterwards, we reduced the level of detail using the built-in tools in Windows 10.
In this part, we print the MRI brain model in plastic on the Witbox 2 3D printer and deal with support structures. The aim is to make this process accessible for everyone – you don’t need specialized, expensive software and hardware; instead, we’ll use free and open-source tools as much as possible.
Special thanks to Christoph Braun from the FH St. Pölten, who is the resident 3D printing expert and prepared the steps to produce the amazing results! Continue reading “3D Printing MRI / CT / Ultrasound Data, Part 1: Support Material”
In the previous blog posts, we’ve used a simple grayscale threshold to define the model surface for visualizing an MRI / CT / Ultrasound scan in 3D. In many cases, you need more control over the 3D model generation, e.g., to visualize only the brain, a tumor, or a specific part of the scan.
In this blog post, I’ll demonstrate how to segment the brain in an MRI image; the same method can be used for any other segmentation task. For example, you can also build a model of the skull based on a CT scan by following the steps below. Continue reading “Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain”
So far, we’ve created a volume rendering of an MRI / CT / Ultrasound scan, which is based on voxels. For 3D printing and highly performant visualization in AR / VR scenarios, we need to create and export a polygon-based model. As a first step, we will use the Grayscale Model Maker and export the 3D model as an .stl file to further prepare it.
To create a 3D model, we have two main options in 3D Slicer:
- Grayscale Model Maker: directly uses grayscale values from the image data. A threshold defines the surfaces. The model maker also takes care of smoothing the surfaces and reducing the polygon count.
- Model Maker: this requires labels or discrete data to build a 3D model, meaning you have to segment the image data.
As a first step, we will use the Grayscale Model Maker, and later explore the more advanced options offered by segmentation and the Model Maker. Continue reading “Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 3: 3D Model Maker”
After importing the MRI / CT / Ultrasound data into 3D Slicer in part 1, we’re ready for the first 3D visualization inside the medical software through 3D Volume Rendering. This is an important step to ultimately export the 3D model to Unity for visualization through Google ARCore or Microsoft HoloLens, or for 3D printing.
Slices in 3D View
After optimizing brightness and contrast of the image data, the easiest way of showing the data in 3D is to visualize the three visible slices (planes: axial / top / red; sagittal / side / yellow; coronal / frontal / green) in the 3D view. This gives a good overview of the position and the relation of the slices to each other. Continue reading “Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 2: 3D Volume Rendering”