Training Artificial Intelligence to perform real-life tasks used to be painful. The latest AI services offer far more accessible user interfaces that require little knowledge about machine learning. The Microsoft LUIS service (Language Understanding Intelligent Service) performs an amazing task: it interprets natural language sentences and extracts the relevant parts. You only need to provide around five sample sentences per scenario.
In this article series, we’re creating a sample app that interprets assessments from vital signs checks in hospitals. It filters out relevant information like the measured temperature or pupillary response. Yet, it’s easy to extend the scenario to any other area.
After creating the backend service and the client user interface in the first two parts, we now start setting up the actual language understanding service. I’m using the LUIS Language Understanding service from Microsoft, which is based on the Cognitive Services of Microsoft Azure.
To get started using LUIS, you need to:
- Make sure you have a Microsoft Account (you usually have one from your Windows 10 login).
If not, sign up for a Microsoft Account through this link for OneDrive, which gives you an extra free 0.5 GB of OneDrive storage.
- Now, sign in at LUIS.ai using your Microsoft Account.
Warning: for some strange reason, Microsoft has created two different versions of LUIS. If you plan to publish your AI service to Europe, create the LUIS service at: https://eu.luis.ai/ – otherwise, use https://www.luis.ai/.
Apps can only be manually migrated between the two locations. Read more in the Publishing regions & endpoints section of the documentation.
You need an Azure account to use LUIS. However, LUIS offers a free tier (“F0”) that serves up to 10,000 endpoint queries per month. It doesn’t eat into your credits, and you can keep using it after your initial credits run out and you switch to pay-as-you-go.
Upgrade your cognitive service through the Azure configuration if, for example, you need to:
- Serve more than 10,000 endpoint queries per month
- Get an SLA for the service
The LUIS dashboard can create the Azure Cognitive Services account for you later when you publish your trained language understanding model. That’s what I would recommend.
However, if you want to have even more control over the process, you can already do so now through the Azure portal. Follow these steps:
- Sign up for free to Azure using your Microsoft Account.
- Create a cognitive service for LUIS, selecting the free pricing tier F0 (which gives you 10K calls per month). Choose the location closest to you. Keep in mind that you can only use a server in Europe if you use LUIS from eu.luis.ai, as explained in the previous section.
Give the cognitive service any name you like (e.g., “Dhc-Luis”) and create a new resource group (e.g., “dhc-apps”).
Create the LUIS Service
In Luis.ai, navigate to My Apps > New App. For the application name, enter “Patient Check” and use the locale you prefer. English currently offers the most features. This page lists all supported features per language.
Understanding how LUIS works
Once your app is created, it’s time to look at the options you have in the portal:
- Intents: think of these as distinct commands that your language understanding service differentiates. In our scenario, we’ll have three: for recognizing the patient’s age, the body temperature, and the pupillary response.
- Utterances: when you create an intent, you need to specify sample sentences, called utterances. These give the AI examples of what a user might say to trigger the intent.
Utterances for the same “temperature” intent could be: “Your body temperature is [entity]” or “I measured [entity]”
- Entities: specific information to extract from the utterances. For the body temperature, that’s the actual measured temperature, e.g., “36 degrees”.
LUIS comes with several pre-built entities for common data: e.g., date & time, dimension, numbers, email and many more. However, you can also create your own entities.
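To make these concepts concrete, here is roughly the shape of the JSON a trained LUIS app returns for an utterance. This is a simplified, illustrative sketch based on the v3 prediction API for our three intents; the exact field names and scores depend on your API version and model:

```javascript
// Simplified sketch of a LUIS v3 prediction response for the utterance
// "your temperature is 36 degrees" (illustrative values, not real output).
const sampleResponse = {
  query: "your temperature is 36 degrees",
  prediction: {
    topIntent: "Temperature",            // the intent LUIS scored highest
    intents: {
      Temperature: { score: 0.98 },
      Age: { score: 0.01 },
      PupillaryResponse: { score: 0.01 }
    },
    entities: {
      // Pre-built "temperature" entity, resolved into number + unit
      temperature: [{ number: 36, units: "Degree" }]
    }
  }
};

console.log(sampleResponse.prediction.topIntent); // "Temperature"
```

Your app simply reads `prediction.topIntent` to decide which command was spoken, and then pulls the details out of `prediction.entities`.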
Setting up LUIS – Entities
It’s best to start at the lowest level – the entities. Navigate to the Entities page and create three different entities:
- age: Add prebuilt entity > Age
- pupilLightReaction: Create > List > Name: pupilLightReaction
- temperature: Add prebuilt entity > Temperature
After you’ve created the entities, proceed to the intents. This is the most crucial step. Here, we define the different “commands” that the service distinguishes.
Intents with Pre-Built Entities
First, add an intent and call it “Temperature”. In the upper text box, provide some sample utterances: complete example sentences of what you could say to trigger this scenario. Press enter after each example to add it to the list below. Some examples:
- your temperature is 36 degrees
- i measured 38 degrees celsius
- the thermometer says you have 98.6 degrees fahrenheit
For the pre-built entities, the service automatically recognizes the entities in the sentence you entered. In our case, temperatures are marked below the line.
Once you have added all the sample utterances you can think of, do the same for an “Age” intent.
Intents with List Entities
These are entities we train ourselves – as such, it’s even more important to provide as many sample utterances as you can think of.
Create an intent with the name “PupillaryResponse”. I’d recommend not using special characters in the names, as these names get sent directly to your app later. Special characters can easily get messed up when your code runs on different operating systems and in different languages. So better stick to simple names using only standard characters.
This intent describes the measured response of the eye to light. One of the key factors is the speed of the pupillary light reflex – broadly speaking, how fast your pupils adapt to a bright penlight the nurse or doctor shines on your eyes.
Before we can provide utterances for the intent, go back to the pupilLightReaction entity, and add list items.
For each of the items, the “normalized value” is the one your app will get back when the item is recognized. The synonyms are other ways the user might refer to the list item. In the screenshot above, you can find some examples for the three main normalized values “no response”, “sluggish” and “prompt”.
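Conceptually, a list entity behaves like an exact-match synonym dictionary. The sketch below mimics that behavior in plain JavaScript; the synonym lists are my own illustrative assumptions, not values LUIS ships with:

```javascript
// Sketch of how a LUIS list entity maps synonyms to normalized values.
// The synonyms below are illustrative assumptions -- add whatever
// variants your users are likely to say.
const pupilLightReaction = {
  "no response": ["fixed", "no reaction", "unreactive"],
  "sluggish":    ["slow", "delayed", "lazy"],
  "prompt":      ["quick", "fast", "brisk", "normal"]
};

// LUIS matches list items (and their synonyms) as exact text inside
// the utterance and returns the normalized value to your app.
function normalize(term) {
  const t = term.toLowerCase();
  for (const [normalized, synonyms] of Object.entries(pupilLightReaction)) {
    if (t === normalized || synonyms.includes(t)) return normalized;
  }
  return null; // not part of the list entity
}

console.log(normalize("slow"));  // "sluggish"
console.log(normalize("brisk")); // "prompt"
```

This exact-match behavior is also why list entities need no statistical training data of their own – the sample utterances in the intent are what teach LUIS the sentence patterns around them.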
Next, go back to the intent and define sample sentences which utilize some of the list items that we just defined for the custom entity. Some examples:
- no response of your pupils
- your pupillary response is prompt
- your left eye has a slow response to light
- the pupillary light reflex is sluggish
I’d recommend providing more sample sentences in this scenario than the ~5 sentences that suffice for intents with pre-built entities.
Training your Language Understanding Service
Now, your service is ready to be trained. LUIS uses the examples we provided to train the AI to understand natural language, generalizing from the utterances we labeled.
Training usually takes a few seconds. After the process is finished, it’s time to test the app. Click on the blue “Test” button in the upper right corner. Next, enter some test sentences.
Our service should now recognize the intents correctly. Click on “Inspect” to see more details, including the entities extracted from the test sentence. If it doesn’t, adapt the examples – provide more utterances for some intents, or check whether any utterances or entities are in the wrong place.
Once you’re satisfied with the results, it’s time to publish your app to move it from the testing environment to the real world.
Publishing your LUIS Service
Finally, we’re ready to publish our LUIS app to the world. So far, you could only play with your app in the browser while logged in to LUIS. This step makes the LUIS service accessible to our Node.js backend, which can run anywhere in the world.
Click on the “Publish” tab in the top bar. As we’re still in the early steps of development, we don’t need to distinguish between staging and production slots yet. This becomes necessary once the app is in real-world use and you want to test your changes before letting your users access the new version. Therefore, it’s OK to stick with the Production slot.
In the “Manage” section of the dashboard, you can then access the “Azure Resources” blade. To access your LUIS service from our Node.js app, you need to create a prediction resource and its key.
In the dialog that pops up after you press the button to add a prediction resource, click on “Create a new LUIS prediction resource?” if this is the first LUIS service you deploy and none is available in the drop-down box yet. In the dialog, you can give it a name and choose a location and a pricing tier. As mentioned before, the free F0 tier is fine for us.
After the prediction resource has been created, you should see access information and keys in the dashboard. This includes access keys, the endpoint URL and a full URL for doing an example query right from the browser.
Testing Your Language Understanding App
It’s simple to test your public AI service. In the “Azure Resources” section of the “Manage” page, click the button to copy the example query URL. Open this URL in a new browser tab. With the default placeholder query, the likelihood scores of all intents should be very low.
The URL ends with query=YOUR_QUERY_HERE. Simply replace the placeholder with your own query and press enter, for example “query=Your temperature is 39 degrees celsius”. You will immediately see the JSON results in your browser window. In this case, the top scoring intent is correctly identified as “Temperature”, and the extracted entity is “39 degrees celsius”.
Keep in mind that you’re directly editing a URI here, so be careful with special characters. To quickly convert a string to a properly URL encoded variant, check out the w3schools tool.
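From code, you don’t need an external tool – Node.js can encode the query for you with the built-in encodeURIComponent. Below is a sketch that builds the prediction URL; the region, app ID and key are placeholders you’d replace with the values from your own “Azure Resources” page, and the v3 endpoint path shown is the form the portal’s example query used at the time of writing:

```javascript
// Build a LUIS v3 prediction URL with proper encoding.
// Region, app ID and key are placeholders -- substitute your own values
// from the "Azure Resources" page of the LUIS dashboard.
const region = "westeurope";                           // your prediction resource region
const appId  = "00000000-0000-0000-0000-000000000000"; // your LUIS app ID
const key    = "YOUR_PREDICTION_KEY";

function buildQueryUrl(query) {
  return `https://${region}.api.cognitive.microsoft.com` +
    `/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
    `?subscription-key=${key}` +
    `&query=${encodeURIComponent(query)}`; // escapes spaces & special chars
}

console.log(buildQueryUrl("Your temperature is 39 degrees celsius"));
```

Fetching that URL (e.g., with your favorite HTTP client) returns the same JSON you see in the browser.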
The biggest parts are done: we’ve created the HTML5 front-end UI for the user, the back-end server using Node.js, as well as the AI service based on LUIS.
The only remaining task is connecting everything together. That’ll be the focus of the final part of the article series!