Node.js and Cloud NoSQL Databases: Azure Cosmos DB

Azure Cosmos DB Quickstart

Learn how to access a cloud-based NoSQL database from Node.js. Azure Cosmos DB stores documents (e.g., JSON) and offers one-click scaling for improved performance plus geo-redundancy. The access interface also supports well-known SQL queries.

This guide uses the latest Azure Cosmos DB JavaScript module (released as a final version just 17 days ago). Additionally, this article is based on the ECMAScript 2017 standard: the async/await syntax makes the code short and readable. In contrast to many other tutorials, this article focuses on the minimum code required to understand the concepts.
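As a first impression, here is a minimal sketch of such a query, assuming the @azure/cosmos package (exact method names differ slightly between module versions); the endpoint, key, and database/container names are placeholders:

```js
// Minimal sketch: query JSON documents with SQL syntax via async/await.
// Assumes the @azure/cosmos package; endpoint, key, and the database /
// container names below are placeholders.
const { CosmosClient } = require("@azure/cosmos");

async function main() {
  const client = new CosmosClient({
    endpoint: "https://your-account.documents.azure.com",
    key: process.env.COSMOS_KEY,
  });

  // A well-known SQL query over the stored JSON documents
  const { resources } = await client
    .database("SampleDb")
    .container("Items")
    .items.query("SELECT * FROM c WHERE c.category = 'demo'")
    .fetchAll();

  console.log(resources);
}

main().catch(console.error);
```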

The complete source code of this article is available on GitHub.

Continue reading “Node.js and Cloud NoSQL Databases: Azure Cosmos DB”

Asynchronous JavaScript with Promises & Async/Await

From the perspective of a C# developer, the introduction of async/await into the latest JavaScript version (ECMAScript 2017) is a welcome addition. It makes asynchronous code a lot cleaner and more readable.

However, a lot of legacy libraries and code snippets are still out there, so it’s usually difficult to go all-in with async/await. This article is a short intro to error handling and the evolution of asynchronous development in JavaScript.

Error Handling in JavaScript

Most asynchronous operations, like web requests, can cause an error. Thus, let’s spend a minute reviewing the basics of the error-handling flow.
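As a minimal illustration (with fetchData as a hypothetical stand-in for any promise-returning operation), compare the promise-chain style with async/await, where the familiar try/catch works again:

```js
// Hypothetical promise-returning operation standing in for a web request.
function fetchData(url) {
  return new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error("Network error")), 100);
  });
}

// Promise style: errors flow into .catch()
fetchData("https://example.com/api")
  .then((data) => console.log(data))
  .catch((err) => console.error("Promise:", err.message));

// Async/await style: the familiar try/catch applies again
async function load() {
  try {
    const data = await fetchData("https://example.com/api");
    console.log(data);
  } catch (err) {
    console.error("Async/await:", err.message);
  }
}
load();
```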

Continue reading “Asynchronous JavaScript with Promises & Async/Await”

How to Record a Video from a Unity ARCore App on Android

ARCore Recorded Video converted to an Animated GIF

A video is a great way to showcase your Unity app. To capture the full visual fidelity of your app, you need to record at the highest possible quality with a smooth frame rate.

Several screen recording apps are available in the Google Play Store. However, there’s an easy and completely free way that provides the highest possible quality.

This short guide demonstrates how to record the screen with an APK file generated by Unity. Of course, it works for both AR and non-AR apps. Continue reading “How to Record a Video from a Unity ARCore App on Android”

Remote ARCore with Unity’s Experimental ARInterface

The AR ecosystem is still small, yet it’s already fragmented. Google develops ARCore, Apple creates ARKit, and Microsoft is working on the Mixed Reality Toolkit. Fortunately, Unity has started unifying these APIs with the ARInterface.

At Unite Austin, two of the Unity engineers introduced the new experimental ARInterface. In November 2017, they released it to the public via GitHub. It looks like this will be integrated into Unity 2018 – the new features of Unity 2018.1 include an “AR Crossplatform Kit (ARCore/ARKit API)”.

Remote Testing of AR Apps

The traditional mobile AR app development cycle includes compiling and deploying apps to a real device. That takes a long time and makes quick testing iterations tedious.

A big advantage of ARKit so far has been the ARKit Unity Remote feature. The iPhone runs a simple “tracking” app that transmits its captured live data to the PC. Your actual AR app runs directly in the Unity Editor on the PC, based on the data it receives from the device. Through this approach, you can run the app by simply pressing the Play button in Unity, without native compilation.

This is similar to the Holographic Emulation for the Microsoft HoloLens, which has been available for Unity for some time.

The great news is that the new Unity ARInterface finally adds a similar feature to Google ARCore: ARRemoteInterface. It’s available cross-platform for ARKit and ARCore.

ARInterface Demo App

In this article, I’ll explain the steps to get AR Remote running on Google ARCore. For reference: “Pirates Just AR” also posted a helpful short video on YouTube. Continue reading “Remote ARCore with Unity’s Experimental ARInterface”

NFC Tags, NDEF and Android (with Kotlin)

Android: Launch App through NFC Tags

In this article, you will learn how to add NFC tag reading to an Android app. The app registers to auto-start when the user taps a specific NDEF NFC tag with the phone. In addition, it reads the NDEF records from the tag.

NFC & NDEF

Apple added support for reading NFC tags with iOS 11 in September 2017. All iPhones starting with the iPhone 7 offer an API to read NFC tags. While Android has included NFC support for many years, this was the final missing piece to bring NFC tag scenarios to the masses. Continue reading “NFC Tags, NDEF and Android (with Kotlin)”

How To: RecyclerView with a Kotlin-Style Click Listener in Android

Android RecyclerView - Click Listener - Flow

In this article, we add a click listener to a RecyclerView on Android. Advanced language features of Kotlin make this far easier than it was with Java. However, you need to understand a few core concepts of the Kotlin language.

To get started with the RecyclerView, follow the steps in the previous article or check out the finished project on GitHub. Continue reading “How To: RecyclerView with a Kotlin-Style Click Listener in Android”

Kotlin & RecyclerView for High Performance Lists in Android

Android: RecyclerView - Adapter Flow

RecyclerView is the best approach for showing scrolling lists on Android. It ensures high performance & smooth scrolling, while providing list elements with flexible layouts. Combined with Kotlin’s modern language features, the RecyclerView requires far less code overhead than the traditional Java approach.

Sample Project: PartsList – Getting Started

In this article, we’ll walk through a sample scenario: a scrolling list for a maintenance app, listing machine parts: “PartsList”. However, this scenario only affects the strings we use – you can reuse the approach for any use case you need. Continue reading “Kotlin & RecyclerView for High Performance Lists in Android”

Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO

The final vital sign checklist app with natural language understanding

In this last part, we bring the vital sign checklist to life. Artificial intelligence interprets assessments spoken in natural language, extracts the relevant information, and manages an up-to-date, browser-based checklist. Real-time communication is handled through WebSockets with Socket.IO.

The example scenario focuses on a vital signs checklist in a hospital. The same concept applies to countless other use cases.

In this article, we’ll query the Microsoft LUIS Language Understanding service from a Node.js backend. The results are communicated to the client through Socket.IO.
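A minimal sketch of that round trip is shown below – assuming the LUIS v2 REST endpoint plus the node-fetch and socket.io packages; the region, app ID, key, and event names are placeholders:

```js
// Forward client utterances to LUIS and return intent + entities.
// Assumes the LUIS v2 REST endpoint and the node-fetch / socket.io
// packages; region, app ID, key, and event names are placeholders.
const fetch = require("node-fetch");
const io = require("socket.io")(3000);

const LUIS_URL =
  "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/" +
  process.env.LUIS_APP_ID +
  "?subscription-key=" + process.env.LUIS_KEY + "&q=";

io.on("connection", (socket) => {
  // The client sends the spoken assessment as plain text.
  socket.on("utterance", async (text) => {
    const res = await fetch(LUIS_URL + encodeURIComponent(text));
    const result = await res.json();
    // Push the recognized intent and entities back to the client.
    socket.emit("luisResult", {
      intent: result.topScoringIntent,
      entities: result.entities,
    });
  });
});
```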

Connecting LUIS to Node.js

In the previous article, we verified that our LUIS service works fine. Now, it’s time to connect all components. The aim is to query LUIS from our Node.js backend. Continue reading “Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO”

Using Natural Language Understanding, Part 3: LUIS Language Understanding Service

Pre-built entities in intents, in use with LUIS

Training artificial intelligence to perform real-life tasks has been painful. The latest AI services now offer more accessible user interfaces that require little knowledge about machine learning. Microsoft’s LUIS (Language Understanding Intelligent Service) performs an amazing task: interpreting natural language sentences and extracting the relevant parts. You only need to provide 5+ sample sentences per scenario.

In this article series, we’re creating a sample app that interprets assessments from vital signs checks in hospitals. It extracts relevant information like the measured temperature or pupillary response. Yet, it’s easy to extend the scenario to any other area.
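To make that concrete, here is a sketch of the kind of result a LUIS query can return for such a sentence; the intent name is hypothetical and depends on how you model your app, while builtin.temperature is one of the pre-built entity types:

```js
// Hypothetical shape of a LUIS result for a vital-sign sentence;
// the RecordTemperature intent name is an assumption for this example.
const result = {
  query: "the patient's temperature is 38.5 degrees",
  topScoringIntent: { intent: "RecordTemperature", score: 0.97 },
  entities: [
    { entity: "38.5 degrees", type: "builtin.temperature" }
  ]
};
```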

Language Understanding

After creating the backend service and the client user interface in the first two parts, we now start setting up the actual language understanding service. I’m using the LUIS Language Understanding service from Microsoft, which is part of the Cognitive Services of Microsoft Azure. Continue reading “Using Natural Language Understanding, Part 3: LUIS Language Understanding Service”

Using Natural Language Understanding, Part 2: Node.js Backend & User Interface

User Interface for our Vital Sign Checklist app that uses the LUIS Language Understanding Service from Microsoft

The vision: automatic checklists, filled out by simply listening to users explaining what they observe. The sample app is based on a lightweight architecture: HTML5, Node.js, and the LUIS service in the cloud.

Such an app would be incredibly useful in a hospital, where nurses need to perform and log countless vital sign checks with patients every day.

In part 1 of this article series, I explained the overall architecture of the service. In this part, we get hands-on and start implementing the Node.js-based backend. It will ultimately handle all the central messaging: it communicates both with the client user interface running in a browser and with the Microsoft LUIS language understanding service in the Azure cloud.

Creating the Node Backend

Node.js is a great fit for such a service. It’s easy to set up and uses JavaScript for development. The code also runs locally during development, allowing rapid testing, yet it’s easy to deploy to a dedicated server or the cloud later.
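A rough sketch of such a starting point is shown below, assuming Express and Socket.IO as the building blocks (a common choice for this kind of messaging backend; the port and event names are placeholders):

```js
// Minimal messaging backend sketch: Express serves the HTML5 client,
// Socket.IO handles the real-time messages. Assumes the express and
// socket.io packages; port and event names are placeholders.
const express = require("express");
const http = require("http");
const socketIo = require("socket.io");

const app = express();
app.use(express.static("public")); // the HTML5 client user interface

const server = http.createServer(app);
const io = socketIo(server);

io.on("connection", (socket) => {
  console.log("Client connected");
  socket.on("message", (text) => {
    // Later: forward the text to LUIS and return the interpretation.
    socket.emit("reply", "Received: " + text);
  });
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));
```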

I’m using the latest version of Node.js (currently 9.3) and the free Visual Studio Code IDE for editing the script files. Continue reading “Using Natural Language Understanding, Part 2: Node.js Backend & User Interface”