Basics of AR: SLAM – Simultaneous Localization and Mapping

How HoloLens sees the World

In the first part, we took a look at how an algorithm identifies keypoints in camera frames. These form the basis for tracking & recognizing the environment.

For Augmented Reality, the device has to know more: its 3D position in the world. It calculates this through the spatial relationship between itself and multiple keypoints. This process is called “Simultaneous Localization and Mapping” – SLAM for short.

Sensors for Perceiving the World

The high-level view: when you first start an AR app using Google ARCore, Apple ARKit or Microsoft Mixed Reality, the system doesn’t know much about the environment. It starts processing data from various sources – mostly the camera. To improve accuracy, the device combines data from other useful sensors like the accelerometer and the gyroscope.

Based on this data, the algorithm has two aims (see the sketch below):

  1. Build a map of the environment
  2. Locate the device within that environment
Continue reading “Basics of AR: SLAM – Simultaneous Localization and Mapping”
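
Purely as a conceptual illustration of those two aims, here is a minimal Kotlin sketch. All class names and data structures are invented for this sketch – this is not how ARCore, ARKit or HoloLens implement SLAM internally, and the pose solver is deliberately left as a stub.

```kotlin
// Conceptual sketch only: the two SLAM aims side by side.
data class Keypoint(val id: Long, val position: FloatArray)              // a mapped 3D point
data class Pose(val translation: FloatArray, val rotation: FloatArray)   // where the device is

class MiniSlam {
    private val map = mutableMapOf<Long, Keypoint>()                      // aim 1: map of the environment
    var devicePose = Pose(FloatArray(3), floatArrayOf(0f, 0f, 0f, 1f))    // aim 2: device location
        private set

    fun onFrame(observedKeypoints: List<Keypoint>, imuReading: FloatArray) {
        // Localize: match observed keypoints against the map and refine the device pose.
        val matches = observedKeypoints.filter { map.containsKey(it.id) }
        devicePose = estimatePose(matches, imuReading)

        // Map: keypoints seen for the first time extend the map for future frames.
        observedKeypoints.filterNot { map.containsKey(it.id) }
            .forEach { map[it.id] = it }
    }

    // Stub: a real system solves this with feature matching, visual-inertial odometry
    // and bundle adjustment.
    private fun estimatePose(matches: List<Keypoint>, imuReading: FloatArray): Pose = devicePose
}
```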

Basics of AR: Anchors, Keypoints & Feature Detection

Detected Anchors by Google ARCore

Creating apps that work well with Augmented Reality requires some background knowledge of the image processing algorithms that work behind the scenes. One of the most fundamental concepts involves anchors. These rely on keypoints and their descriptors, which are detected in the camera recording of the real world.

Anchor Virtual Objects to the Real World

AR development APIs hide much of the complexity. As a developer, you simply anchor virtual objects to the world. This ensures that the hologram stays glued to the physical location where you put it. Continue reading “Basics of AR: Anchors, Keypoints & Feature Detection”
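
To make this concrete, here is a minimal sketch using the ARCore Android SDK (other platforms expose an equivalent call). The wrapper function and its name are illustrative; only HitResult.createAnchor() is the actual API call.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.HitResult

// Sketch: pin a virtual object to the real-world point the user tapped.
// The HitResult typically comes from Frame.hitTest() after a screen tap.
fun createAnchorForTap(hit: HitResult): Anchor {
    // The anchor's pose is continually refined as ARCore's understanding of the
    // environment improves, keeping the hologram "glued" to its physical location.
    return hit.createAnchor()
}
```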

Mixed Reality @ Microsoft Insider Dev Tour

Microsoft Insider Dev Tour

On June 20th, the Microsoft Insider Dev Tour will come to Vienna, Austria. It’s a worldwide event series for developers, organized by Microsoft together with Microsoft Developer MVPs.

You’ll learn about the latest trends for developers – including artificial intelligence, progressive web apps and more. Of course, Mixed Reality is also on the agenda!

As a Microsoft MVP for Windows Development, I’ll be presenting the Mixed Reality session. You’ll see live demos of getting started with both VR headsets and the Microsoft HoloLens. 150 attendees have signed up – so it’ll certainly be a great event!

In addition, Microsoft has released amazing hands-on labs for everyone to follow up and dive deeper into the content presented at the sessions. The Mixed Reality Lab includes controllers, spatial sound and spatial mapping. It’s a great way to get started with some of the most exciting features of MR. Check it out!


How to Record a Video from a Unity ARCore App on Android

ARCore Recorded Video converted to an Animated GIF

A video is a great way to showcase your Unity app. To capture the full visual fidelity of your app, you need to record at the highest possible quality with a smooth frame rate.

Several screen recording apps are available in the Google Play Store. However, there’s an easy and completely free way that provides the highest possible quality.

This short guide demonstrates how to record the screen of an app built with Unity and deployed as an APK. Of course, it works for both AR and non-AR apps. Continue reading “How to Record a Video from a Unity ARCore App on Android”
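
The linked guide walks through the exact steps. As one free option of this kind, Android’s built-in screenrecord tool can capture the device screen over adb; the flags below are standard screenrecord options, while the file path, bit rate and resolution are placeholders you’d adapt to your device.

```sh
# Record the device screen at a high bit rate
# (stop with Ctrl+C; a single clip is limited to about 3 minutes)
adb shell screenrecord --bit-rate 12000000 --size 1080x1920 /sdcard/arcore-demo.mp4

# Copy the recording to your PC afterwards
adb pull /sdcard/arcore-demo.mp4 .
```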

Remote ARCore with Unity’s Experimental ARInterface

Overall, the AR ecosystem is still small. Nevertheless, it’s fragmented. Google develops ARCore, Apple creates ARKit and Microsoft is working on the Mixed Reality Toolkit. Fortunately, Unity started unifying these APIs with the ARInterface.

At Unite Austin, two of the Unity engineers introduced the new experimental ARInterface. In November 2017, they released it to the public via GitHub. It looks like this will be integrated into Unity 2018 – the new features of Unity 2018.1 include the “AR Crossplatform Kit (ARCore/ARKit API)”.

Remote Testing of AR Apps

The traditional mobile AR app development cycle includes compiling and deploying apps to a real device. That takes a long time and is tedious for quick testing iterations.

A big advantage of ARKit so far has been the ARKit Unity Remote feature. The iPhone runs a simple “tracking” app that transmits the captured live data to the PC. Your actual AR app runs directly in the Unity Editor on the PC, based on the data it receives from the device. With this approach, you can run the app by simply pressing the Play button in Unity, without a native compile-and-deploy step.

This is similar to the Holographic Emulation for the Microsoft HoloLens, which has been available for Unity for some time.

The great news is that the new Unity ARInterface finally adds a similar feature to Google ARCore: ARRemoteInterface. It’s available cross-platform for ARKit and ARCore.

ARInterface Demo App

In this article, I’ll explain the steps to get AR Remote running on Google ARCore. For reference: “Pirates Just AR” also posted a helpful short video on YouTube. Continue reading “Remote ARCore with Unity’s Experimental ARInterface”

NFC Tags, NDEF and Android (with Kotlin)

Android: Launch App through NFC Tags

In this article, you will learn how to add NFC tag reading to an Android app. The app registers itself to launch automatically when the user taps a specific NDEF NFC tag with their phone. In addition, it reads the NDEF records from the tag.
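
As a hedged sketch of the reading part (not necessarily the article’s exact code): the NDEF messages arrive inside the intent that launched the activity. The activity name is illustrative, and a matching ACTION_NDEF_DISCOVERED intent filter plus the NFC permission in AndroidManifest.xml are assumed.

```kotlin
import android.app.Activity
import android.content.Intent
import android.nfc.NdefMessage
import android.nfc.NfcAdapter
import android.os.Bundle

class NfcReaderActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleNfcIntent(intent)
    }

    private fun handleNfcIntent(intent: Intent?) {
        if (intent?.action != NfcAdapter.ACTION_NDEF_DISCOVERED) return
        // Raw NDEF messages are delivered as an array of Parcelables.
        intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES)
            ?.filterIsInstance<NdefMessage>()
            ?.flatMap { it.records.toList() }
            ?.forEach { record ->
                // For an NDEF text record, the payload starts with a status byte and the language code.
                println("NDEF record payload: ${String(record.payload)}")
            }
    }
}
```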

NFC & NDEF

Apple added support for reading NFC tags with iOS 11 in September 2017. All iPhones starting with the iPhone 7 offer an API to read NFC tags. While Android has included NFC support for many years, this was the final missing piece to bring NFC tag scenarios to the masses. Continue reading “NFC Tags, NDEF and Android (with Kotlin)”

How To: RecyclerView with a Kotlin-Style Click Listener in Android

Android RecyclerView - Click Listener - Flow

In this article, we add a click listener to a RecyclerView on Android. Advanced language features of Kotlin make this far easier than it used to be with Java. However, you need to understand a few core concepts of the Kotlin language.
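
The essential pattern looks roughly like this – a hedged sketch, where PartItem, PartListAdapter and the built-in simple_list_item_1 layout are illustrative choices, not necessarily the article’s code. The adapter receives a Kotlin lambda in its constructor and invokes it when an item is tapped.

```kotlin
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

data class PartItem(val name: String)

class PartListAdapter(
    private val items: List<PartItem>,
    private val onItemClick: (PartItem) -> Unit   // Kotlin lambda instead of a Java listener interface
) : RecyclerView.Adapter<PartListAdapter.ViewHolder>() {

    class ViewHolder(view: View) : RecyclerView.ViewHolder(view)

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {
        val view = LayoutInflater.from(parent.context)
            .inflate(android.R.layout.simple_list_item_1, parent, false)
        return ViewHolder(view)
    }

    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        val item = items[position]
        (holder.itemView as TextView).text = item.name
        // Forward the tap to whoever created the adapter.
        holder.itemView.setOnClickListener { onItemClick(item) }
    }

    override fun getItemCount() = items.size
}

// Usage: recyclerView.adapter = PartListAdapter(parts) { part -> showDetails(part) }
```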

To get started with the RecyclerView, follow the steps in the previous article or check out the finished project on GitHub. Continue reading “How To: RecyclerView with a Kotlin-Style Click Listener in Android”

Kotlin & RecyclerView for High Performance Lists in Android

Android: RecyclerView - Adapter Flow

RecyclerView is the best approach to show scrolling lists on Android. It ensures high performance & smooth scrolling, while providing list elements with flexible layouts. Combined with modern language features of Kotlin, the code overhead of the RecyclerView is greatly reduced compared to the traditional Java approach.
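
For context, the Kotlin side of wiring up such a list typically boils down to a few lines. This sketch assumes a layout containing a RecyclerView (R.id.partList) and reuses the illustrative PartItem/PartListAdapter from the click-listener excerpt above – these names are assumptions, not taken from the article.

```kotlin
import android.app.Activity
import android.os.Bundle
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.RecyclerView

class PartsListActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_parts_list)   // assumed layout containing the RecyclerView

        val parts = listOf(PartItem("Gear"), PartItem("Bearing"), PartItem("Shaft"))
        findViewById<RecyclerView>(R.id.partList).apply {
            layoutManager = LinearLayoutManager(this@PartsListActivity)
            setHasFixedSize(true)                       // minor optimization when item size is constant
            adapter = PartListAdapter(parts) { /* handle the click, see the click-listener article */ }
        }
    }
}
```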

Sample Project: PartsList – Getting Started

In this article, we’ll walk through a sample scenario: a scrolling list for a maintenance app, listing machine parts: “PartsList”. However, this scenario only affects the strings we use – you can copy this approach for any use case you need. Continue reading “Kotlin & RecyclerView for High Performance Lists in Android”

Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO

The final vital sign checklist app with natural language understanding

In this last part, we bring the vital sign checklist to life. Artificial Intelligence interprets assessments spoken in natural language. It extracts the relevant information and manages an up-to-date, browser-based checklist. Real-time communication is handled through WebSockets with Socket.IO.

The example scenario focuses on a vital signs checklist in a hospital. The same concept applies to countless other use cases.

In this article, we’ll query the Microsoft LUIS Language Understanding service from a Node.js backend. The results are communicated to the client through Socket.IO.

Connecting LUIS to Node.js

In the previous article, we verified that our LUIS service works fine. Now, it’s time to connect all components. The aim is to query LUIS from our Node.js backend. Continue reading “Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO”
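
The article itself wires this up in Node.js. Purely to illustrate the shape of the underlying request, here is a hedged Kotlin sketch of the LUIS v2 REST call such a backend issues; the region, app ID and subscription key are placeholders, and production code would add error handling and JSON parsing.

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import java.net.URLEncoder

// Sends an utterance to the LUIS v2 endpoint and returns the raw JSON response,
// which contains the top-scoring intent and the extracted entities.
fun queryLuis(utterance: String, appId: String, subscriptionKey: String): String {
    val query = URLEncoder.encode(utterance, "UTF-8")
    val url = URL(
        "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/$appId" +
            "?subscription-key=$subscriptionKey&verbose=true&q=$query"
    )
    val connection = url.openConnection() as HttpURLConnection
    return try {
        connection.inputStream.bufferedReader().readText()
    } finally {
        connection.disconnect()
    }
}
```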

Using Natural Language Understanding, Part 3: LUIS Language Understanding Service

Pre-built entities in intents, in use with LUIS

Training Artificial Intelligence to perform real-life tasks has been painful. The latest AI services now offer more accessible user interfaces. These require little knowledge about machine learning. The Microsoft LUIS service (Language Understanding Intelligent Service) performs an amazing task: interpreting natural language sentences and extracting relevant parts. You only need to provide 5+ sample sentences per scenario.

In this article series, we’re creating a sample app that interprets assessments from vital signs checks in hospitals. It extracts relevant information like the measured temperature or pupillary response. Yet, it’s easy to extend the scenario to any other area.

Language Understanding

After creating the backend service and the client user interface in the first two parts, we now start setting up the actual language understanding service. I’m using the LUIS Language Understanding service from Microsoft, which is part of the Cognitive Services on Microsoft Azure. Continue reading “Using Natural Language Understanding, Part 3: LUIS Language Understanding Service”