Categories: Speech Assistants

Alexa Development with Voiceflow for Newcomers

Speech assistants are one of the most important ways we will access services in the future. They are usable without further instructions, even by children and the elderly. And they’re hands-free. These advantages are reflected in their growing adoption: according to voicebot.ai, one third of American households already own a smart speaker.

Amazon’s Alexa is leading the market, followed by Google Assistant. Baidu, Alibaba, Xiaomi and Apple’s Siri are important players as well. Strategy Analytics publishes regular reports on market share data. Of course, usage differs considerably by market: Baidu, Alibaba and Xiaomi, for example, are stronger in Asian markets. But overall, Amazon Alexa, together with its Echo smart speaker ecosystem, is the perfect place to start if you want to reach as many people as possible globally.

Developing for Amazon Alexa

When you decide to create a “Skill” for Amazon Alexa, you have two basic options:

  • Alexa Skills Kit: Use Amazon’s developer tools directly. This gives you access to all features but is also the most complex way to start. You need to write at least a bit of JavaScript (through Node.js) or Python code. The Alexa-hosted option is easy to set up: you can edit the code right in the browser, with no need to provision any additional services.
  • 3rd Party Tools: for example, Voiceflow or the Microsoft Bot Framework. While you still need to create the Alexa skill in Amazon’s frontend (so that it is discoverable by Alexa-powered devices), the skill design & development then mostly happens in these tools. Their editors are often easier to use and may even offer cross-platform support.

Especially for people with little experience in JavaScript development, or if your skill is simple, 3rd party tools are often the better choice. If you want deep integration into the platform or want to use the latest features (like Alexa Conversations or the Motion Sensor APIs), go with the Alexa Skills Kit.
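To give you an idea of what the Alexa Skills Kit route looks like, here is a minimal sketch of a Node.js launch handler built with the ask-sdk-core package, roughly what the Alexa-hosted template starts you with. The welcome text is just a placeholder:

```js
// Minimal Alexa skill backend sketch using the ASK SDK for Node.js (ask-sdk-core).
const Alexa = require('ask-sdk-core');

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    // Runs when the user opens the skill without a specific intent.
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speakOutput = 'Welcome! What would you like to know?'; // placeholder text
    return handlerInput.responseBuilder
      .speak(speakOutput)
      .reprompt(speakOutput)
      .getResponse();
  },
};

// Entry point for the AWS Lambda function behind the Alexa-hosted skill.
exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler)
  .lambda();
```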

Categories: Artificial Intelligence, Image Processing

Hands-On “Deep Learning” Videos: Now on YouTube

Every new product or service claims to use deep learning or neural networks. But: how do they really work? What can machine learning do? How complicated is it to get started?

In the 4-part video series “Deep Learning Hands-On with TensorFlow 2 & Python”, you’ll learn what many of the buzzwords are about and how they relate to the problems you want to solve.

In the short videos, your journey starts with the background of neural networks, which are the basis of deep learning. Then, two practical examples show concrete applications of how you can use neural networks to perform classification with TensorFlow:

  • Breast cancer classification: based on numerical / categorical data
  • Handwritten digit classification: the classic MNIST dataset, based on small images

In the last part, we’ll look at one of the most important specialized variants of neural networks: convolutional neural networks (CNNs), which are especially well-suited for image classification.
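To make this more tangible, here is a rough sketch of such a tiny convolutional classifier. The videos use TensorFlow 2 with Python; the snippet below shows the equivalent model structure in TensorFlow.js (the JavaScript flavor of TensorFlow), and the layer sizes are arbitrary placeholders:

```js
// Tiny CNN for 28x28 grayscale digits (MNIST-style), sketched with TensorFlow.js.
const tf = require('@tensorflow/tfjs');

const model = tf.sequential();
model.add(tf.layers.conv2d({ inputShape: [28, 28, 1], filters: 16, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({ units: 10, activation: 'softmax' })); // one output per digit class

model.compile({
  optimizer: 'adam',
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy'],
});

// Training: xs has shape [numImages, 28, 28, 1], ys holds one-hot labels of shape [numImages, 10].
// await model.fit(xs, ys, { epochs: 5, validationSplit: 0.1 });
```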

Watching all four videos gives you a thorough understanding of how deep learning works and the guidance to get started!

Categories: Android, AR / VR

Environmental HDR Lighting & Reflections in ARCore: Implementation in Unity 3D (Part 3)

How can real-time HDR lighting and reflections be made possible on a smartphone? Given the unique properties of human perception and the challenges of capturing the state of the world and applying it to virtual objects, is it possible at all?

Google found an interesting approach, which is based on using Artificial Intelligence to fill in the missing information. In this article, we’ll take a look at how ARCore handles this. The practical implementation of this research is available in the ARCore SDK for Unity. Based on this, a short hands-on guide demonstrates how to create a sphere that reflects the real world – even though the smartphone only captures a fraction of it.

Google ARCore Approach to Environmental HDR Lighting

To make environmental HDR lighting possible in real time on smartphones, Google uses an innovative approach, which they also published as a scientific paper. Here, I’ll give you a short, high-level overview:

First, Google captured a massive amount of training data. The video feed of the smartphone camera captured both the environment and three different spheres. The setup is shown in the image below.

Categories: Android, AR / VR

Environmental HDR Lighting & Reflections in ARCore: Virtual Lighting (Part 2)

In part 1, we looked at how humans perceive lighting and reflections – vital basic knowledge to estimate how realistic these cues need to be. The most important goal is that the scene looks natural to human viewers. Therefore, the virtual lighting needs to be closely aligned with real lighting.

But how to measure lighting in the real world, and how to apply it to virtual objects?

Virtual Lighting

How do you need to set up virtual lighting to satisfy the criteria mentioned in part 1? Humans recognize if an object doesn’t fit in:

Comparing a simple scene setup to environmental HDR lighting: on the left, the shadow direction is wrong and the virtual object doesn’t fit in; in the ideal case on the right, the shadow and shading are correct. Image adapted from the Google Developer documentation.

The image above from the Google Developer Documentation shows both extremes. Even though you might still recognize that the rocket is a virtual object in the right image, you’ll need to look a lot harder. The image on the left is clearly wrong, especially due to the misplaced shadow.

Categories: Android, AR / VR

Environmental HDR Lighting & Reflections in ARCore: Human Perception (Part 1)

Realistically merging virtual objects with the real world in Augmented Reality poses a few challenges. The most important are:

  1. Realistic positioning, scale and rotation
  2. Lighting and shadows that match the real-world illumination
  3. Occlusion with real-world objects

The first works very well in today’s AR systems. Number 3, occlusion, works OK on the Microsoft HoloLens; it’s also coming to ARCore soon (a private preview is currently running for the ARCore Depth API, which is probably based on the research by Flynn et al.).

But what about the second item? Google put a lot of effort into this recently. So, let’s look behind the scenes. How does ARCore estimate HDR (high dynamic range) lighting and reflections from the camera image?

Remember that ARCore needs to scale to a variety of smartphones; thus, a requirement is that it also works on phones that only have a single RGB camera – like the Google Pixel 2.

Categories: Digital Healthcare, Events, Speech Assistants

Alexa for Wellbeing Online Challenge

In the near future, we will primarily interact with technology through voice. Especially for older generations and kids, voice has the lowest entry barrier – compared to the complexity of computers or even smartphones. Simply start talking to speech assistants like Amazon Alexa, and they will help immediately.

To make the most of this, I’ve been co-organizing the “Alexa for Wellbeing Online Challenge” over the last few weeks. Together with AWS Educate and Hilfswerk Lower Austria, we’ll host a 10-day online hackathon, open to everyone.

Categories: AR / VR

Using Amazon Sumerian in Trainings and Classrooms with AWS IAM

In this article, we’ll configure AWS Identity and Access Management (IAM) to easily use Amazon Sumerian with multiple users. This is especially important for classrooms or trainings. You often don’t want to lose time by having attendees set up and activate their own AWS accounts, including entering their personal credit cards.

Instead, by setting up sub-users in your account beforehand, you have complete control over the experience and can get started right away. Additionally, it helps with troubleshooting for exercises.

Right now, no ready-made AWS Educate classrooms are available that support Amazon Sumerian. If that changes, the classrooms would be a good alternative, as they give students their own free AWS credits instead of billing everything to a central account.
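If you prefer scripting the sub-user setup instead of clicking through the IAM console, a rough sketch with the AWS SDK for JavaScript could look like the following. The AmazonSumerianFullAccess managed policy, the user names and the initial password are assumptions for illustration; adapt them to your own training setup:

```js
// Sketch: create IAM sub-users for training attendees with the AWS SDK for JavaScript.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

async function createAttendee(userName, initialPassword) {
  // Create the sub-user in your own AWS account.
  await iam.createUser({ UserName: userName }).promise();

  // Allow console login; force a password change on first sign-in.
  await iam.createLoginProfile({
    UserName: userName,
    Password: initialPassword,
    PasswordResetRequired: true,
  }).promise();

  // Assumption: the AmazonSumerianFullAccess managed policy grants what attendees need.
  await iam.attachUserPolicy({
    UserName: userName,
    PolicyArn: 'arn:aws:iam::aws:policy/AmazonSumerianFullAccess',
  }).promise();
}

// Example: prepare ten placeholder users, attendee01 … attendee10.
(async () => {
  for (let i = 1; i <= 10; i++) {
    await createAttendee(`attendee${String(i).padStart(2, '0')}`, 'ChangeMe-2019!');
  }
})();
```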

Securing Your Account

The first step is making sure your own root account is properly secured. A major part of that is enabling Multi-Factor Authentication (MFA) for the root account. Especially when working in teams and with source control, it’s an easy mistake to upload your credentials somewhere; you don’t want others to have full control over your whole AWS account, as this can incur major charges to your credit card. Therefore, it’s best to enable MFA before you continue.

Categories: Android, App Development, AR / VR, Digital Healthcare

Hit Test & Augmented Reality Anchors with Amazon Sumerian (Part 3)

In an Augmented Reality scene, users look at the live camera feed. Virtual objects are anchored at specific positions in the real world. Our task is to let the user place virtual objects in the real world. To achieve that, the user simply taps on the smartphone screen. Through a hit test, our script then creates an anchor in the real world and links it to a virtual 3D model entity.

That’s the high-level overview. To code this anchoring logic, a few intermediate steps are needed:

  1. Hit Test: converts the coordinates of the user’s screen tap to normalized coordinates and sends them to the AR system, which checks what’s in the real world at that position.
  2. Register Anchor: next, our script instructs the AR system to create an anchor at that position.
  3. Link Anchor: finally, the ID of the created anchor is linked to our entity. This allows Sumerian to continually update the transform of our 3D entity. Thus, the object stays in place in the real world, even when the user moves around.

Transformed into code, this is what our architecture looks like. It includes three callbacks, starting with the touch event and ending with the registered and linked AR anchor.
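Sumerian handles these steps through its internal engine classes such as ArAnchorComponent (see part 2). Purely as a conceptual reference (this is not Sumerian’s API), the same three steps look roughly like this with the draft WebXR Hit Test and Anchors modules; the helper updateEntityTransform() and the placement flag are illustrative:

```js
// Conceptual reference: hit test, anchor creation and anchor tracking with the
// draft WebXR Hit Test + Anchors modules. Not Sumerian's internal API.
let hitTestSource = null;
let placedAnchor = null;
let placementRequested = false; // set to true by your screen-tap handler

async function startArSession() {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test', 'anchors'],
  });
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const localSpace = await session.requestReferenceSpace('local');

  // 1. Hit Test: ask the AR system what lies in the real world along the view ray.
  // (WebXR can also hit-test from transient touch input; the simpler viewer-ray variant is shown.)
  hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);

    // 2. Register Anchor: create an anchor at the hit position when the user taps.
    if (placementRequested && hits.length > 0) {
      hits[0].createAnchor().then((anchor) => { placedAnchor = anchor; });
      placementRequested = false;
    }

    // 3. Link Anchor: keep the 3D entity's transform in sync with the tracked anchor.
    if (placedAnchor) {
      const pose = frame.getPose(placedAnchor.anchorSpace, localSpace);
      if (pose) {
        updateEntityTransform(pose.transform); // illustrative: apply the pose to your 3D model
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
```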

Categories: Android, App Development, AR / VR, Digital Healthcare

Augmented Reality Anchors and Amazon Sumerian’s ArAnchorComponent (Part 2)

The WebXR standard isn’t finished yet. So how does the web-based Amazon Sumerian platform integrate with the real world for Augmented Reality? We’ll take a look at the glue that binds the 3D WebGL contents from the web view to the native AR platform (ARCore / ARKit). To access this, we will also look at Sumerian’s internal engine classes like ArAnchorComponent, which handle the cross-platform web-to-native mapping.

This article continues from part 1, which covered the scripting basics of Amazon Sumerian and prepared the scene for AR placement.

Anchors in Amazon Sumerian

Let’s start with a bit of background on how Sumerian handles AR.

Ultimately, a 3D model is placed in the user’s real environment using an “Anchor”. This is directly represented in Sumerian. To create an anchor in your scene, your code goes through the following steps:

Categories: Android, App Development, AR / VR, Digital Healthcare

Augmented Reality Object Placement with Amazon Sumerian (Part 1)

How do you (re)position virtual objects in the real world in an Augmented Reality experience – while still keeping the scene interactive? Elegantly guide your users through the placement process.

The official AR tutorial from Amazon contains a simple script: tapping anywhere in the scene instantly moves the objects to that position. However, for the Digital Healthcare Explained app, I needed a more flexible behavior:

  1. Activate placement mode by tapping on a specific object in the 3D scene. In this case, I decided that tapping the host avatar triggers placement mode.
  2. The host then explains what to do: tapping on another surface moves the host and the related objects. The Sumerian hosts are ideal for guiding users through this process.
  3. The user taps on a real-world surface in the AR scene.
  4. Next, the scene contents move, the anchor updates and the host confirms.

New ES6 Based Scripting

Additionally, Amazon Sumerian is evolving its scripting language. A major upgrade to ES6 is underway. It’s fully based on classes and fits better into the actions and state machines used elsewhere in Sumerian. The new APIs are still marked as “Preview”. However, the old APIs are already called “Legacy” or “Old Script Format”.

While documentation for the new Sumerian engine APIs is already available, it’s very brief and doesn’t contain many examples. The official tutorials are still based on the legacy API.

I decided to re-write the script using the new APIs. It involves calling a lot of Sumerian’s internal parts and is thus considerably more complex than the other examples for the new API currently out there. However, it’s interesting to dig deeper into the internals of how a modern, web-based AR environment works.