How to Record a Video from a Unity ARCore App on Android

ARCore Recorded Video converted to an Animated GIF

A video is a great way to showcase your Unity app. To capture the full visual fidelity of your app, you need to record at the highest possible quality with a smooth frame rate.

Several screen recording apps are available in the Google Play Store. However, there’s an easy and completely free way that provides the highest possible quality.

This short guide demonstrates how to record the screen of an app built as an APK with Unity. Of course, it works for both AR and non-AR apps.

Remote ARCore with Unity’s Experimental ARInterface

The AR ecosystem is still small, yet already fragmented. Google develops ARCore, Apple creates ARKit, and Microsoft is working on the Mixed Reality Toolkit. Fortunately, Unity has started unifying these APIs with the ARInterface.

At Unite Austin, two Unity engineers introduced the new experimental ARInterface. In November 2017, they released it to the public via GitHub. It looks like this will be integrated into Unity 2018 – the new features of Unity 2018.1 include an “AR Crossplatform Kit (ARCore/ARKit API)”.

Remote Testing of AR Apps

The traditional mobile AR app development cycle includes compiling and deploying apps to a real device. That takes a long time and is tedious for quick testing iterations.

A big advantage of ARKit so far has been the ARKit Unity Remote feature. The iPhone runs a simple “tracking” app that transmits the captured live data to the PC. Your actual AR app runs directly in the Unity Editor on the PC, based on the data it receives from the device. With this approach, you can run the app by simply pressing the Play button in Unity, without native compilation.

This is similar to the Holographic Emulation for the Microsoft HoloLens, which has been available for Unity for some time.

The great news is that the new Unity ARInterface finally adds a similar feature to Google ARCore: ARRemoteInterface. It’s available cross-platform for ARKit and ARCore.
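To illustrate what the unified API looks like, here is a minimal sketch based on the experimental-ARInterface repository on GitHub. The namespace, event, and type names (UnityARInterface, ARInterface.planeAdded, BoundedPlane) follow the repository at the time of writing and may change as the experimental code evolves; the point is that the very same script works with ARKit, ARCore, and the remote interface.

```csharp
// Log every plane the AR platform detects, regardless of the backend.
// A minimal sketch assuming the experimental-ARInterface package;
// names may differ in later versions.
using UnityEngine;
using UnityARInterface;

public class PlaneLogger : MonoBehaviour
{
    void OnEnable()
    {
        // The event fires whether the data comes from ARKit, ARCore
        // or the ARRemoteInterface streaming from a device.
        ARInterface.planeAdded += OnPlaneAdded;
    }

    void OnDisable()
    {
        ARInterface.planeAdded -= OnPlaneAdded;
    }

    void OnPlaneAdded(BoundedPlane plane)
    {
        Debug.Log("New plane at " + plane.center + ", extents " + plane.extents);
    }
}
```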

ARInterface Demo App

In this article, I’ll explain the steps to get AR Remote running on Google ARCore. For reference: “Pirates Just AR” also posted a helpful short video on YouTube.

Augmented Reality Christmas Tree with Google ARCore Developer Preview 2 – in 5 Minutes

Christmas Tree with Google ARCore

We don’t have a Christmas tree in our apartment. But in today’s world, this is what Augmented Reality is for, right? Therefore, I decided to create an AR Christmas Tree in 5 minutes. This also gave me an opportunity to check out the new Google ARCore Developer Preview 2.

Christmas Tree 3D Model

First off, you need a 3D model of a Christmas tree. Two of the most accessible sources are Google Poly and Microsoft Remix 3D. Sticking to models created directly by Google and Microsoft, these two are the choices:

Christmas Tree by Poly by Google


Real-Time Light Estimation with Google ARCore

ARCore: Light Estimation is an average of the overall image luminosity

ARCore has a great feature – light estimation. The ARCore SDK estimates the global lighting, which you can use as input for your own shaders to make the virtual objects fit in better with the captured real world. In this article, I’m taking a closer look at how the light estimation works in the current ARCore preview SDK.

Note: this article is based on the ARCore developer preview 1. Some details changed in the developer preview 2, although the general process is still similar.
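To illustrate the idea, the following is a minimal sketch of how the estimate can be fed into your own shaders, modeled on the preview-1 SDK’s environmental light example. The names (the GoogleARCore namespace, Frame.LightEstimate.PixelIntensity, and the _GlobalLightEstimation shader property) follow the preview SDK and changed in later releases.

```csharp
// Forward ARCore's per-frame light estimate to a global shader property,
// so custom shaders can brighten or darken virtual objects to match the
// real-world lighting. A sketch based on the preview-1 SDK; names changed
// in later ARCore releases.
using GoogleARCore;
using UnityEngine;

public class LightEstimationForwarder : MonoBehaviour
{
    void Update()
    {
        if (Frame.TrackingState != FrameTrackingState.Tracking)
            return;

        // PixelIntensity is a single average luminosity value computed
        // from the camera image.
        Shader.SetGlobalFloat("_GlobalLightEstimation",
            Frame.LightEstimate.PixelIntensity);
    }
}
```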

Getting Started with Google ARCore, Part 2: Visualizing Planes & Placing Objects

Models of Brains (segmented from an MRI) placed in the real world using Google ARCore

Following the basic project setup of the first part of this article, we now get to the fascinating details of the ARCore SDK. Learn how to find and visualize planes. Additionally, I’ll show how to instantiate objects and how to anchor them to the real world using Unity.

Finding Planes with ARCore

The ARCore example contains a simple script that visualizes planes and point clouds and places the Android mascot. We’ll create a shorter version of the script here.
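As a preview of where we are heading, here is a minimal sketch following the structure of the preview SDK’s HelloAR example: on a touch, raycast against the detected planes and anchor a prefab at the hit point. The API names (Session.Raycast, TrackableHitFlag, Session.CreateAnchor) follow the preview-1 SDK and changed in later releases.

```csharp
// Place a prefab on a detected plane where the user taps.
// A sketch modeled on the preview-1 HelloAR example; the ARCore API
// was reworked in later releases.
using GoogleARCore;
using UnityEngine;

public class TapToPlace : MonoBehaviour
{
    public Camera firstPersonCamera;   // the ARCore-driven camera
    public GameObject prefab;          // e.g., the brain model from Google Poly

    void Update()
    {
        if (Input.touchCount < 1)
            return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began)
            return;

        // Only accept hits that lie within a detected plane.
        TrackableHit hit;
        TrackableHitFlag filter = TrackableHitFlag.PlaneWithinBounds |
                                  TrackableHitFlag.PlaneWithinPolygon;

        if (Session.Raycast(firstPersonCamera.ScreenPointToRay(touch.position),
                            filter, out hit))
        {
            // The anchor keeps the object in place while ARCore refines
            // its understanding of the world.
            Anchor anchor = Session.CreateAnchor(hit.Point, Quaternion.identity);
            Instantiate(prefab, hit.Point, Quaternion.identity, anchor.transform);
        }
    }
}
```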

Getting Started with Google ARCore, Part 1: Project Setup & ARCore SDK

ARCore - Plane Detection running on the Google Pixel 2

ARCore by Google is still in preview and only runs on a select few phones, including the Google Pixel 2. In this article, I’m creating a demo app for ARCore using the ARCore SDK for Unity (Preview 1).

It follows up on the blog post series where I segmented a 3D model of the brain from an MRI image. Instead of repeating those steps, you can download the final model used in this article for free from Google Poly.

ARCore vs Tango

Previously, the AR efforts of Google were focused on the Tango platform. It included additional hardware depth sensors for accurate recognition of the environment. Unfortunately, only two commercially available phones are equipped with the necessary hardware to run Tango – the Asus ZenFone AR and the Lenovo Phab 2 Pro.

Showing a 360° Photo in Google Daydream VR based on Unity, Part 2

360° Photo in the Google Daydream VR (2017) headset, Unity app running on the Google Pixel 2

In the first part of the article, we captured a 360° photo using a Samsung Gear 360 camera. Now, we’ll create a new Unity project for Android. Using the right shader and material, we can assign the cylindrical projection to a Skybox. This is the perfect 360° photo viewer for Unity, which can then easily be deployed to a Google Daydream / Cardboard VR headset!

Loading the 360° Photo in Unity

The Skybox in Unity is the easiest way to show a 360° photo in VR. Note that 360° 2D and 3D video will be supported out-of-the-box in the upcoming Unity 2017.3 release, according to the current Unity roadmap.

For setting up a 360° panorama as a Skybox, the following guides are very helpful if you need further pointers: SimplyVR, Tales from the Rift. The instructions below outline all the steps you need to create your own 360° photo viewer in Unity!
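As a taste of how little code is involved: once a material with a panoramic skybox shader holds the photo, the scene’s skybox can simply be swapped at runtime. A minimal sketch – recent Unity versions ship a “Skybox/Panoramic” shader, while at the time of writing a comparable shader had to be added to the project manually.

```csharp
// Show a 360° photo by assigning its material to the scene skybox.
// Assumes panoramaMaterial uses a panoramic/equirectangular skybox
// shader with the photo as its texture.
using UnityEngine;

public class PhotoSkybox : MonoBehaviour
{
    public Material panoramaMaterial;  // material holding the 360° photo

    void Start()
    {
        RenderSettings.skybox = panoramaMaterial;

        // Optional: recompute ambient lighting from the new skybox.
        DynamicGI.UpdateEnvironment();
    }
}
```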

Showing a 360° Photo in Google Daydream VR based on Unity, Part 1

Stitched 360° Photo taken with the Samsung Gear 360 in full resolution

Capturing a 360° photo / video and viewing it in VR is one of the most immersive use cases. Because the user is placed inside a capture of the real world, the virtual experience is as life-like as possible.

In this article, I will show how to capture a 360° photo using the new Samsung Gear 360 camera (2017 version), then load the photo into a Unity project for Google Daydream / Cardboard to view it in VR on an Android phone (in this case, the Google Pixel 2).

Capturing a 3D Point Cloud with Intel RealSense and Converting to a Mesh with MeshLab

Depth and Color images from the RealSense Viewer

When dealing with Augmented and Virtual Reality, one of the most important tasks is capturing real objects and creating 3D models out of them. In this guide, I will demonstrate a quick method using the Intel RealSense camera to capture a point cloud. Next, I’ll convert the point cloud to a mesh using MeshLab. This mesh can then be exported to an STL file for 3D printing. Another option is visualization in 3D for AR / VR, where I’ll also cover how to preserve the vertex coloring when transferring the original point cloud to Unity.
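One quick sanity check after the import: Unity’s Standard shader ignores vertex colors, so it is worth verifying that the colors actually survived the transfer before hunting for shader issues. A minimal sketch (the component name is mine; a shader that samples the mesh’s COLOR channel is then needed to actually display the colors):

```csharp
// Check whether an imported mesh carries per-vertex colors.
// Note that Unity's Standard shader does not render vertex colors;
// a shader reading the COLOR channel is required to see them.
using UnityEngine;

public class VertexColorCheck : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        if (mesh.colors.Length > 0)
            Debug.Log("Mesh has " + mesh.colors.Length + " vertex colors.");
        else
            Debug.LogWarning("No vertex colors found - check the import settings.");
    }
}
```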

3D Printing MRI / CT / Ultrasound Data, Part 2: Splitting the Brain

Combined brain halves, 3D printed without support structures

Are there other ways to 3D print segmented medical data coming from MRI / CT / ultrasound – for example, by splitting the model into two halves?

In the first part of this article, the result was that the support structures required by a standard 3D printer significantly reduce the details present on the surface of the printed body part.

Christoph Braun had the idea for another method to reduce the support structures to a minimum: by splitting the object into two halves, each half gets a flat surface that can serve as the base for the 3D print.

Importing and Scaling the STL Model

For processing the 3D object, we’ll use OpenSCAD – The Programmers Solid 3D CAD Modeller. It’s a free, open-source tool aimed at developers, with the advantage that the processing steps can easily be automated.
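To sketch the core idea in OpenSCAD: intersecting the imported STL with a large box keeps one half of the model, which then has a perfectly flat base. The file name and all dimensions below are placeholders for your own model.

```openscad
// Keep only the upper half of the model: the cut plane at z = 0
// becomes a flat base, so the half can be printed without supports.
// "brain.stl" and all dimensions are placeholders.
intersection() {
    import("brain.stl");
    translate([0, 0, 100])
        cube([300, 300, 200], center = true);  // large cutting box
}
// For the other half, move the box below the cut plane instead
// (translate([0, 0, -100])) and flip the printed part over.
```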