Download, Export or Backup Amazon Sumerian Scenes (Part 6)

Amazon Sumerian is a purely cloud-based tool. Its scenes are intended to be run directly from the cloud. As such, one of the most common questions is: how can I download the scene I created in Amazon Sumerian? You might want to do this to have a backup, to send it to a colleague or to move the scene to another region.


The easiest way to back up your scene is to create a snapshot. This feature is directly integrated into the main Sumerian editor UI. Select the root node of the scene in the Entities panel. Next, navigate to the “Snapshots” section in the inspector panel.

In this panel, you can create snapshots that are easy to return to later. I’d recommend creating a snapshot before major changes in your app, after finishing vital parts of code, and obviously for every publicly released version.

However, the snapshots are internal to the scene. How do you get content from one scene to another?

Re-Using Assets in Multiple Projects

Asset Packs allow sharing content between different scenes. In the “Assets” panel of the editor, you can structure your scene contents into different packs. By default, an imported host is its own pack, for example. But you’re free to create custom packs, which can include assets of any type you like: 3D models, scripts, sounds & more.

Once you have placed all the individual assets you’d like to bring to a second scene into a pack, click on the pack to select it. This brings up four small menu items:

Add assets to a pack for easy re-use in different scenes.

The share button brings up another dialog. In case the button does not show up in the menu, you can alternatively click the “Add to Asset Library” button at the bottom of the inspector panel.

Adding a pack to an asset library - choose the main type to categorize the asset.

Your content might include a multitude of things, but to organize it, it’s often still helpful to choose the main content type. For example, your asset could be a model with a small added script for interactivity. Nevertheless, you might still want to classify it with the “Meshes” category to make it easier to find it again.

Once it has been added, exit from the scene editor back to the Sumerian dashboard.

Amazon Sumerian Projects

In the previous parts of the article series, we have not touched projects. These are accessible through the Sumerian dashboard. They allow you to combine multiple scenes, assets and templates. By default, everything you create lives in a “Draft” project. But it’s easy to create a new project and move content between projects.

Project manager in Amazon Sumerian

Depending on where your scene is located, you can find your exported asset pack in the respective project. Simply navigate to the project, and then select the “Assets” tab.

Whether you select a scene or an asset pack from your project, you can always delete, move or copy it through the inspector panel on the right. With these features, you can now easily re-use the asset pack in another scene or project. It’s accessible through the “Import Assets” button in the top bar of the editor:

The exported asset pack can now easily be imported to another scene or project.

Exporting Sumerian Scenes

Yet, all the different options presented so far only work within your own Amazon account. What if you want to create a local backup or share the scene with a colleague?

Even though Amazon is working on it, Sumerian doesn’t support multi-user access to projects yet. But there is another option: you can download the entire scene as a zip file and share it with others.

I have not found this option anywhere in the menus. However, the Sumerian Slack channel provided the important hint: press Ctrl + E to bring up the “Export As” dialog in Sumerian. It gives you three options:

The important but difficult to find "Export As" dialog in Amazon Sumerian

Export Sumerian Scene as Bundle

This is probably the most important option for this article. If you choose the “Bundle” button, your whole scene gets exported as a .zip file. It contains several components:

  • root.bundle: a JSON file that contains your entire scene configuration. This includes the hierarchy, entity transforms, used components for each entity as well as their settings.
  • *.bin: these are the actual assets / 3D models. I haven’t yet found out their exact format, nor whether there is a way to load these files into a 3D modelling tool. That would be helpful in situations where you need to make some minor changes to a model and then re-import it into Sumerian.
  • *.png / *.jpg: the textures used by objects, preview thumbnails etc.

You can easily take a look inside the root.bundle file. After formatting it to make it easier to read, you can directly see the entities and their properties:

The beginning of an exported Amazon Sumerian scene, saved as root.bundle and showing the details of a custom 3D model entity with its transform component and the reference to its state machine component.

Re-Importing an Exported Sumerian Bundle

How to load such an exported bundle zip file back into Amazon Sumerian? There doesn’t seem to be a dedicated “Import Scene” button in the dashboard yet. Use the following approach to prepare the scene:

  1. Create a new Amazon Sumerian scene.
  2. Delete all the entities that were created by default. You can’t delete the main camera.
  3. In the assets panel, use the wipe button of the “Default Pack” to remove all assets from your scene that are no longer in use.
Clean up the new Amazon Sumerian scene before importing the exported bundle.

Now, your new scene is ready to import the zip bundle file. Use the “Import Assets” button at the top and upload the .zip file through the “Import from Disk” option on the right.

After a few seconds, the scene contents should appear as a new pack in the “Assets” panel. Additionally, the contents of the exported scenes are merged into the “Entities” panel.

This means that you now have two cameras. Set the camera of the imported scene as the “main camera” in the settings of the corresponding “Camera” component in the inspector. Then it’s safe to delete the “old” camera which we were not allowed to delete before.

Exported scene bundle re-imported into a new Amazon Sumerian scene

Config File

Unfortunately, this option doesn’t work at the time of writing. I get the error message: Download error – Error: Bundles cannot be generated using this API.

Judging from the name and the error message, I’d assume that this option only downloads the root.bundle file instead of the large zip containing all assets like the “Bundle” option does. That’d be handy if you only change a small detail in the configuration, but don’t want to download 100 MB of scene data again because of that.


Export Sumerian Scene as Webpage

This is a very powerful option, especially for more complex scenes. If you export your scene as a webpage, you get an offline copy of the scene configuration (the bundle) as well as the 3D models of the assets.

With this export, only the latest version of the Amazon Sumerian scripts is loaded from the cloud. The large data files can be hosted on your own server. This has the advantage of speeding up the loading process. Of course, the disadvantage is that you lose the quick update capability where you just need to republish the scene.

Contents of a downloaded Amazon Sumerian scene as a website.

Note that viewing the scene on your local computer doesn’t work by just dragging the index.html file into your web browser. As the 3D engine of Sumerian relies heavily on Ajax requests, the web page needs a proper web server to handle these.

Simple Web Server for Chrome configured to run a local Amazon Sumerian website. It loads scripts from Amazon, but has assets stored locally.

If you don’t want to install a full-blown server on your PC like Apache, Microsoft IIS or nginx, you can go with the Web Server for Chrome. It’s a simple server that works fine with Amazon Sumerian websites. Choose the directory where you extracted the zip file from Sumerian and navigate to the localhost URL.

A simple demo scene from Amazon Sumerian running on a local web server.

Private & Embedded Scenes

By default, your scene is public once you publish it. This means that everyone who knows the link to your published scene can view it in the browser. On the one hand, that’s great: it makes your scene easy to distribute.

However, this approach is also limited. If, for example, you want to restrict content to paying users, you wouldn’t want them to simply navigate directly to your scene URL. After all, URLs are easy to retrieve from websites, decompiled Android apps or simply by listening to the HTTP traffic.

Another risk is the price: you pay for Amazon Sumerian per transferred megabyte. As such, you’ll want full control over who can see your scene.

AWS Amplify for Secure Embedding

To support these scenarios, Amazon has recently introduced Amazon Sumerian support into AWS Amplify. Amplify is essentially a framework and toolset to integrate AWS cloud services into websites and mobile apps. In contrast to a server-side framework like Node.js, Amplify is client-first: you directly build iOS, Android, Web or React Native apps. For web apps, Amplify deeply integrates with React, Ionic, Angular and Vue.js.

Note that you can’t privately host a scene if you already published it publicly. However, in the “Publish” dialog in the top right corner, you can un-publish your scene with one click. This re-enables private hosting:

Host privately - non-public Amazon Sumerian scenes.

After this step, you can configure the scene and add scene authentication. Amplify contains a good step-by-step guide for how to set up your client.

Animation & Timeline for AR with Amazon Sumerian (Part 5)

So far, we have set up a fully functional scene for our ambitious Augmented Reality project. The overall idea: a host avatar explains different 3D objects, which are placed in the user’s surroundings. Only one piece is missing – an animation.

In this part of the article series, we’ll look at three possible ways to animate objects in Amazon Sumerian: timelines, “classic” continuous animations and tween actions as part of state machines in behaviors. All three have different advantages and use cases. Thus, it’s important that you can decide which approach is best for each situation.

This is what the current prototype looks like, captured from a phone in Augmented Reality.

Animations in the Digital Healthcare Explained prototype, captured in Augmented Reality running on a Google Pixel with ARCore

Animation Actions

Let’s get started with the tween actions. In the previous parts, we’ve already integrated several state machines and actions into our scene. Tween actions tie in perfectly with this approach.

Continue reading “Animation & Timeline for AR with Amazon Sumerian (Part 5)”

User Interaction & Messages in Amazon Sumerian (Part 4)

Messages for User Interaction with the Host in Amazon Sumerian

Learn how to let individual parts of your AR / VR app communicate with each other. This part of the tutorial lets the user trigger actions within your scene. For example: the host starts explaining an object when you tap on it. Internally, the connection is established via messages. It’s a vital concept to understand on your journey to real-life AR apps with Amazon Sumerian.

The guide builds upon the project created in the previous parts of the article: 1 – general setup, 2 – speech & gestures, 3 – 3D Models & AR Anchors.

App Scenario: “Digital Healthcare Explained”

After the basic components of the scene are in place, it’s time to wire everything together. We want to achieve two things:

  • Chain sequences together to make one thing happen after another
  • Let the user interact with entities in the scene

Our demo app informs the user about different healthcare topics. The following chart summarizes its flow:

Overall concept of the "Digital Healthcare Explained" app.

At first, the host greets the user. Then, several 3D models representing different healthcare topics appear around the host. The user selects one of these topics by tapping the respective entity. As we’re creating an Augmented Reality app, the user can walk around in the room to discover different topics.

Once the user has tapped on one of these topics, the host starts explaining. Specific animations start for the selected object, helping the user understand the topic.

After the host has finished the explanation, the user can select the next topic.

Messages: Communication within the Scene

Events are broadcast through messages. These are simply user-defined strings. In Sumerian, they’re often referred to as “channels”.
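Conceptually, this works like a simple publish/subscribe bus. The following plain-JavaScript sketch only illustrates the idea; it is not the actual Sumerian scripting API, and the channel name is made up:

```javascript
// Illustrative only: messages in Sumerian behave like a simple
// publish/subscribe bus, keyed by user-defined channel strings.
const listeners = {};

// An entity registers interest in a channel.
function addListener(channel, handler) {
  (listeners[channel] = listeners[channel] || []).push(handler);
}

// Broadcasting on a channel notifies every registered listener.
function emitMessage(channel, data) {
  (listeners[channel] || []).forEach((handler) => handler(data));
}

// The host subscribes to a (hypothetical) "topicSelected" channel ...
addListener('topicSelected', (topic) => {
  console.log('Host starts explaining:', topic);
});

// ... and a tap on a 3D model broadcasts on that channel.
emitMessage('topicSelected', 'Vital Signs');
```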

Continue reading “User Interaction & Messages in Amazon Sumerian (Part 4)”

Custom 3D Models & AR Anchors in Amazon Sumerian (Part 3)

Integrate the real world into your Amazon Sumerian AR app. Plus: place virtual content into the user’s environment. Learn how to anchor multiple 3D models that have a fixed spatial relationship.

This article builds on the foundations of the AR project setup in part 1, as well as extending the host with speech & gestures in part 2.

Import Custom 3D Models

While Sumerian comes with a few ready-made assets, you will often need to add custom 3D models to your scene as well. Currently, Sumerian supports importing two common file types: .fbx (also used by Unity and Autodesk software) and .obj (a very widespread format).

Simply drag & drop such a model from your computer to your assets panel. Alternatively, you can also use the “Import Assets” button in the top bar and then use “Browse” to choose the file to upload.

Where can you get these 3D models? You can create them yourself using Blender, Maya or any other tool. Alternatively, go to great free portals like Google Poly and Microsoft Remix 3D. These objects are usually low-poly and therefore well-suited for mobile phones.

Continue reading “Custom 3D Models & AR Anchors in Amazon Sumerian (Part 3)”

Speech & Gestures with Amazon Sumerian (Part 2)

Configuring speech for the Amazon Sumerian Host

In the first part of the article series, we set up an Augmented Reality app with a host (= avatar). Now, we’ll dive deeper and integrate host interactions. To make the character more life-like, it should look at you. We’ll assign speech files and ensure that the gestures of the character match the spoken content.

But before we set out on these tasks, let’s take a minute to look at some vital concepts of Amazon Sumerian.

Behaviors, State Machines & Events

Unless you want your app to just show a static scene, you’ll need to integrate actions. An action could be triggered by interactive user input. Alternatively, you define what happens sequentially – e.g., first a new object appears in the scene, then the host avatar explains it.

Technically, this is solved using a state machine. Each entity can have multiple different states. A behavior is a collection of these states. States transition from one to another based on actions & their events (= interactions or timing).

Sumerian State Machines - Behaviors contain states, which have actions that can trigger events, which lead to transitions to other states.

Each state has a name: e.g., “Waiting”, “Moving”, “Talking”. In addition, each state typically has one or more actions: e.g., waiting for five seconds, animating the movement of the entity or playing a sound file. Sumerian comes with pre-defined actions. Additionally, you can provide your own JavaScript code for custom or more complex tasks.

These actions can trigger events. Some examples: the wait time of 5 seconds is over, the movement is completed, or the sound file has finished playing. Such an event can then trigger a transition to a different state.

By combining several states with transitions, you can make entities interact with the user or perform other tasks to keep your scene dynamic.
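To illustrate the concept, here is a minimal state machine in plain JavaScript. It is not Sumerian’s actual editor format or API, just a sketch of how states, actions, events and transitions relate; the state and event names follow the examples above:

```javascript
// Illustrative sketch: a behavior holds named states. Each state's
// action reports an event when it completes, and the event is mapped
// to a transition into another state.
class Behavior {
  constructor() {
    this.states = {};
    this.currentState = null;
  }

  addState(name, action, transitions) {
    this.states[name] = { action, transitions };
  }

  enter(name) {
    this.currentState = name;
    const state = this.states[name];
    // Run the state's action; it calls back with an event name.
    state.action((event) => {
      const next = state.transitions[event];
      if (next) this.enter(next);
    });
  }
}

const host = new Behavior();
// "Waiting" fires a (hypothetical) waitElapsed event, which
// transitions to "Talking".
host.addState('Waiting', (done) => done('waitElapsed'), { waitElapsed: 'Talking' });
host.addState('Talking', (done) => console.log('Playing speech...'), {});
host.enter('Waiting');
```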

Continue reading “Speech & Gestures with Amazon Sumerian (Part 2)”

Amazon Sumerian & Augmented Reality (Part 1)

Amazon Sumerian Host, placed in the real world with Google ARCore

Many AR / VR use cases involve virtual trainings or guide topics. With Amazon Sumerian, you can quickly create cross-platform apps for these scenarios. The main advantage is the large amount of ready-made content: avatars (called hosts) and virtual environment templates. Through the direct integration of Amazon Web Services (AWS), it’s easy to make the host speak to the user – including lip sync, gestures and even conversations through bots.

Of course, you can create similar solutions with Unity. But Sumerian requires far less prior 3D software knowledge and is therefore ideal for smaller projects as well as prototypes. The interface and general setup are still quite similar to Unity, so it’s a natural evolution to switch to Unity – if needed – after you’ve created your first few apps and services with Amazon Sumerian.

Additionally, right now Amazon is hosting an AR / VR challenge with lots of prizes for the best apps of various categories. So, it’s a great time to explore Sumerian!

What is Amazon Sumerian?

Essentially, Sumerian is a browser-based 3D editing platform. It allows developing for most AR and VR platforms, including Oculus, Vive, Windows Mixed Reality, as well as the browser, Google ARCore and Apple ARKit.

Behind the scenes, it’s based on WebXR. That’s the evolution of WebVR, which was mainly targeting VR headsets. With WebXR, you can access sound, controllers and also anchor objects to the real environment in Mixed Reality scenarios.

Amazon Sumerian Account Setup

First, you need to set up your Amazon account. Amazon offers an AWS free tier, which gives you access to many services and provides some usage quotas for free for the first 12 months. Afterwards, you can still continue using selected services for free. Note that Sumerian is not among these, but 12 months provides enough time to test & develop your service.

Continue reading “Amazon Sumerian & Augmented Reality (Part 1)”

Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO

The final vital sign checklist app with natural language understanding

In this last part, we bring the vital sign check list to life. Artificial Intelligence interprets assessments spoken in natural language. It extracts the relevant information and manages an up-to-date, browser-based checklist. Real-time communication is handled through Web Sockets with Socket.IO.

The example scenario focuses on a vital signs checklist in a hospital. The same concept applies to countless other use cases.

In this article, we’ll query the Microsoft LUIS Language Understanding service from a Node.js backend. The results are communicated to the client through Socket.IO.

Connecting LUIS to Node.js

In the previous article, we verified that our LUIS service works fine. Now, it’s time to connect all components. The aim is to query LUIS from our Node.js backend. Continue reading “Using Natural Language Understanding, Part 4: Real-World AI Service & Socket.IO”

Using Natural Language Understanding, Part 3: LUIS Language Understanding Service

Pre-built entities in intents, in use with LUIS

Training Artificial Intelligence to perform real-life tasks has been painful. The latest AI services now offer more accessible user interfaces. These require little knowledge about machine learning. The Microsoft LUIS service (Language Understanding Intelligent Service) performs an amazing task: interpreting natural language sentences and extracting relevant parts. You only need to provide 5+ sample sentences per scenario.

In this article series, we’re creating a sample app that interprets assessments from vital signs checks in hospitals. It filters out relevant information like the measured temperature or pupillary response. Yet, it’s easy to extend the scenario to any other area.

Language Understanding

After creating the backend service and the client user interface in the first two parts, we now start setting up the actual language understanding service. I’m using the LUIS Language Understanding service from Microsoft, which is based on the Cognitive Services of Microsoft Azure. Continue reading “Using Natural Language Understanding, Part 3: LUIS Language Understanding Service”

Using Natural Language Understanding, Part 2: Node.js Backend & User Interface

User Interface for our Vital Sign Checklist app that uses the LUIS Language Understanding Service from Microsoft

The vision: automatic checklists, filled out by simply listening to users explaining what they observe. The sample app is based on a lightweight architecture: HTML5, Node.js and the LUIS service in the cloud.

Such an app would be incredibly useful in a hospital, where nurses need to perform and log countless vital sign checks with patients every day.

In part 1 of the article, I’ve explained the overall architecture of the service. In this part, we get hands-on and start implementing the Node.js-based backend. It will ultimately handle all the central messaging. It communicates both with the client user interface running in a browser, as well as the Microsoft LUIS language understanding service in the Azure Cloud.

Creating the Node Backend

Node.js is a great fit for such a service. It’s easy to set up and uses JavaScript for development. Also, the code runs locally during development, allowing rapid testing. But it’s easy to deploy it to a dedicated server or the cloud later.

I’m using the latest version of Node.js (currently 9.3) and the free Visual Studio Code IDE for editing the script files. Continue reading “Using Natural Language Understanding, Part 2: Node.js Backend & User Interface”

Using Natural Language Understanding, Part 1: Introduction & Architecture

Vital Signs Checklist Architecture - 5

During the last few years, cognitive services became immensely powerful. Especially interesting is natural language understanding. Using the latest tools, training the computer to understand real spoken sentences and to extract information is reduced to a matter of minutes. We as humans no longer need to learn how to speak with a computer; it simply understands us.

I’ll show how to use the Language Understanding Cognitive Service (LUIS) from Microsoft. The aim is to build an automated checklist for nurses working at hospitals. Every morning, they record the vital signs of every patient. At the same time, they document the measurements on paper checklists.

With the new app developed in this article, the process is much easier. While checking the vital signs, nurses usually talk to the patients about their assessments. The “Vital Signs Checklist” app filters out the relevant data (e.g., the temperature or the pupillary response) and marks it in a checklist. Nurses no longer have to pick up a pen to manually record the information.

The Final Result: Vital Signs Checklist

In this article, we’ll create a simple app that uses the natural language understanding APIs (“LUIS”) of the Microsoft Cognitive Services on Microsoft Azure. The service extracts the relevant data from freely spoken assessments.

LUIS just went from preview to general availability. This important milestone brings SLAs and more availability regions worldwide. So, it’s a great time to start using it! Continue reading “Using Natural Language Understanding, Part 1: Introduction & Architecture”