Learn how to let the individual parts of your AR / VR app communicate with each other. In this part of the tutorial, you enable the user to trigger actions within your scene. For example: the host starts explaining an object when you tap on it. Internally, the connection is established via messages. It’s a vital concept to understand on your journey to real-life AR apps with Amazon Sumerian.
The guide builds upon the project created in the previous parts of the article: 1 – general setup, 2 – speech & gestures, 3 – 3D Models & AR Anchors.
App Scenario: “Digital Healthcare Explained”
After the basic components of the scene are in place, it’s time to wire everything together. We want to achieve two things:
- Chain sequences together to make one thing happen after another
- Let the user interact with entities in the scene
Our demo app informs the user about different healthcare topics. The following chart summarizes its flow:

At first, the host greets the user. Then, several 3D models representing different healthcare topics appear around the host. The user selects one of these topics by tapping the respective entity. As we’re creating an Augmented Reality app, the user can walk around in the room to discover different topics.
Once the user has tapped on one of these topics, the host starts explaining it. At the same time, animations specific to the selected object start, which help the user understand the topic.
After the host has finished the explanation, the user can select the next topic.
Messages: Communication within the Scene
Events are broadcast through messages. A message is identified by a simple user-defined string; in Sumerian, these message names are often referred to as “channels”.
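To get a feel for how this works under the hood, here is a minimal sketch of emitting and receiving a message from Sumerian scripts via the `SystemBus`. The channel name `topicSelected`, its payload, and the helper `notifyTopicSelected` are made up for illustration; the sketch assumes the classic Sumerian script format with `setup()` / `cleanup()` entry points.

```javascript
'use strict';

// Sender side: e.g. a script reacting to a tap on a 3D model broadcasts
// a message on the (hypothetical) 'topicSelected' channel.
function notifyTopicSelected(topicName) {
  sumerian.SystemBus.emit('topicSelected', { topic: topicName });
}

// Receiver side: a script attached to the host entity subscribes
// to the same channel name.
function setup(args, ctx) {
  // Keep a reference to the handler so it can be removed again in cleanup().
  ctx.onTopicSelected = (data) => {
    console.log('Start explaining topic: ' + data.topic);
  };
  sumerian.SystemBus.addListener('topicSelected', ctx.onTopicSelected);
}

function cleanup(args, ctx) {
  // Remove the listener when play mode stops to avoid duplicate handlers.
  sumerian.SystemBus.removeListener('topicSelected', ctx.onTopicSelected);
}
```

You rarely have to write this by hand: the state machine offers actions that emit and listen for messages on a channel, using exactly the same user-defined strings.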