Introduction

8th Wall Web enables developers to create augmented reality brand experiences that live on the mobile web.

Built entirely using standards-compliant JavaScript and WebGL, 8th Wall Web is a complete implementation of 8th Wall’s Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time AR on mobile browsers. Features include 6-Degrees of Freedom Tracking, Surface Estimation, Lighting, World Points and Hit Tests.

8th Wall Web is easily integrated into 3D JavaScript frameworks such as A-Frame, three.js, and Amazon Sumerian.


What's New

8th Wall Web Release 9.2 is now available! It provides a number of updates and enhancements, including:

Release 9.2:

Release 9.1:

  • New Features:

    • Added support for Amazon Sumerian in 8th Wall Web
    • Improved tracking stability and eliminated jitter

Click here to see a full list of changes.

Requirements

Web Browser Requirements:

Mobile browsers require the following functionality in order to view 8th Wall Web experiences:

  • WebGL (canvas.getContext('webgl') || canvas.getContext('webgl2'))
  • getUserMedia (navigator.mediaDevices.getUserMedia)
  • deviceorientation (window.DeviceOrientationEvent)
  • Web-Assembly/WASM (window.WebAssembly)
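These checks can be combined into a small helper. The following sketch is not part of the 8th Wall API; it simply feature-detects the four requirements above (pass the browser's window object):

```javascript
// Hypothetical helper (not part of the 8th Wall API): returns a list of
// missing browser capabilities, or an empty list if all requirements are met.
// Pass the browser's `window`; a mock object can be passed when testing.
function missingXrRequirements(w) {
  const missing = []
  const canvas = w.document.createElement('canvas')
  if (!(canvas.getContext('webgl') || canvas.getContext('webgl2'))) {
    missing.push('WebGL')
  }
  if (!(w.navigator.mediaDevices && w.navigator.mediaDevices.getUserMedia)) {
    missing.push('getUserMedia')
  }
  if (!w.DeviceOrientationEvent) {
    missing.push('deviceorientation')
  }
  if (!w.WebAssembly) {
    missing.push('WebAssembly')
  }
  return missing
}
```

In the browser, `missingXrRequirements(window)` returning an empty array indicates the device can view 8th Wall Web experiences.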

This translates to the following for iOS and Android devices:

  • iOS:

    • OS: iOS 11 or newer
    • Supported Browsers:

      • Safari

        • Note: getUserMedia only returns video input devices directly in Safari, not in UIWebView or WKWebView.
  • Android:

    • Supported Browsers:

      • Chrome
      • Chrome-variants (e.g. Samsung Browser)
      • Firefox

Supported Frameworks

8th Wall Web is easily integrated into 3D JavaScript frameworks such as A-Frame, three.js, and Amazon Sumerian.

Supported Features

8th Wall Web supports the following features:

  • Lighting: Yes
  • AR Background: Yes
  • Camera Motion: 6 DoF (Scale Free)
  • Horizontal Surfaces: Yes (Instant Planar)
  • Vertical Surfaces: No
  • Image Detection: No
  • World Points: Yes
  • Hit Tests: Yes

Tutorial

This guide provides all of the steps required to get you up and running with a sample A-Frame or three.js project.

Download a Sample Project

Sample projects can be found on 8th Wall Web's public GitHub repo at https://github.com/8thwall/web

Create a Console Account

New Users:

  1. Create a free account at https://console.8thwall.com/sign-up
  2. Select the "Web Developer" Workspace Type


Existing Users:

  1. Go to https://console.8thwall.com/workspaces
  2. If you already have a Web Developer workspace, select it. If not, create one by clicking the "Create a New Workspace" button.

Create App Key

  1. If you have multiple workspaces associated with your console user, select your Web Developer workspace.

  2. Click "Dashboard" in the left-hand navigation.

  3. Click "Create a new Web App +".

  4. Enter a name for your Web App. The name should be short but descriptive, unique, and contain only letters, spaces, "." and "_" (e.g. "WebARTest").

  5. Click Create.

  6. After creation, your App Key information will be displayed. Copy the AppKey string.

Add App Key to Web Project

Add your App Key to the A-Frame or three.js project. If you are using a sample project from 8th Wall, edit index.html and replace the "X"'s with your app key.

  • Example (A-Frame):

    <script async src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXXX"></script>

  • Example (three.js):

    <script defer src="https://apps.8thwall.com/xrweb?appKey=XXXXXXXXXX"></script>

Generate Dev Token

A Development Token is required for viewing 8th Wall Web applications served locally or hosted applications under Development. This token can be used to view any hosted application within your account. A device can be authorized for only one account at a time.

  1. Login to the console: https://console.8thwall.com

  2. Click "Development Token" from the left navigation.

  3. Check Enable

  4. Select a version of 8th Wall Web to use with your web app. To use the latest stable version of 8th Wall Web, select release. To use a pre-release version, select beta. If you select "live version", the version of the XR engine will be the one specified in your App Key settings.


Install Dev Token

  • Mobile: If you are logged into the console from your mobile device, simply click Authorize Device to install a dev token on that device.

  • Desktop: If you are logged into the console on your laptop/desktop, click Authorize Device, and then scan the QR code with your mobile device to install a dev token on that device.


Serve Application

From Web Server

If you have a development web server, upload the sample project and connect from a mobile device that has a dev token installed. You'll need to connect via HTTPS, as browsers only allow camera access from secure (HTTPS) origins.

Locally From Mac

Serving a web app locally from your computer can be tricky, as browsers only allow camera access over HTTPS. As a convenience, the 8th Wall Web sample projects include a "serve" script that runs a local HTTPS web server on your development computer.

  1. Install Node.js and npm

If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm

  2. Open a terminal window (Terminal.app, iTerm2, etc):
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_dir>

Example:

./serve/bin/serve -n -d xr3js -p 7777


Locally From Windows

  1. Install Node.js and npm

If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm

  2. Open a Command Prompt (cmd.exe)
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# serve\bin\serve.bat -d <sample_project_dir>

Example:

serve\bin\serve.bat -n -d xr3js -p 7777


View App on Phone

iOS

  1. The “serve” command run in the previous step will display the IP and port to connect to.
  2. Open Safari on iOS 11+ and connect to the “Listening” URL. Note: Safari will complain about the SSL certificate, but you can safely proceed.
  3. Click "visit this website".
  4. Click "Show Details".
  5. Click "Visit Website".
  6. Finally, click "Allow" to grant camera permissions and start viewing the sample AR experience.

Android

  1. The “serve” command run in the previous step will display the IP and port to connect to.
  2. Open Chrome, a Chrome variant (e.g. Samsung Browser) or Firefox.
  3. Chrome example: the browser will complain that the certificate is invalid; simply click "Advanced" to proceed.
  4. Click "PROCEED TO ... (UNSAFE)".

Console Overview

The 8th Wall Console is a web based portal that allows you to:

  • View App Key usage
  • Generate and manage App Keys
  • Manage workspace team members and permissions
  • Manage plans and billing

To access the 8th Wall Console, go to https://console.8thwall.com

New Users:

  1. Create a free account at https://console.8thwall.com/sign-up
  2. Select the "Web Developer" Workspace Type


Existing Users:

  1. Go to https://console.8thwall.com/login and login with your email and password.

  2. If you have multiple workspaces associated with your console user, select your Web Developer workspace.


Console Dashboard

The 8th Wall Web Developer Dashboard is your hub for creating and managing your Web Applications as well as viewing usage and analytics.


App Key Management

An App Key is required to run web apps using 8th Wall Web. This section describes how to create, manage, and use Web App Keys for 8th Wall Web.

Create Key

  1. If you have multiple workspaces associated with your console user, select a Web Developer workspace.

  2. Click "Dashboard" in the left navigation.

  3. Click "Create a new Web App +".

  4. Enter a name for your Web App. The name should be short but descriptive, unique, and contain only letters, spaces, "." and "_" (e.g. "WebARTest").

  5. Click Create.

  6. After creation, your App Key information will be displayed. Copy the AppKey string.

Copy Key

  1. Click Dashboard in the left navigation.

  2. Find the desired app key in the list of "My Web Applications" and click the Edit button to view the Application Information page.

  3. On the Application Information page, copy the AppKey string.

  4. Add the App Key to your A-Frame or three.js project. If you are using a sample project from 8th Wall, edit index.html and replace the "X"s with your app key.

  • Example (A-Frame):

    <script async src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXXX"></script>

  • Example (three.js):

    <script defer src="https://apps.8thwall.com/xrweb?appKey=XXXXXXXXXX"></script>

Disable Key

Note: Disabling an App Key will cause any web applications using it to stop working. You can always re-enable it later.

  1. Click Dashboard in the left navigation

  2. Find the desired app key in the list of "My Web Applications" and click on the name to expand

  3. Click the Status slider to the OFF position

  4. Click OK to confirm.


Enable Key

To re-enable a Web Application after it has been disabled:

  1. Click Dashboard in the left navigation

  2. Find the desired app key in the list of "My Web Applications" and click on the name to expand

  3. Click the Status slider to the ON position

  4. Click OK to confirm.


Delete Key

Note: Deleting an App Key will cause any web apps using it to stop working. You cannot undo this operation.

  1. Click Dashboard in the left navigation

  2. Find the desired app key in the list of "My Web Applications" and click the Delete button

  3. Type "DELETE" to confirm and then click OK.


Allowed Origins

If you have upgraded to the Web Developer Pro plan, you can host your Web Application publicly on your own web server. In order to do so, you will need to specify a list of approved URLs that can host this App Key.

  1. Click Dashboard in the left navigation.

  2. Find the desired app key in the list of "My Web Applications" and click the Edit button to view the Application Information page.

  3. In the "Allowed Origins" section of the Application Information page, enter one or more origins. Click the "+" to add multiple. An origin may not contain a wildcard, a path, or a port (e.g. https://example.com is a valid origin; https://*.example.com and https://example.com/app are not).

XR Engine Version

For each public Web Application, you can configure the version of the XR engine used when serving public web clients.

If you select a Channel (release or beta), public clients will always be served the most recent version of 8th Wall Web. If you freeze the version, you will need to upgrade manually to receive the latest features and improvements of the engine.

NOTE: This is only available to Web Developer Pro workspaces.

In general, 8th Wall recommends using the official release channel for production web apps.

If you would like to test your web app against a pre-release version of 8th Wall Web, which may contain new features and/or bug fixes that haven't gone through full QA yet, you can switch to the beta channel:

  1. Click Dashboard in the left navigation.

  2. Find the desired app key in the list of "My Web Applications" and click the Edit button to view the Application Information page.

To freeze to a specific version, select the desired Channel (release or beta) and click the Freeze button.

To re-join a Channel and stay up-to-date, select the desired Channel radio button.

To roll back to the previous selection, click the Rollback button.

Development Tokens

A Development Token is required for viewing 8th Wall Web applications served locally or hosted applications under Development. This token can be used to view any hosted application within your account. A device can be authorized for only one account at a time.

  1. Login to the console at https://console.8thwall.com

  2. Select "Web Apps" from the left navigation.

  3. Scroll down to the "Development Token" section at the bottom of the page

  4. Select a version of 8th Wall Web to use with your web app. To use the current stable version of 8th Wall Web, select release. To use a pre-release version, select beta. Note: Production apps should use the release channel.

  5. Check Enable


Install Development Token

  • Mobile: If you are logged into the console from your mobile device, simply click Authorize Device to install a dev token on that device.

  • Desktop: If you are logged into the console on your laptop/desktop, click Authorize Device, and then scan the QR code with your mobile device to install a dev token on that device.


Teams

Each Workspace has a team containing one or more Users, each with different permissions. Users can belong to multiple Workspace teams.

Add other members to your team to allow them to use the same App Keys you have created. You can collaboratively work on the same set of apps and see their usage.

Invite Users

  1. Click Teams in the left navigation
  2. Enter the email address(es) for the users you want to invite. Enter multiple emails separated by commas.
  3. Click Invite users


User Roles

Team members can have one of three roles:

  • OWNER
  • ADMIN
  • DEV

Capabilities for each role:

Capability                 OWNER  ADMIN  DEV
View Dashboard / Usage       X      X     X
Web Apps - Create            X      X     X
Web Apps - Edit              X      X     X
Web Apps - Delete            X      X     X
Web Apps - Enable/Disable    X      X     X
Development Tokens           X      X     X
Teams - View Users           X      X     X
Teams - Invite Users         X      X
Teams - Remove Users         X      X
Teams - Manage User Roles    X      X
Workspaces - Create          X      X     X
Workspaces - Edit            X
Workspaces - Manage Plans    X
View Quick Start             X      X     X
View Release Notes           X      X     X
Edit Profile                 X      X     X

Workspaces

A Workspace is a logical grouping of Users and Applications. Grouping Applications under the same Workspace allows you to view consolidated usage and billing. Workspaces can contain one or more Users, each with different permissions. Users can belong to multiple Workspaces.

There are 3 types of Workspaces:

  • Web Developer: Create and manage App Keys for 8th Wall Web - 8th Wall's JavaScript SDK for creating rich AR experiences that run directly in a mobile web browser.
  • XR Developer: Create and manage App Keys for 8th Wall XR for Unity® - 8th Wall's cross-platform Unity SDK for building mobile AR apps that can be deployed on iOS and Android devices.
  • AR Camera: Upload your 3D models and instantly create AR Cameras that run on the web.

Initial Workspace

When signing up for a new 8th Wall account, you must select an initial workspace type. You can create additional workspaces later.

  1. Create a free account at https://console.8thwall.com/sign-up
  2. Select the "Web Developer" Workspace Type


Managing Workspaces

The Workspaces page in the console allows you to view and manage the Workspaces you are a member of.

Use one of the following methods to access the Workspaces page:

  • Click the workspace switcher menu at the top of the left navigation and select Manage All.

  • Or click on your name at the top-right of the page and select Workspaces.

  • Or go directly to https://console.8thwall.com/workspaces to view and manage your workspaces.

Create Workspace

NOTE: You can have a maximum of one FREE Web Developer workspace and one FREE AR Camera workspace.

  1. Go to https://console.8thwall.com/workspaces
  2. Click "Create a New Workspace"
  3. Select Workspace Type
  4. Click Create


Switching Workspaces

The workspace switcher menu can be found at the top of the left navigation. Click it to select your desired Workspace.

Manage Plans

  1. Click Manage Plans in the left navigation
  2. Click Upgrade to Pro to enter your billing information and upgrade to the Web Developer Pro plan.

Please refer to https://8thwall.com/pricing.html for detailed information on plans and pricing.

For enterprise licensing options, please contact the 8th Wall team by filling out the form at https://8thwall.com/products-web.html#contact


API Overview

This section of the documentation contains details of 8th Wall Web's JavaScript API.

XR

Description

Entry point for 8th Wall Web

Functions

  • addCameraPipelineModule: Adds a module to the camera pipeline that will receive event callbacks for each stage in the pipeline.
  • isPaused: Indicates whether or not the XR session is paused.
  • pause: Pause the current XR session. While paused, the camera feed is stopped and device motion is not tracked.
  • resume: Resume the current XR session.
  • run: Open the camera and start running the camera run loop.
  • runPreRender: Executes all lifecycle updates that should happen before rendering.
  • runPostRender: Executes all lifecycle updates that should happen after rendering.
  • stop: Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked.
  • version: Get the 8th Wall Web engine version.

Modules

  • AFrame: Entry point for A-Frame integration with 8th Wall Web.
  • CameraPixelArray: Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array.
  • CanvasScreenshot: Provides a camera pipeline module that can generate screenshots of the current scene.
  • GlTextureRenderer: Provides a camera pipeline module that draws the camera feed to a canvas, as well as extra utilities for GL drawing operations.
  • Sumerian: Entry point for Sumerian integration with 8th Wall Web.
  • Threejs: Provides a camera pipeline module that drives the three.js camera to do virtual overlays.
  • XrController: Provides 6DoF camera tracking and interfaces for configuring tracking.
  • XrDevice: Provides information about device compatibility and characteristics.

XR.addCameraPipelineModule()

XR.addCameraPipelineModule()

Description

8th Wall camera applications are built using a camera pipeline module framework. For a full description on camera pipeline modules, see CameraPipelineModule.

Applications install modules which then control the behavior of the application at runtime. A module object must have a .name string that is unique within the application, and should provide one or more of the camera lifecycle methods, which will be executed at the appropriate point in the run loop.

During the main runtime of an application, each camera frame goes through the following cycle:

onStart -> onCameraStatusChange (requesting -> hasStream -> hasVideo) -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender

Camera modules should implement one or more of the following camera lifecycle methods:

  • onCameraStatusChange: Called when a change occurs during the camera permissions request.
  • onCanvasSizeChange: Called when the canvas changes size.
  • onDeviceOrientationChange: Called when the device changes landscape/portrait orientation.
  • onException: Called when an error occurs in XR. Called with the error object.
  • onPaused: Called when XR.pause() is called.
  • onProcessCpu: Called to read results of GPU processing and return usable data.
  • onProcessGpu: Called to start GPU processing.
  • onRender: Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application provides its own run loop and relies on XR.runPreRender() and XR.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.
  • onResume: Called when XR.resume() is called.
  • onStart: Called when XR starts. This is the first callback after XR.run() is called.
  • onUpdate: Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpu.modulename and processCpu.modulename, where the name is given by module.name = "modulename".
  • onVideoSizeChange: Called when the video changes size.

Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline, keyed by the module's name.

Example 1 - A camera pipeline module for managing camera permissions:

XR.addCameraPipelineModule({
  name: 'camerastartupmodule',
  onCameraStatusChange: ({status}) => {
    if (status == 'requesting') {
      myApplication.showCameraPermissionsPrompt()
    } else if (status == 'hasStream') {
      myApplication.dismissCameraPermissionsPrompt()
    } else if (status == 'hasVideo') {
      myApplication.startMainApplication()
    } else if (status == 'failed') {
      myApplication.promptUserToChangeBrowserSettings()
    }
  },
})

Example 2 - A QR code scanning application could be built like this:

// Install a module which gets the camera feed as a UInt8Array.
XR.addCameraPipelineModule(
  XR.CameraPixelArray.pipelineModule({luminance: true, width: 240, height: 320}))

// Install a module that draws the camera feed to the canvas.
XR.addCameraPipelineModule(XR.GlTextureRenderer.pipelineModule())

// Create our custom application logic for scanning and displaying QR codes.
XR.addCameraPipelineModule({
  name: 'qrscan',
  onProcessCpu: ({onProcessGpuResult}) => {
    // CameraPixelArray.pipelineModule() returned these in onProcessGpu.
    const { pixels, rows, cols, rowBytes } = onProcessGpuResult.camerapixelarray
    const { wasFound, url, corners } = findQrCode(pixels, rows, cols, rowBytes)
    return { wasFound, url, corners }
  },
  onUpdate: ({onProcessCpuResult}) => {
    // These were returned by this module ('qrscan') in onProcessCpu.
    const { wasFound, url, corners } = onProcessCpuResult.qrscan
    if (wasFound) {
      showUrlAndCorners(url, corners)
    }
  },
})

XR.isPaused()

bool XR.isPaused()

Parameters

None

Description

Indicates whether or not the XR session is paused.

Example

// Call XR.pause() / XR.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
  'click',
  () => {
    if (!XR.isPaused()) {
      XR.pause()
    } else {
      XR.resume()
    }
  },
  true)

XR.pause()

XR.pause()

Parameters

None

Description

Pause the current XR session. While paused, the camera feed is stopped and device motion is not tracked.

Example

// Call XR.pause() / XR.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
  'click',
  () => {
    if (!XR.isPaused()) {
      XR.pause()
    } else {
      XR.resume()
    }
  },
  true)

XR.resume()

XR.resume()

Parameters

None

Description

Resume the current XR session after it has been paused.

Example

// Call XR.pause() / XR.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
  'click',
  () => {
    if (!XR.isPaused()) {
      XR.pause()
    } else {
      XR.resume()
    }
  },
  true)

XR.run()

XR.run({canvas, webgl2: true, ownRunLoop: true})

Parameters

  • canvas (HTMLCanvasElement): The HTML canvas that the camera feed will be drawn to.
  • webgl2 (bool, default true): If true, use WebGL2 if available, otherwise fall back to WebGL1. If false, always use WebGL1.
  • ownRunLoop (bool, default true): If true, XR uses its own run loop. If false, you will provide your own run loop and are responsible for calling runPreRender and runPostRender yourself [advanced users only].

Description

Open the camera and start running the camera run loop.

Example

// Open the camera and start running the camera run loop
// In index.html: <canvas id="xrweb" width="480" height="640" />
XR.run({canvas: document.getElementById('xrweb')})
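When ownRunLoop is false, the application must drive the loop itself by calling XR.runPreRender() and XR.runPostRender() around its own drawing. The following sketch is not from the 8th Wall docs; `render` and `schedule` are hypothetical stand-ins for your engine's draw call and requestAnimationFrame:

```javascript
// Sketch of an external run loop for XR.run({canvas, ownRunLoop: false}).
// `xr` is the XR object, `render` issues your WebGL drawing commands, and
// `schedule` is requestAnimationFrame in the browser.
function makeRunLoop(xr, render, schedule) {
  const loop = () => {
    xr.runPreRender(Date.now())  // lifecycle updates before rendering
    render()                     // your engine's drawing
    xr.runPostRender()           // lifecycle updates after rendering
    schedule(loop)               // queue the next frame
  }
  return loop
}

// Browser usage:
//   XR.run({canvas: document.getElementById('xrweb'), ownRunLoop: false})
//   requestAnimationFrame(makeRunLoop(XR, renderScene, requestAnimationFrame))
```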

XR.runPreRender()

XR.runPreRender()

Description

Executes all lifecycle updates that should happen before rendering.

Parameters

None

Example

// Implement A-Frame components tick() method
function tick() {
  // Check device compatibility and run any necessary view geometry updates and draw the camera feed.
  ...
  // Run XR lifecycle methods
  XR.runPreRender(Date.now())
  }

XR.runPostRender()

XR.runPostRender()

Description

Executes all lifecycle updates that should happen after rendering.

Parameters

None

Example

// Implement A-Frame component's tock() method
function tock() {
  // Check whether XR is initialized
  ...
  // Run XR lifecycle methods
  XR.runPostRender()
}

XR.stop()

XR.stop()

Parameters

None

Description

Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked. You must call XR.run() to restart after the engine is stopped.

Example

XR.stop()

XR.version()

string XR.version()

Parameters

None

Description

Get the 8th Wall Web engine version.

Example

console.log(XR.version())

XR.AFrame

A-Frame (https://aframe.io) is a web framework designed for building virtual reality experiences. By adding 8th Wall Web to your A-Frame project, you can now easily build augmented reality experiences for the web.

Adding 8th Wall Web to A-Frame

8th Wall Web can be added to your A-Frame project in a few easy steps:

  1. Include a slightly modified version of A-Frame (referred to as "8-Frame") which fixes some polish concerns:

<script src="//cdn.8thwall.com/web/aframe/8frame-0.8.2.min.js"></script>

  2. Add the following script tag to the HEAD of your page. Replace the X's with your app key:

<script async src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXXXXXX"></script>

  3. Add an "xrweb" component to your a-scene tag:

<a-scene xrweb>
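Putting the three steps together, a minimal page might look like the following sketch (the a-box entity is illustrative, and the app key X's are placeholders you must replace):

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="//cdn.8thwall.com/web/aframe/8frame-0.8.2.min.js"></script>
    <!-- Replace the X's with your app key -->
    <script async src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXXXXXX"></script>
  </head>
  <body>
    <a-scene xrweb>
      <!-- A sample entity placed in front of the camera's starting position -->
      <a-box position="0 0.5 -3" color="#4CC3D9"></a-box>
    </a-scene>
  </body>
</html>
```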

Functions

Function Description
xrwebComponent Creates an A-Frame component which can be registered with AFRAME.registerComponent(). Generally won't need to be called directly.

XR.AFrame.xrwebComponent()

XR.AFrame.xrwebComponent()

Parameters

None

Description

Creates an A-Frame component which can be registered with AFRAME.registerComponent(). This, however, generally won't need to be called directly. On 8th Wall Web script load, this component will be registered if it is detected that A-Frame has loaded (i.e. if window.AFRAME exists).

Example

window.AFRAME.registerComponent('xrweb', XR.AFrame.xrwebComponent())

AFrame Events

This section describes the events emitted by the "xrweb" A-Frame component.

You can listen for these events in your web application and call a function to handle each event.

Events Emitted

  • camerastatuschange: Emitted when the status of the camera changes. See onCameraStatusChange in XR.addCameraPipelineModule for more information on the possible statuses.
  • realityerror: Emitted when an error has occurred while initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help determine what type of error messaging should be displayed.
  • realityready: Emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.
  • screenshoterror: Emitted in response to a screenshotrequest that resulted in an error.
  • screenshotready: Emitted in response to a screenshotrequest completing successfully. A JPEG-compressed image of the A-Frame canvas is provided.

camerastatuschange

Description

This event is emitted when the status of the camera changes. See onCameraStatusChange from XR.addCameraPipelineModule for more information on the possible statuses.

Example:

var handleCameraStatusChange = function handleCameraStatusChange(event) {
  console.log('status change', event.detail.status);

  switch (event.detail.status) {
    case 'requesting':
      // Do something
      break;

    case 'hasStream':
      // Do something
      break;

    case 'failed':
      event.target.emit('realityerror');
      break;
  }
};
let scene = this.el.sceneEl
scene.addEventListener('camerastatuschange', handleCameraStatusChange)

realityerror

Description

This event is emitted when an error has occurred while initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help determine what type of error messaging should be displayed.

Example:

let scene = this.el.sceneEl
  scene.addEventListener('realityerror', (event) => {
    if (XR.XrDevice.isDeviceBrowserCompatible()) {
      // Browser is compatible. Print the exception for more information.
      console.log(event.detail.error)
      return
    }

    // Browser is not compatible. Check the reasons why it may not be.
    for (let reason of XR.XrDevice.incompatibleReasons()) {
      // Handle each XR.XrDevice.IncompatibilityReasons
    }
  })

realityready

Description

This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.

Example:

let scene = this.el.sceneEl
scene.addEventListener('realityready', () => {
  // Hide loading UI
})

screenshoterror

Description

This event is emitted in response to a screenshotrequest that resulted in an error.

Example:

let scene = this.el.sceneEl
scene.addEventListener('screenshoterror', (event) => {
  console.log(event.detail)
  // Handle screenshot error.
})

screenshotready

Description

This event is emitted in response to a screenshotrequest completing successfully. A JPEG-compressed image of the A-Frame canvas is provided.

Example:

let scene = this.el.sceneEl
scene.addEventListener('screenshotready', (event) => {
  // screenshotPreview is an <img> HTML element
  const image = document.getElementById('screenshotPreview')
  image.src = 'data:image/jpeg;base64,' + event.detail
})

AFrame Event Listeners

This section describes the events that are listened for by the "xrweb" A-Frame component.

You can emit these events in your web application to perform various actions:

  • hidecamerafeed: Hides the camera feed. Tracking does not stop.
  • recenter: Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, and then it will recenter.
  • screenshotrequest: Requests that the engine capture a screenshot of the A-Frame canvas. The engine will emit a screenshotready event with the JPEG-compressed image, or screenshoterror if an error has occurred.
  • showcamerafeed: Shows the camera feed.
  • stopxr: Stops the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.

hidecamerafeed

scene.emit('hidecamerafeed')

Parameters

None

Description

Hides the camera feed. Tracking does not stop.

Example

let scene = this.el.sceneEl
scene.emit('hidecamerafeed')

recenter

scene.emit('recenter', {origin, facing})

Parameters

  • origin: {x, y, z} [Optional] The location of the new origin.
  • facing: {w, x, y, z} [Optional] A quaternion representing the direction the camera should face at the origin.

Description

Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.

If origin and facing are not provided, the camera is reset to the origin previously specified by a call to recenter, or by the last call to updateCameraProjectionMatrix(). Note: with A-Frame, updateCameraProjectionMatrix() is initially called based on the initial camera position in the scene.

Example

let scene = this.el.sceneEl
scene.emit('recenter')

// OR

let scene = this.el.sceneEl
scene.emit('recenter', {
  origin: {x: 1, y: 4, z: 0},
  facing: {w: 0.9856, x:0, y:0.169, z:0}
})

screenshotrequest

scene.emit('screenshotrequest')

Parameters

None

Description

Emits a request to the engine to capture a screenshot of the A-Frame canvas. The engine will emit a screenshotready event with the JPEG-compressed image, or screenshoterror if an error has occurred.

Example

const scene = this.el.sceneEl
const photoButton = document.getElementById('photoButton')
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')

// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
  image.src = ""
  scene.emit('screenshotrequest')
})

scene.addEventListener('screenshotready', event => {
  image.src = 'data:image/jpeg;base64,' + event.detail
})

scene.addEventListener('screenshoterror', event => {
  console.log("error")
})

showcamerafeed

scene.emit('showcamerafeed')

Parameters

None

Description

Shows the camera feed.

Example

let scene = this.el.sceneEl
scene.emit('showcamerafeed')

stopxr

scene.emit('stopxr')

Parameters

None

Description

Stops the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.

Example

let scene = this.el.sceneEl
scene.emit('stopxr')

CameraPipelineModule

8th Wall camera applications are built using a camera pipeline module framework. Applications install modules which then control the behavior of the application at runtime.

Refer to XR.addCameraPipelineModule() for details on adding camera pipeline modules to your application.

A camera pipeline module object must have a .name string which is unique within the application. It should implement one or more of the following camera lifecycle methods. These methods will be executed at the appropriate point in the run loop.

During the main runtime of an application, each camera frame goes through the following cycle:

onStart -> onCameraStatusChange (requesting -> hasStream -> hasVideo) -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender

Camera modules should implement one or more of the following camera lifecycle methods:

Function Description
onCameraStatusChange Called when a change occurs during the camera permissions request.
onCanvasSizeChange Called when the canvas changes size.
onDeviceOrientationChange Called when the device changes landscape/portrait orientation.
onException Called when an error occurs in XR. Called with the error object.
onPaused Called when XR.pause() is called.
onProcessCpu Called to read results of GPU processing and return usable data.
onProcessGpu Called to start GPU processing.
onRender Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR.runPreRender() and XR.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.
onResume Called when XR.resume() is called.
onStart Called when XR starts. First callback after XR.run() is called.
onUpdate Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename where the name is given by module.name = "modulename".
onVideoSizeChange Called when the video changes size.

Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline. This data is keyed by the module's name.
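
As an illustration, here is a toy model of that dispatch. The runFrame driver below is invented for this sketch (the real engine's run loop is more involved); it only demonstrates how a module's return values reappear under its name in later stages:

```javascript
// Toy model of the run-loop dispatch -- NOT the real 8th Wall engine.
// It only illustrates the convention: whatever a module returns from
// onProcessGpu / onProcessCpu reappears under its .name in later stages.
function runFrame(modules) {
  const processGpuResult = {}
  const processCpuResult = {}
  modules.forEach(m => {
    if (m.onProcessGpu) { processGpuResult[m.name] = m.onProcessGpu({}) }
  })
  modules.forEach(m => {
    if (m.onProcessCpu) { processCpuResult[m.name] = m.onProcessCpu({processGpuResult}) }
  })
  modules.forEach(m => {
    if (m.onUpdate) { m.onUpdate({processGpuResult, processCpuResult}) }
  })
}

const updates = []
runFrame([{
  name: 'mymodule',
  onProcessGpu: () => ({gpuDataA: 1}),
  onProcessCpu: ({processGpuResult}) => ({
    cpuDataA: processGpuResult.mymodule.gpuDataA + 1,
  }),
  onUpdate: ({processCpuResult}) => {
    updates.push(processCpuResult.mymodule.cpuDataA)  // pushes 2
  },
}])
```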

onCameraStatusChange()

onCameraStatusChange: ({ status, stream, video })

Description

Called when a change occurs during the camera permissions request.

Called with the status, and, if applicable, a reference to the newly available data. The typical status flow will be:

requesting -> hasStream -> hasVideo.

Parameters

Parameter Description
status One of [ 'requesting', 'hasStream', 'hasVideo', 'failed' ]
stream [Optional] The MediaStream associated with the camera feed, if status is hasStream.
video [Optional] The video DOM element displaying the stream, if status is hasVideo.

The status parameter has the following states:

State Description
requesting In 'requesting', the browser is opening the camera, and if applicable, checking the user permissions. In this state, it is appropriate to display a prompt to the user to accept camera permissions.
hasStream Once the user permissions are granted and the camera is successfully opened, the status switches to 'hasStream' and any user prompts regarding permissions can be dismissed.
hasVideo Once camera frame data starts to be available for processing, the status switches to 'hasVideo', and the camera feed can begin displaying.
failed If the camera feed fails to open, the status is 'failed'. In this case it's possible that the user has denied permissions, and so helping them to re-enable permissions is advisable.

Example

XR.addCameraPipelineModule({
  name: 'camerastartupmodule',
  onCameraStatusChange: ({status}) => {
    if (status == 'requesting') {
      myApplication.showCameraPermissionsPrompt()
    } else if (status == 'hasStream') {
      myApplication.dismissCameraPermissionsPrompt()
    } else if (status == 'hasVideo') {
      myApplication.startMainApplication()
    } else if (status == 'failed') {
      myApplication.promptUserToChangeBrowserSettings()
    }
  },
})

onCanvasSizeChange()

onCanvasSizeChange: ({ videoWidth, videoHeight, canvasWidth, canvasHeight })

Description

Called when the canvas changes size. Called with dimensions of video and canvas.

Parameters

Parameter Description
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onCanvasSizeChange: ({videoWidth, videoHeight, canvasWidth, canvasHeight}) => {
    myHandleResize({videoWidth, videoHeight, canvasWidth, canvasHeight})
  },
})

onDeviceOrientationChange()

onDeviceOrientationChange: ({ videoWidth, videoHeight, orientation})

Description

Called when the device changes landscape/portrait orientation.

Parameters

Parameter Description
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onDeviceOrientationChange: ({orientation, videoWidth, videoHeight}) => {
    // handleResize({orientation, videoWidth, videoHeight})
  },
})

onException()

onException: (error)

Description

Called when an error occurs in XR. Called with the error object.

Parameters

Parameter Description
error The error object that was thrown

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onException: (error) => {
    console.error('XR threw an exception', error)
  },
})

onPaused()

onPaused: ()

Description

Called when XR.pause() is called.

Parameters

None

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onPaused: () => {
    console.log('pausing application')
  },
})

onProcessGpu()

onProcessGpu: ({ frameStartResult })

Description

Called to start GPU processing.

Parameters

Parameter Description
frameStartResult { cameraTexture, GLctx, textureWidth, textureHeight, orientation, videoTime, repeatFrame }

The frameStartResult parameter has the following properties:

Property Description
cameraTexture The WebGLTexture containing camera feed data.
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
textureWidth The width (in pixels) of the camera feed texture.
textureHeight The height (in pixels) of the camera feed texture.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).
videoTime The timestamp of this video frame.
repeatFrame True if the camera feed has not updated since the last call.

Returns

Any data that you wish to provide to onProcessCpu and onUpdate should be returned. It will be provided to those methods as processGpuResult.modulename

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessGpu: ({frameStartResult}) => {
    const {cameraTexture, GLctx, textureWidth, textureHeight} = frameStartResult

    if (!cameraTexture.name) {
      console.error("[index] Camera texture does not have a name")
    }

    const restoreParams = XR.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
    // Do relevant GPU processing here
    ...
    XR.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)

    // These fields will be provided to onProcessCpu and onUpdate
    return {gpuDataA, gpuDataB}
  },
})

onProcessCpu()

onProcessCpu: ({ frameStartResult, processGpuResult })

Description

Called to read results of GPU processing and return usable data. Called with { frameStartResult, processGpuResult }. Data returned by modules in onProcessGpu will be present as processGpuResult.modulename where the name is given by module.name = "modulename".

Parameter Description
frameStartResult The data that was provided at the beginning of a frame.
processGpuResult Data returned by all installed modules during onProcessGpu.

Returns

Any data that you wish to provide to onUpdate should be returned. It will be provided to that method as processCpuResult.modulename

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessCpu: ({ frameStartResult, processGpuResult }) => {
    const GLctx = frameStartResult.GLctx
    const { cameraTexture } = frameStartResult
    const { camerapixelarray, mycamerapipelinemodule } = processGpuResult

    // Do something interesting with mycamerapipelinemodule.gpuDataA and mycamerapipelinemodule.gpuDataB
    ...
    
    // These fields will be provided to onUpdate
    return {cpuDataA, cpuDataB}
  },
})

onRender()

onRender: ()

Description

Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR.runPreRender() and XR.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.

Parameters

None

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onRender: () => {
    // This is already done by XR.Threejs.pipelineModule() but is provided here as an illustration.
    XR.Threejs.xrScene().renderer.render()
  },
})

onResume()

onResume: ()

Description

Called when XR.resume() is called.

Parameters

None

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onResume: () => {
    console.log('resuming application')
  },
})

onStart()

onStart: ({ canvas, GLctx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight })

Description

Called when XR starts. First callback after XR.run() is called.

Parameters

Parameter Description
canvas The canvas that backs GPU processing and user display.
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
isWebgl2 True if GLctx is a WebGL2RenderingContext.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onStart: ({canvasWidth, canvasHeight}) => {
    // Get the 3js scene. This was created by XR.Threejs.pipelineModule().onStart(). The
    // reason we can access it here now is because 'mycamerapipelinemodule' was installed after
    // XR.Threejs.pipelineModule().
    const {scene, camera} = XR.Threejs.xrScene()

    // Add some objects to the scene and set the starting camera position.
    myInitXrScene({scene, camera})

    // Sync the xr controller's 6DoF position and camera parameters with our scene.
    XR.XrController.updateCameraProjectionMatrix({
      origin: camera.position,
      facing: camera.quaternion,
    })
  },
})

onUpdate()

onUpdate: ({ frameStartResult, processGpuResult, processCpuResult })

Description

Called to update the scene before render. Called with { frameStartResult, processGpuResult, processCpuResult }. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename where the name is given by module.name = "modulename".

Parameters

Parameter Description
frameStartResult The data that was provided at the beginning of a frame.
processGpuResult Data returned by all installed modules during onProcessGpu.
processCpuResult Data returned by all installed modules during onProcessCpu.

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onUpdate: ({ frameStartResult, processGpuResult, processCpuResult }) => {
    if (!processCpuResult.reality) {
      return
    }
    const {rotation, position, intrinsics} = processCpuResult.reality
    const {cpuDataA, cpuDataB} = processCpuResult.mycamerapipelinemodule
    // ...
  },
})

onVideoSizeChange()

onVideoSizeChange: ({ videoWidth, videoHeight, canvasWidth, canvasHeight, orientation })

Description

Called when the video changes size. Called with dimensions of video and canvas as well as device orientation.

Parameters

Parameters Description
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).

Example

XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onVideoSizeChange: ({videoWidth, videoHeight, canvasWidth, canvasHeight}) => {
    myHandleResize({videoWidth, videoHeight, canvasWidth, canvasHeight})
  },
})

XR.CameraPixelArray

Description

Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array.

Functions

Function Description
pipelineModule A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing.

XR.CameraPixelArray.pipelineModule()

XR.CameraPixelArray.pipelineModule({ luminance, maxDimension, width, height })

Description

A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing.

Parameters

Parameter Default Description
luminance false [Optional] If true, output grayscale instead of RGBA.
maxDimension none [Optional] The size in pixels of the longest dimension of the output image. The shorter dimension will be scaled relative to the size of the camera input so that the image is resized without cropping or distortion.
width camera feed width [Optional] Width of the output image. Ignored if maxDimension is specified.
height camera feed height [Optional] Height of the output image. Ignored if maxDimension is specified.

Returns

Return value is an object made available to onProcessCpu and onUpdate as:

processGpuResult.camerapixelarray: {rows, cols, rowBytes, pixels, srcTex}

Property Description
rows Height in pixels of the output image.
cols Width in pixels of the output image.
rowBytes Number of bytes per row of the output image.
pixels A UInt8Array of pixel data.
srcTex A texture containing the source image for the returned pixels.

Example

XR.addCameraPipelineModule(XR.CameraPixelArray.pipelineModule({ luminance: true }))
XR.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessCpu: ({ processGpuResult }) => {
    const { camerapixelarray } = processGpuResult
    if (!camerapixelarray || !camerapixelarray.pixels) {
      return
    }
    const { rows, cols, rowBytes, pixels } = camerapixelarray

    ...
  },
})

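With luminance: true, pixels holds one grayscale byte per pixel. As a hedged sketch, a reduction like the hypothetical averageLuminance below (not an 8th Wall API) could run over that buffer inside onProcessCpu; note that it steps by rowBytes, since rows may be padded:

```javascript
// Hypothetical helper (not an 8th Wall API): average brightness of a
// grayscale camerapixelarray buffer. Rows may be padded, so index with
// rowBytes rather than assuming pixels.length === rows * cols.
function averageLuminance({rows, cols, rowBytes, pixels}) {
  let sum = 0
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      sum += pixels[r * rowBytes + c]
    }
  }
  return sum / (rows * cols)
}

// A tiny 2x2 frame with values 10, 20, 30, 40 averages to 25.
const frame = {rows: 2, cols: 2, rowBytes: 2, pixels: new Uint8Array([10, 20, 30, 40])}
// averageLuminance(frame) === 25
```
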
XR.CanvasScreenshot

Description

Provides a camera pipeline module that can generate screenshots of the current scene.

Functions

Function Description
configure Configures the expected result of canvas screenshots.
pipelineModule Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed.
setForegroundCanvas Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas.
takeScreenshot Returns a Promise that, when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided.

XR.CanvasScreenshot.configure()

XR.CanvasScreenshot.configure({ maxDimension, jpgCompression })

Description

Configures the expected result of canvas screenshots.

Parameters

Parameter Description
maxDimension [Optional] The value of the largest expected dimension.
jpgCompression [Optional] 1-100 value representing the JPEG compression quality. 100 is little to no loss, and 1 is a very low quality image.

Note: maxDimension defaults to 1280 unless overridden. jpgCompression defaults to 75 unless overridden.

Example

XR.CanvasScreenshot.configure({ maxDimension: 640, jpgCompression: 50 })

XR.CanvasScreenshot.pipelineModule()

XR.CanvasScreenshot.pipelineModule()

Description

Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed.

Parameters

None

Returns

A CanvasScreenshot pipeline module that can be added via XR.addCameraPipelineModule().

Example

XR.addCameraPipelineModule(XR.CanvasScreenshot.pipelineModule())

XR.CanvasScreenshot.setForegroundCanvas()

XR.CanvasScreenshot.setForegroundCanvas(canvas)

Description

Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas.

Only required if you use separate canvases for camera feed vs virtual objects.

Parameters

Parameter Description
canvas The canvas to use as a foreground in the screenshot

Example

const myOtherCanvas = document.getElementById('canvas2')
XR.CanvasScreenshot.setForegroundCanvas(myOtherCanvas)

XR.CanvasScreenshot.takeScreenshot()

XR.CanvasScreenshot.takeScreenshot()

Description

Returns a Promise that, when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided.

Parameters

None

Example

XR.addCameraPipelineModule(XR.CanvasScreenshot.pipelineModule())
XR.CanvasScreenshot.takeScreenshot().then(
  data => {
    // myImage is an <img> HTML element
    const image = document.getElementById('myImage')
    image.src = 'data:image/jpeg;base64,' + data
  },
  error => {
    console.log(error)
    // Handle screenshot error.
  })

XR.GlTextureRenderer

Description

Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations.

Functions

Function Description
configure Configures the pipeline module that draws the camera feed to the canvas.
create Creates an object for rendering from a texture to a canvas or another texture.
fillTextureViewport Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create()
getGLctxParameters Gets the current set of WebGL bindings so that they can be restored later.
pipelineModule Creates a pipeline module that draws the camera feed to the canvas.
setGLctxParameters Restores the WebGL bindings that were saved with getGLctxParameters.
setTextureProvider Sets a provider that passes the texture to draw.

XR.GlTextureRenderer.configure()

XR.GlTextureRenderer.configure({ vertexSource, fragmentSource, toTexture, flipY })

Description

Configures the pipeline module that draws the camera feed to the canvas.

Parameters

Parameter Description
vertexSource [Optional] The vertex shader source to use for rendering.
fragmentSource [Optional] The fragment shader source to use for rendering.
toTexture [Optional] A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas.
flipY [Optional] If true, flip the rendering upside-down.

Example

const purpleShader = 
  // Purple.
  ` precision mediump float;
    varying vec2 texUv;
    uniform sampler2D sampler;
    void main() {
      vec4 c = texture2D(sampler, texUv);
      float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
      vec3 p = vec3(.463, .067, .712);
      vec3 p1 = vec3(1.0, 1.0, 1.0) - p;
      vec3 rgb = y < .25 ? (y * 4.0) * p : ((y - .25) * 1.333) * p1 + p;
      gl_FragColor = vec4(rgb, c.a);
    }`

XR.GlTextureRenderer.configure({fragmentSource: purpleShader})

XR.GlTextureRenderer.create()

XR.GlTextureRenderer.create({ GLctx, vertexSource, fragmentSource, toTexture, flipY })

Description

Creates an object for rendering from a texture to a canvas or another texture.

Parameters

Parameter Description
GLctx The WebGlRenderingContext (or WebGl2RenderingContext) to use for rendering. If no toTexture is specified, content will be drawn to this context's canvas.
vertexSource [Optional] The vertex shader source to use for rendering.
fragmentSource [Optional] The fragment shader source to use for rendering.
toTexture [Optional] A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas.
flipY [Optional] If true, flip the rendering upside-down.

Returns

Returns an object that has a render function:

Property Description
render({ renderTexture, viewport }) A function that renders the renderTexture to the specified viewport. Depending on whether toTexture is supplied, the viewport is either on the canvas that created GLctx, or it is relative to the render texture provided.

The render function has the following parameters:

Parameter Description
renderTexture A WebGlTexture (source) to draw.
viewport The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport().

The viewport is specified by { width, height, offsetX, offsetY } :

Property Description
width The width (in pixels) to draw.
height The height (in pixels) to draw.
offsetX [Optional] The minimum x-coordinate (in pixels) to draw to.
offsetY [Optional] The minimum y-coordinate (in pixels) to draw to.

XR.GlTextureRenderer.fillTextureViewport()

XR.GlTextureRenderer.fillTextureViewport(srcWidth, srcHeight, destWidth, destHeight)

Description

Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create()

Parameters

Parameter Description
srcWidth The width of the texture you are rendering.
srcHeight The height of the texture you are rendering.
destWidth The width of the render target.
destHeight The height of the render target.

Returns

An object: { width, height, offsetX, offsetY }

Property Description
width The width (in pixels) to draw.
height The height (in pixels) to draw.
offsetX The minimum x-coordinate (in pixels) to draw to.
offsetY The minimum y-coordinate (in pixels) to draw to.
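
For intuition, the cover-fit geometry can be sketched in plain JavaScript. This coverViewport function is a hypothetical reimplementation, not the actual 8th Wall code (which may round differently): the source is scaled uniformly until it covers the destination and then centered, so overflow is cropped rather than distorted, and offsets can be negative.

```javascript
// Sketch of cover-fit viewport math -- a hypothetical stand-in for
// XR.GlTextureRenderer.fillTextureViewport(), not the real implementation.
// The source is scaled uniformly to cover the destination, then centered.
function coverViewport(srcWidth, srcHeight, destWidth, destHeight) {
  const scale = Math.max(destWidth / srcWidth, destHeight / srcHeight)
  const width = Math.round(srcWidth * scale)
  const height = Math.round(srcHeight * scale)
  return {
    width,
    height,
    offsetX: Math.round((destWidth - width) / 2),
    offsetY: Math.round((destHeight - height) / 2),
  }
}

// A 640x480 camera texture covering a 1280x720 canvas:
const vp = coverViewport(640, 480, 1280, 720)
// {width: 1280, height: 960, offsetX: 0, offsetY: -120}
```

The returned struct has the same shape as what the render({ renderTexture, viewport }) method of an object created by GlTextureRenderer.create() expects.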

XR.GlTextureRenderer.getGLctxParameters()

XR.GlTextureRenderer.getGLctxParameters(GLctx, textureUnits)

Description

Gets the current set of WebGL bindings so that they can be restored later.

Parameters

Parameter Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext to get bindings from.
textureUnits The texture units to preserve state for, e.g. [GLctx.TEXTURE0]

Returns

A struct to pass to setGLctxParameters.

Example

const restoreParams = XR.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
...
XR.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state

XR.GlTextureRenderer.pipelineModule()

XR.GlTextureRenderer.pipelineModule({ vertexSource, fragmentSource, toTexture, flipY })

Description

Creates a pipeline module that draws the camera feed to the canvas.

Parameters

Parameter Description
vertexSource [Optional] The vertex shader source to use for rendering.
fragmentSource [Optional] The fragment shader source to use for rendering.
toTexture [Optional] A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas.
flipY [Optional] If true, flip the rendering upside-down.

Returns

A GlTextureRenderer pipeline module that can be added via XR.addCameraPipelineModule().

Example

XR.addCameraPipelineModule(XR.GlTextureRenderer.pipelineModule())

XR.GlTextureRenderer.setGLctxParameters()

XR.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)

Description

Restores the WebGL bindings that were saved with getGLctxParameters.

Parameters

Parameter Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext to restore bindings on.
restoreParams The output of getGLctxParameters.

Example

const restoreParams = XR.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
...
XR.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state

XR.GlTextureRenderer.setTextureProvider()

XR.GlTextureRenderer.setTextureProvider(({ frameStartResult, processGpuResult, processCpuResult }) => {} )

Description

Sets a provider that passes the texture to draw. This should be a function that takes the same inputs as cameraPipelineModule.onUpdate.

Parameters

setTextureProvider() takes a function with the following parameters:

Parameter Description
frameStartResult The data that was provided at the beginning of a frame.
processGpuResult Data returned by all installed modules during onProcessGpu.
processCpuResult Data returned by all installed modules during onProcessCpu.

Example

XR.GlTextureRenderer.setTextureProvider(
  ({processGpuResult}) => {
    return processGpuResult.camerapixelarray ? processGpuResult.camerapixelarray.srcTex : null
  })

XR.Sumerian

Amazon Sumerian lets you create VR, AR, and 3D applications quickly and easily. For more information on Sumerian, please see https://aws.amazon.com/sumerian/

Adding 8th Wall Web to Sumerian

Please refer to the following URL for a getting started guide on using 8th Wall Web with Amazon Sumerian:

https://github.com/8thwall/web/tree/master/gettingstarted/xrsumerian

Functions

Function Description
addXRWebSystem Adds a custom Sumerian System to the provided Sumerian world.

XR.Sumerian.addXRWebSystem()

XR.Sumerian.addXRWebSystem(world)

Description

Adds a custom Sumerian System to the provided Sumerian world. If the given world is already running (i.e. in a {World#STATE_RUNNING} state), this system will start itself. Otherwise, it will wait for the world to start before running. When starting, this system will attach to the camera in the scene, modify its position, and render the camera feed to the background. The given Sumerian world must contain only one camera.

Parameters

Parameter Description
world The Sumerian world that corresponds to the loaded scene.

Example

window.XR.Sumerian.addXRWebSystem(world)

Sumerian Events

This section describes the events emitted when using 8th Wall Web with Amazon Sumerian.

You can listen for these events in your web application and call a function to handle them.

Events Emitted

Event Emitted Description
camerastatuschange This event is emitted when the status of the camera changes. See onCameraStatusChange from XR.addCameraPipelineModule for more information on the possible status.
screenshoterror This event is emitted in response to the screenshotrequest resulting in an error.
screenshotready This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image will be provided.
xrerror This event is emitted when an error has occurred when initializing 8th Wall Web.
xrready This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed.

camerastatuschange (Sumerian)

Description

This event is emitted when the status of the camera changes. See onCameraStatusChange from XR.addCameraPipelineModule for more information on the possible status.

Example:

var handleCameraStatusChange = function handleCameraStatusChange(data) {
  console.log('status change', data.status);

  switch (data.status) {
    case 'requesting':
      // Do something
      break;

    case 'hasStream':
      // Do something
      break;

    case 'failed':
      // Do something
      break;
  }
};
window.sumerian.SystemBus.addListener('camerastatuschange', handleCameraStatusChange)

screenshoterror (Sumerian)

Description

This event is emitted in response to the screenshotrequest resulting in an error.

Example:

window.sumerian.SystemBus.addListener('screenshoterror', (data) => {
  console.log(data)
  // Handle screenshot error.
})

screenshotready (Sumerian)

Description

This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the Sumerian canvas will be provided.

Example:

window.sumerian.SystemBus.addListener('screenshotready', (data) => {
    // screenshotPreview is an <img> HTML element
    const image = document.getElementById('screenshotPreview')
    image.src = 'data:image/jpeg;base64,' + data
  })

xrerror

Description

This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.

Example:

window.sumerian.SystemBus.addListener('xrerror', (data) => {
    if (XR.XrDevice.isDeviceBrowserCompatible()) {
      // Browser is compatible. Print the exception for more information.
      console.log(data.error)
      return
    }

    // Browser is not compatible. Check the reasons why it may not be.
    for (let reason of XR.XrDevice.incompatibleReasons()) {
      // Handle each XR.XrDevice.IncompatibleReason
    }
  })

xrready

Description

This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.

Example:

window.sumerian.SystemBus.addListener('xrready', () => {
  // Hide loading UI
})

Sumerian Event Listeners

This section describes the events that are listened for by the Sumerian module in 8th Wall Web.

You can emit these events in your web application to perform various actions:

Event Listener Description
recenter Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
screenshotrequest Emits a request to the engine to capture a screenshot of the Sumerian canvas. The engine will emit a screenshotready event with the JPEG compressed image or screenshoterror if an error has occurred.

recenter (Sumerian)

window.sumerian.SystemBus.emit('recenter', {origin, facing})

Description

Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.

Parameters

Parameter Description
origin: {x, y, z} [Optional] The location of the new origin.
facing: {w, x, y, z} [Optional] A quaternion representing the direction the camera should face at the origin.

Example

window.sumerian.SystemBus.emit('recenter')

// OR

window.sumerian.SystemBus.emit('recenter', {
  origin: { x: 1, y: 4, z: 0 },
  facing: { w: 0.9856, x: 0, y: 0.169, z: 0 }
})

screenshotrequest (Sumerian)

window.sumerian.SystemBus.emit('screenshotrequest')

Parameters

None

Description

Emits a request to the engine to capture a screenshot of the Sumerian canvas. The engine will emit a screenshotready event with the JPEG compressed image or screenshoterror if an error has occurred.

Example

const photoButton = document.getElementById('photoButton')
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')

// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
  image.src = ""
  window.sumerian.SystemBus.emit('screenshotrequest')
})

window.sumerian.SystemBus.addListener('screenshotready', event => {
  image.src = 'data:image/jpeg;base64,' + event.detail
})

window.sumerian.SystemBus.addListener('screenshoterror', event => {
  console.log("error")
})

XR.Threejs

Description

Provides a camera pipeline module that drives the three.js camera to draw virtual overlays.

Functions

| Function | Description |
| --- | --- |
| pipelineModule | A pipeline module that interfaces with the three.js environment and lifecycle. |
| xrScene | Get a handle to the xr scene, camera and renderer. |

XR.Threejs.pipelineModule()

XR.Threejs.pipelineModule()

Description

A pipeline module that interfaces with the three.js environment and lifecycle. The three.js scene can be queried using Threejs.xrScene() once Threejs.pipelineModule()'s onStart method has been called. Setup can be done in another pipeline module's onStart method by referring to Threejs.xrScene(), as long as the second module is added via XR.addCameraPipelineModule() after calling XR.addCameraPipelineModule(Threejs.pipelineModule()).

  • In onStart, a three.js renderer and scene are created and configured to draw over the camera feed.
  • In onUpdate, the three.js camera is driven by the phone's motion.
  • In onRender, the renderer's render() method is invoked.

Note that this module does not actually draw the camera feed to the canvas, GlTextureRenderer does that. To add a camera feed in the background, install the GlTextureRenderer.pipelineModule() before installing this module (so that it is rendered before the scene is drawn).

Parameters

None

Returns

A Threejs pipeline module that can be added via XR.addCameraPipelineModule().

Example

// Add XrController.pipelineModule(), which enables 6DoF camera motion estimation.
XR.addCameraPipelineModule(XR.XrController.pipelineModule())

// Add a GlTextureRenderer which draws the camera feed to the canvas.
XR.addCameraPipelineModule(XR.GlTextureRenderer.pipelineModule())

// Add Threejs.pipelineModule() which creates a threejs scene, camera, and renderer, and
// drives the scene camera based on 6DoF camera motion.
XR.addCameraPipelineModule(XR.Threejs.pipelineModule())

// Add custom logic to the camera loop. This is done with camera pipeline modules that provide
// logic for key lifecycle moments for processing each camera frame. In this case, we'll be
// adding onStart logic for scene initialization, and onUpdate logic for scene updates.
XR.addCameraPipelineModule({
  // Camera pipeline modules need a name. It can be whatever you want but must be unique
  // within your app.
  name: 'myawesomeapp',

  // onStart is called once when the camera feed begins. In this case, we need to wait for the
  // XR.Threejs scene to be ready before we can access it to add content.
  onStart: ({canvasWidth, canvasHeight}) => {
    // Get the three.js scene. This was created by XR.Threejs.pipelineModule().onStart(). The
    // reason we can access it here now is because 'myawesomeapp' was installed after
    // XR.Threejs.pipelineModule().
    const {scene, camera} = XR.Threejs.xrScene()

    // Add some objects to the scene and set the starting camera position.
    myInitXrScene({scene, camera})

    // Sync the xr controller's 6DoF position and camera parameters with our scene.
    XR.XrController.updateCameraProjectionMatrix({
      origin: camera.position,
      facing: camera.quaternion,
    })
  },

  // onUpdate is called once per camera loop prior to render. Any three.js scene updates
  // would typically happen here.
  onUpdate: () => {
    // Update the position of objects in the scene, etc.
    updateScene(XR.Threejs.xrScene())
  },
})

XR.Threejs.xrScene()

XR.Threejs.xrScene()

Description

Get a handle to the xr scene, camera and renderer.

Parameters

None

Returns

An object: { scene, camera, renderer }

| Property | Description |
| --- | --- |
| scene | The three.js scene. |
| camera | The three.js main camera. |
| renderer | The three.js renderer. |

Example

const {scene, camera} = XR.Threejs.xrScene()

XR.XrController

Description

XrController provides 6DoF camera tracking and interfaces for configuring tracking.

Functions

| Function | Description |
| --- | --- |
| configure | Configures what processing is performed by XrController (may have performance implications). |
| pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position. |
| recenter | Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking. |
| updateCameraProjectionMatrix | Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session. |

XR.XrController.configure()

XrController.configure({ enableWorldPoints })

Description

Configures what processing is performed by XrController (may have performance implications).

Parameters

| Parameter | Description |
| --- | --- |
| enableWorldPoints | [Optional] If true, worldPoints will be provided by XrController.pipelineModule(). |

Example

XR.XrController.configure({ enableWorldPoints: true })

XR.XrController.pipelineModule()

XR.XrController.pipelineModule()

Parameters

None

Description

Creates a camera pipeline module that, when installed, receives callbacks when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.

Returns

A camera pipeline module that can be added via XR.addCameraPipelineModule(). Once installed, it makes the camera's pose available to other modules' onUpdate methods as:

processCpuResult.reality: {rotation, position, intrinsics, trackingStatus, worldPoints}

| Property | Description |
| --- | --- |
| rotation: {w, x, y, z} | The orientation (quaternion) of the camera in the scene. |
| position: {x, y, z} | The position of the camera in the scene. |
| intrinsics | A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed. |
| trackingStatus | One of 'UNSPECIFIED', 'NOT_AVAILABLE', 'LIMITED' or 'NORMAL'. |
| worldPoints: [{id, confidence, position: {x, y, z}}] | An array of detected points in the world at their location in the scene. Only filled if XrController is configured to return world points. |

Example

XR.addCameraPipelineModule(XR.XrController.pipelineModule())
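
As a sketch of consuming this data, a custom module installed alongside XrController.pipelineModule() can read processCpuResult.reality in its onUpdate callback. The module name and log messages below are illustrative, not part of the API:

```javascript
// Hypothetical pipeline module that inspects the pose data produced by
// XR.XrController.pipelineModule(). Note that worldPoints is only filled
// after XR.XrController.configure({enableWorldPoints: true}).
const trackingLoggerModule = () => ({
  name: 'trackinglogger',  // any unique name
  onUpdate: ({processCpuResult}) => {
    const reality = processCpuResult && processCpuResult.reality
    if (!reality) { return }
    if (reality.trackingStatus === 'LIMITED') {
      console.log('Tracking is limited - point the camera at a textured surface')
    }
    if (reality.worldPoints) {
      console.log('World points detected: ' + reality.worldPoints.length)
    }
  },
})

// XR.addCameraPipelineModule(trackingLoggerModule())
```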

XR.XrController.recenter()

XR.XrController.recenter()

Parameters

None

Description

Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking.
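
For example, a "recenter" UI button can be wired to this call. The button element and helper function below are hypothetical; `xr` is injected so the sketch stays self-contained:

```javascript
// Hypothetical helper: wire a tap on a button to XrController.recenter().
// In a real page, `xr` would be the XR global.
const wireRecenterButton = (button, xr) => {
  button.addEventListener('click', () => {
    xr.XrController.recenter()
  })
}

// Usage: wireRecenterButton(document.getElementById('recenterButton'), XR)
```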

XR.XrController.updateCameraProjectionMatrix()

XR.XrController.updateCameraProjectionMatrix({ cam, origin, facing })

Description

Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session.

Parameters

| Parameter | Description |
| --- | --- |
| cam | [Optional] { pixelRectWidth, pixelRectHeight, nearClipPlane, farClipPlane } |
| origin: { x, y, z } | [Optional] The starting position of the camera in the scene. |
| facing: { w, x, y, z } | [Optional] The starting direction (quaternion) of the camera in the scene. |

cam has the following parameters:

| Parameter | Description |
| --- | --- |
| pixelRectWidth | The width of the canvas that displays the camera feed. |
| pixelRectHeight | The height of the canvas that displays the camera feed. |
| nearClipPlane | The closest distance to the camera at which scene objects are visible. |
| farClipPlane | The farthest distance to the camera at which scene objects are visible. |

Example

XR.XrController.updateCameraProjectionMatrix({
  origin: { x: 1, y: 4, z: 0 },
  facing: { w: 0.9856, x: 0, y: 0.169, z: 0 }
})
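
The cam parameter is typically derived from the canvas that displays the camera feed. A minimal sketch, where the clip-plane values are illustrative choices rather than API defaults:

```javascript
// Build the `cam` argument from a canvas element. The near/far clip plane
// values here are assumptions for illustration.
const camFromCanvas = (canvas, nearClipPlane = 0.01, farClipPlane = 1000) => ({
  pixelRectWidth: canvas.width,
  pixelRectHeight: canvas.height,
  nearClipPlane,
  farClipPlane,
})

// XR.XrController.updateCameraProjectionMatrix({cam: camFromCanvas(myCanvas)})
```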

XR.XrDevice

Description

Provides information about device compatibility and characteristics.

Properties

| Property | Type | Description |
| --- | --- | --- |
| IncompatibilityReasons | Enum | The possible reasons why a device and browser may not be compatible with 8th Wall Web. |

Functions

| Function | Description |
| --- | --- |
| deviceEstimate | Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable. |
| incompatibleReasons | Returns an array of XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR.XrDevice.isDeviceBrowserCompatible() returns false. |
| incompatibleReasonDetails | Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XR.XrDevice.isDeviceBrowserCompatible() returns false. |
| isDeviceBrowserCompatible | Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported. |

XR.XrDevice.IncompatibilityReasons

Enumeration

Description

The possible reasons why a device and browser may not be compatible with 8th Wall Web.

Properties

| Property | Value | Description |
| --- | --- | --- |
| UNSPECIFIED | 0 | The incompatibility reason is not specified. |
| UNSUPPORTED_OS | 1 | The estimated operating system is not supported. |
| UNSUPPORTED_BROWSER | 2 | The estimated browser is not supported. |
| MISSING_DEVICE_ORIENTATION | 3 | The browser does not support device orientation events. |
| MISSING_USER_MEDIA | 4 | The browser does not support user media access. |
| MISSING_WEB_ASSEMBLY | 5 | The browser does not support WebAssembly. |

XR.XrDevice.deviceEstimate()

XR.XrDevice.deviceEstimate()

Description

Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable.

Parameters

None

Returns

An object: { locale, os, osVersion, manufacturer, model }

| Property | Description |
| --- | --- |
| locale | The user's locale. |
| os | The device's operating system. |
| osVersion | The device's operating system version. |
| manufacturer | The device's manufacturer. |
| model | The device's model. |
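
Example

A sketch that turns the estimate into a display string. Since the fields are best-effort, it falls back to 'unknown' for anything missing:

```javascript
// Build a coarse, human-readable device description from the (possibly
// incomplete) estimate returned by XR.XrDevice.deviceEstimate().
const describeDevice = (estimate) => {
  const {manufacturer = 'unknown', model = 'unknown', os = 'unknown', osVersion} = estimate || {}
  const osPart = osVersion ? os + ' ' + osVersion : os
  return manufacturer + ' ' + model + ' (' + osPart + ')'
}

// console.log(describeDevice(XR.XrDevice.deviceEstimate()))
```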

XR.XrDevice.incompatibleReasons()

XR.XrDevice.incompatibleReasons()

Description

Returns an array of XR.XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR.XrDevice.isDeviceBrowserCompatible() returns false.

Parameters

None

Returns

An array of XR.XrDevice.IncompatibilityReasons values.

Example

const reasons = XR.XrDevice.incompatibleReasons()
for (let reason of reasons) {
  switch (reason) {
    case XR.XrDevice.IncompatibilityReasons.UNSUPPORTED_OS:
      // Handle unsupported OS error messaging.
      break
    case XR.XrDevice.IncompatibilityReasons.UNSUPPORTED_BROWSER:
      // Handle unsupported browser.
      break
    // ...
  }
}

XR.XrDevice.incompatibleReasonDetails()

XR.XrDevice.incompatibleReasonDetails()

Description

Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XrDevice.isDeviceBrowserCompatible() returns false.

Parameters

None

Returns

An object: { inAppBrowser, inAppBrowserType }

| Property | Description |
| --- | --- |
| inAppBrowser | The name of the in-app browser detected (e.g. 'Twitter'). |
| inAppBrowserType | A string that helps describe how to handle the in-app browser. |
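
Example

As a sketch, these hints can drive messaging that nudges the user out of an in-app browser and into a full browser. The message text below is illustrative:

```javascript
// Build a user-facing hint from incompatibleReasonDetails() when the page is
// running inside an in-app browser; returns null when no hint applies.
const inAppBrowserHint = (details) => {
  if (details && details.inAppBrowser) {
    return 'Please open this page in Safari or Chrome (detected: ' + details.inAppBrowser + ')'
  }
  return null
}

// if (!XR.XrDevice.isDeviceBrowserCompatible()) {
//   const hint = inAppBrowserHint(XR.XrDevice.incompatibleReasonDetails())
//   if (hint) { console.log(hint) }
// }
```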

XR.XrDevice.isDeviceBrowserCompatible()

XR.XrDevice.isDeviceBrowserCompatible()

Description

Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported.

Parameters

None

Returns

True or false.
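
Example

A typical gating pattern. Here startAr and showError are hypothetical app callbacks, and the XR global is injected as a parameter so the sketch stays self-contained:

```javascript
// Start the AR experience only if the device/browser is compatible; otherwise
// hand the incompatibility reasons to an error handler.
const startIfCompatible = (xr, startAr, showError) => {
  if (xr.XrDevice.isDeviceBrowserCompatible()) {
    startAr()
    return true
  }
  showError(xr.XrDevice.incompatibleReasons())
  return false
}

// startIfCompatible(XR, () => { /* begin session */ }, (reasons) => console.log(reasons))
```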

Changelog

Release 9.2:

Release 9.1:

  • New Features:

    • Added support for Amazon Sumerian in 8th Wall Web
    • Improved tracking stability and eliminated jitter

Release 9:

  • Initial release of 8th Wall Web!

Invalid App Key

Issue: When trying to view my Web App, I receive an "Invalid App Key" error message

  1. Verify your app key was pasted properly into index.html.
  2. Verify you are connecting to your web app via https.
  3. Verify you have a valid Dev Token. On your phone, visit https://apps.8thwall.com/token to view info about the dev token installed.

Object Not Tracking Surface Properly

Issue: Content in my scene doesn't appear to be "sticking" to a surface properly

Resolution:

To place an object on a surface, the base of the object needs to be at a height of Y=0

Note: Setting the position at a height of Y=0 isn't necessarily sufficient.

For example, if the transform of your model is at the center of the object, placing it at Y=0 will result in part of the object sitting below the surface. In this case you'll need to adjust the vertical position of the object so that the bottom of the object sits at Y=0.
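
One way to compute that adjustment, sketched as a pure helper (in three.js the bounding box would typically come from new THREE.Box3().setFromObject(model)):

```javascript
// Given a bounding box of the shape {min: {y: ...}}, return the Y offset that
// places the object's lowest point at Y=0.
const baseOffsetY = (boundingBox) => -boundingBox.min.y

// three.js usage (sketch):
// const box = new THREE.Box3().setFromObject(model)
// model.position.y += baseOffsetY(box)
```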

It's often helpful to visualize object positioning relative to the surface by placing a semi-transparent plane at Y=0.

A-Frame example:

<a-plane position="0 0 0" rotation="-90 0 0" width="4" height="4" material="side: double; color: #FFFF00; transparent: true; opacity: 0.5" shadow></a-plane>

Three.js example:

  // Create a 1x1 plane with a transparent yellow material
  const geometry = new THREE.PlaneGeometry(1, 1)  // THREE.PlaneGeometry(width, height)
  const material = new THREE.MeshBasicMaterial({color: 0xffff00, transparent: true, opacity: 0.5, side: THREE.DoubleSide})
  const plane = new THREE.Mesh(geometry, material)
  // Rotate 90 degrees around X (in radians) so the plane is parallel to the ground
  plane.rotateX(-Math.PI / 2)
  plane.position.set(0, 0, 0)
  scene.add(plane)

Help & Support

Need some help? 8th Wall is here to help you succeed. Contact us directly, or reach out to the community to get answers.

Ways to get help:

| Email Support | Stack Overflow | GitHub |
| --- | --- | --- |
| Email support@8thwall.com to get help directly from our support team | Get help and discuss solutions with other 8th Wall Web users by using the 8thwall-web tag | Download sample code and view step-by-step setup guides on our GitHub repo |
