
1. TofAr component

The ToF AR TofAr component provides functionality that is common to the other components.

The TofAr component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

1.1. RunMode

RunMode represents the ToF AR execution mode. ToF AR can operate in the following modes:

  • Default

  • MultiNode

In MultiNode mode, the sensor values and camera images of an Android device can be received and used by other devices.

The current RunMode can be obtained from runMode in RuntimeSettingsProperty.
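For example, the mode can be checked as follows. This is a minimal sketch; it assumes the GetProperty<T>() accessor pattern exposed by the ToF AR managers:

// Sketch: read the current execution mode from TofArManager.
// GetProperty<T>() is an assumed accessor; runMode comes from the text above.
var settings = TofArManager.Instance.GetProperty<RuntimeSettingsProperty>();
if (settings.runMode == RunMode.MultiNode)
{
    Debug.Log("Running in MultiNode mode");
}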

1.2. Prefabs

Name | Description
TofArManager | Provides SDK common functions.

1.3. Component description

For more details on the TofAr components, see API references for TofAr.

2. ToF component

The ToF component provides access to the ToF camera's Depth data, Confidence data, and PointCloud data. The features provided are:

  • Data retrieval

  • Data visualization

  • Controlling the ToF camera (frame rate, shutter speed, and so on)

[Figure: Depth configuration showing the Depth and Confidence screens]
[Figure: Depth configuration showing the Mesh screen]

The ToF component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

2.1. Streams and data structures

The application can retrieve data from the public fields in the manager shown in the following diagram.

[Figure: Stream diagram]

2.1.1. Data layout: Depth data

The pixels of the Depth image are stored as an array of shorts (16-bit). Each short is the measured distance from the ToF camera (the origin), in millimeters.

Index | [0] | [1] | [2] | … | [Width * Height - 1]
Value | 2500 | 2550 | 2525 | … | 0
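Since the image is stored row-major, the distance at pixel (x, y) can be read as follows. This is a sketch that assumes the depth shorts are exposed through TofArTofManager.Instance.DepthData.Data, and that width matches the current ToF camera configuration:

// Sketch: read the distance at pixel (x, y) from the row-major depth image.
short[] depth = TofArTofManager.Instance.DepthData.Data; // assumed accessor
short distanceMm = depth[y * width + x];                 // millimeters from the ToF camera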

2.1.2. Data layout: Confidence data

The pixels of the Confidence image are stored as an array of shorts (16-bit). Each short is a Confidence pixel value that is greater than or equal to zero.

Index | [0] | [1] | [2] | … | [Width * Height - 1]
Value | 100 | 0 | 500 | … | 0
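A common use of the Confidence image is to discard unreliable depth pixels. The following is a minimal sketch, assuming depth and confidence short arrays of equal length as laid out above; the threshold value is illustrative and device-dependent:

// Sketch: zero out depth pixels whose confidence is below a threshold.
const short confidenceThreshold = 100; // hypothetical value
for (int i = 0; i < depth.Length; i++)
{
    if (confidence[i] < confidenceThreshold)
    {
        depth[i] = 0; // treat as "no measurement"
    }
}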

2.1.3. Data layout: PointCloud data

The PointCloud is stored as an array of bytes (8-bit). The 16-bit value for each of x, y, and z is stored in two consecutive elements of the array.

Extract the values with:

  • System.BitConverter.ToInt16(byte[] value, int startIndex)

  • System.Buffer.BlockCopy(Array src, int srcOffset, Array dst, int dstOffset, int count)

For example:

// Get the first point's x and y coordinates (each value spans two bytes).
short x = BitConverter.ToInt16(TofArTofManager.Instance.PointCloudData.Data, 0);
short y = BitConverter.ToInt16(TofArTofManager.Instance.PointCloudData.Data, 2);

// Copy all values into a short array at once.
int size = TofArTofManager.Instance.PointCloudData.Data.Length;
short[] all = new short[size / sizeof(short)];
Buffer.BlockCopy(TofArTofManager.Instance.PointCloudData.Data, 0, all, 0, size);
Byte index | [0] [1] | [2] [3] | [4] [5] | [6] [7] | …
Stored value | x0 | y0 | z0 | x1 | …
Example bytes | 100, 50 | 200, 200 | 100, 60 | 64, 32 | … | 0

The array holds Width * Height * 3 shorts (x, y, and z for each pixel), that is, Width * Height * 3 * 2 bytes in total.

2.2. Prefabs

Name | Description
TofArTofManager | Manages the connection with the ToF camera. Scenes using the ToF functionality must use this prefab.
DepthViewQuad | Quad object that displays the ToF camera image. When placed in a scene, it automatically links with the start/end of the TofArTofManager stream and is displayed.
DepthViewRawImage | RawImage object that displays the ToF camera image. When placed in a scene, it automatically links with the start/end of the TofArTofManager stream and is displayed.
PointCloudMesh | Mesh object that displays the PointCloud. When placed in a scene, it automatically links with the start/end of the TofArTofManager stream and is displayed.
DepthConfigurationPanel | UI for changing the depth mode.
SkeletonDepthView | RawImage object that shows Hand, Body, and Face recognition results on the ToF camera image.

2.3. Component description

For more details on the ToF component, see API references for ToF.

3. Color component

The Color component provides access to the RGB camera data. The features provided are:

  • Data retrieval

  • Data visualization

  • Controlling the RGB camera (resolution, auto-focus, auto-exposure, white-balance, and so on)

  • Still image capture (supported only via AVFoundation on iOS devices)

[Figure: Color component output showing a cup on a table in front of a white wall]

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

3.1. Streams and data structures

The application can retrieve data from the public fields in the manager shown in the following diagram.

[Figure: Stream diagram]

3.1.1. Data layout: color data

The pixels of the color image are stored as an array of bytes (8-bit). The bytes contain the color camera's pixel data; their contents depend on the RGB camera's hardware and settings.

Index | [0] | [1] | [2] | … | [max]
Value | 128 | 120 | 200 | … | 0
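The raw bytes can be read as follows. This is a sketch assuming a ColorData.Data accessor analogous to PointCloudData above; how the bytes decode into pixels depends on the configured format:

// Sketch: grab the raw color bytes; interpret them per the current format.
byte[] colorBytes = TofArColorManager.Instance.ColorData.Data; // assumed accessor
Debug.Log($"Received {colorBytes.Length} bytes of color data");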

3.1.2. Data layout: PhotoColor data

The image is stored as an array of bytes (8-bit) whose format is specified by CapturePhotoProperty.

3.1.3. Data layout: PhotoDepth data

The pixels of the Depth image captured when recording PhotoColor data are stored as an array of bytes (8-bit).

3.2. Prefabs

Name | Description
TofArColorManager | Manages the connection with the RGB camera. Scenes using the Color functionality must use this prefab.
ColorViewQuad | Quad object that displays the Color camera image. When placed in a scene, it automatically links with the start/end of the TofArColorManager stream and is displayed.
ColorViewRawImage | RawImage object that displays the Color camera image. When placed in a scene, it automatically links with the start/end of the TofArColorManager stream and is displayed.

3.3. Component description

For more details on the Color component, see API references for Color.

4. Plane component

The Plane component detects a plane in real time at any given point inside the Depth camera image. The features provided are:

  • Plane data retrieval

  • Dynamic updating of an object inside a Unity scene.

Up to eight detection configurations can be get/set at once. Plane detection stops if null or an empty configuration list is set.

[Figure: Plane component output]

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

4.1. Streams and data structures

The application can retrieve data from the public fields in the manager shown in the following diagram.

[Figure: Stream diagram]

4.1.1. Data layout: plane data

Struct that stores the plane data.

Member | Type | Description
normal | Vector3 | Normal vector of the plane
center | Vector3 | Reference point of the plane detection
radius | float | Distance between the reference point and the closest plane
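A detected plane can, for example, drive a marker object in the scene. A minimal sketch using the members above (how the PlaneData value is obtained from TofArPlaneManager is omitted here):

// Sketch: align a marker Transform with a detected plane.
void AlignToPlane(Transform marker, PlaneData plane)
{
    marker.position = plane.center; // reference point of the detection
    marker.rotation = Quaternion.FromToRotation(Vector3.up, plane.normal); // face along the normal
}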

4.2. Prefabs

Name | Description
TofArPlaneManager | Manages the connection with the ToF AR Plane component. Must be placed in scenes that use the Plane functionality.
PlaneObject | 3D object visualizing the plane detection result. When placed in a scene, the calculated plane, its size (radius), and its slope automatically link with the TofArPlaneManager stream and are displayed.

4.3. Component description

For more details on the Plane component, see API references for Plane.

5. Mesh component

The Mesh component generates a 3D mesh from the Depth camera image in real time. The features provided are:

  • Vertex or triangle data retrieval

  • Dynamic updating of a mesh object inside a Unity scene.

  • Mesh reduction control

  • Mesh generation that excludes the mask area generated by the Segmentation component

[Figure: Mesh component output]

Mesh resolution can be controlled through the settings in AlgorithmConfigProperty.

Setting meshReductionLevel = n (where n ≥ 0) controls the mesh resolution: only 1/(n+1) of the points are used for mesh generation.

When meshReductionLevel = 0, the resolution is at its best. The higher the value, the lower the resolution, but the lighter the processing becomes. In the figure below, the orange PointCloud data is used as input data for the mesh generation.

[Figure: PointCloud data at meshReductionLevel 0 (320x240)]
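The reduction level can be changed at runtime. A minimal sketch, assuming the GetProperty/SetProperty pattern exposed by the ToF AR managers:

// Sketch: use every other point (1/2 resolution) for mesh generation.
var config = TofArMeshManager.Instance.GetProperty<AlgorithmConfigProperty>();
config.meshReductionLevel = 1; // 1/(n+1) of the points are used
TofArMeshManager.Instance.SetProperty(config);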

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

5.1. Streams and data structures

The application can retrieve data from the public fields in the manager shown in the following diagram.

[Figure: MeshStream MeshChannel connected to the public field MeshData]

5.1.1. Data layout: mesh data

Struct that stores the vertex and triangle data.

Member | Type | Description
verticesCount | int | Vertex count
vertices | float* | Pointer to the vertex array
trianglesCount | int | Triangle count
triangles | int* | Pointer to the triangle array
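On the C# side, the native buffers can be copied into managed arrays. A minimal sketch, given a mesh data value meshData, and assuming the pointers surface as System.IntPtr, that each vertex is three consecutive floats (x, y, z), and that each triangle is three consecutive vertex indices:

// Sketch: copy the native vertex and triangle buffers into managed arrays.
using System.Runtime.InteropServices; // for Marshal.Copy

float[] vertexFloats = new float[meshData.verticesCount * 3];
Marshal.Copy(meshData.vertices, vertexFloats, 0, vertexFloats.Length);

int[] triangleIndices = new int[meshData.trianglesCount * 3];
Marshal.Copy(meshData.triangles, triangleIndices, 0, triangleIndices.Length);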

5.2. Prefabs

Name | Description
TofArMeshManager | Manages the connection with the ToF AR Mesh component. Scenes using the Mesh functionality must use this prefab.
DynamicMesh | Mesh object visualizing the mesh generation result. When placed in a scene, the shape formed by the generated vertices automatically links with the start/end of the TofArMeshManager stream and is displayed.

5.3. Component description

For more details on the Mesh component, see API references for Mesh.

6. Coordinate component

The Coordinate component converts between the different coordinate systems of the Depth, RGB, and 3D cameras. Converted coordinate data is mainly retrieved by getting component properties.

[Figure: Coordinate component output]

Calibration information is handled through the CalibrationSettings Prefab and can be shared among applications.

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

6.1. Streams and data structures

This component does not have access to stream data.

6.2. Prefabs

Name | Description
TofArCoordinateManager | Manages the connection with the ToF AR Coordinate component. Scenes using the Coordinate functionality must use this prefab.
CalibrationSettings | Manages the calibration information needed for the Depth and Color image mapping to function. Can save, load, and reset calibration information.

6.3. Component description

For more details on the Coordinate component, see API references for Coordinate.

7. Hand component

The Hand component recognizes, in real time, the locations of the hand and finger joints from the ToF camera data.

[Figure: Hand component output]

The features provided are:

  • Check whether a hand exists or not.

  • Retrieve the joint location data of the hand, and so on.

  • Display the center point of both hands (only when RecognizeConfigProperty.cameraFacing is set to Front).

  • Hand pose and hand gesture recognition (hand pose recognition works on a single frame, while hand gesture recognition works on continuous movement).

  • Fingertip plane touch estimation.

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

7.1. Recognition mode

Recognition mode | Description
OneHandHoldSmapho | Hold the device with one hand, then extend the other hand in front of the device and capture it using the rear camera.
Face2Face | Face the device and capture one or both hands using the front camera.
HeadMount | Mount the device in VR goggles for smartphones, wear it on your head, then extend one or both hands out in front and capture them using the rear camera.

7.2. Recognition distance for each recognition mode

Recognition mode | Recognition distance
OneHandHoldSmapho | 100–800 mm
Face2Face | 100–1200 mm
HeadMount | 100–800 mm

7.3. Streams and data structures

The application can retrieve data from the public fields in the manager shown in the following diagram.

[Figure: Stream diagram]

7.3.1. Data layout: hand data

Struct that stores hand data.

Member | Type | Description
handStatus | HandStatus | Hand recognition status
featurePointsLeft | Vector3[] | XYZ coordinates of the left hand and wrist feature points in the camera coordinate system. The coordinates of ArmCenter cannot be retrieved.
featurePointsRight | Vector3[] | XYZ coordinates of the right hand and wrist feature points in the camera coordinate system. The coordinates of ArmCenter cannot be retrieved.
poseLevelsLeft | float[] | Left hand pose recognition levels
poseLevelsRight | float[] | Right hand pose recognition levels
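The left-hand feature points can, for example, be read when they are available. A minimal sketch, assuming the struct is surfaced as TofArHandManager.Instance.HandData.Data like the other managers' stream data:

// Sketch: read the left-hand feature points from the latest hand data.
var hand = TofArHandManager.Instance.HandData.Data; // assumed accessor
if (hand.featurePointsLeft != null && hand.featurePointsLeft.Length > 0)
{
    Vector3 firstPoint = hand.featurePointsLeft[0]; // camera coordinate system
    Debug.Log($"Left hand feature point 0: {firstPoint}");
}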

7.4. Hand feature point positions

[Figure: Hand feature point index positions]
The coordinates of ArmCenter cannot be retrieved.

7.5. About the Neural Network Library that executes hand recognition processing

The following Neural Network Libraries can be used on the lower layer of hand recognition processing:

  • TensorFlow Lite

The Neural Network Library and RuntimeMode used when executing the application are determined by the following flow:

  1. The values set in the Unity Inspector for TofArHandManager:

    • nnlib

    • runtimeMode

    • runtimeModeAfter

  2. The default values for each device.
    Refer to Default Settings for Hand Library and RuntimeMode for the default values for each device.

7.6. Device Orientation and HandModel Rotation Configuration Overview

No. | Cardboard XR Plugin | Orientation | HandModel.autoRotation | HandModel rotation
1 | Off | Auto Rotation | True | Match device orientation
2 | Off | Auto Rotation | False | Do not rotate
3 | Off | Other | True | Match device orientation
4 | Off | Other | False | Do not rotate
5 | On | Landscape Right | True | Displays correctly when the device orientation is Landscape Right
6 | On | Landscape Right | False | Do not rotate (same orientation as 7)
7 | On | Landscape Left | True | Displays correctly when the device orientation is Landscape Left
8 | On | Landscape Left | False | Do not rotate (same orientation as 7)

7.7. About adding collision detection to the hand model

To add collision detection to the hand model, place the HandCollider Prefab in the scene and configure any optional settings.

7.8. About pose estimation

Pose estimation recognizes hand poses in a single frame. Refer to PoseIndex for pose types.

7.9. About gesture estimation

Gesture estimation recognizes one-second continuous hand movement patterns. Refer to GestureIndex for gesture types.

7.10. Prefabs

Name | Description
TofArHandManager | Manages the connection with the ToF AR Hand component. Scenes using the Hand functionality must use this prefab.
HandModel | 3D object visualizing the hand detection result. When placed in a scene, spheres are displayed at the positions where hand feature points are detected.
RealHandModel | 3D object visualizing the hand detection result. When placed in a scene, a realistic 3D model is displayed where the hand is detected. Variations preset with the following materials are provided under Assets/TofAr/TofArHand/V0/RealHandModel: RealHandModelArmor, RealHandModelIce, RealHandModelNormal, and RealHandModelRim.
HandCollider | Enables collision detection for hand models. When placed in a scene, colliders are placed at the positions where hand feature points are detected.
FingerTouchDetector | Enables the fingertip plane touch estimation function, which estimates whether a fingertip is touching the plane.

7.11. Component description

For more details on the Hand component, see API references for Hand.

8. MarkRecog component

The MarkRecog component recognizes arbitrary marks in binary images. It primarily provides the following function:

  • Recognition of a mark in an image

[Figure: MarkRecog component output]

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | No |

8.1. Streams and data structures

This component does not have access to stream data.

8.2. Prefabs

Name | Description
TofArMarkRecogManager | Must be in the scene for mark recognition to work.
MarkImageDrawer | Links with the Hand recognition component and creates the mark image.

8.3. Component description

For more details on the MarkRecog component, see API references for MarkRecog.

9. Modeling component

The Modeling component accumulates ToF camera depth data over multiple frames and generates 3D mesh data. It mainly provides the following functions:

  • Start/end 3D modeling process

  • Set 3D modeling parameters

  • Output data

  • Mesh generation that excludes the mask area generated by the Segmentation component

[Figure: Modeling screen showing a mesh overlaying the corner of a room with drapes]

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | No |

9.1. Streams and data structures

The application can retrieve data from the public fields in the manager shown in the following diagram.

[Figure: Stream Channel accessing the public field ModelingData]

9.1.1. Data format: ModelingData

A structure that stores a list of vertices and a list of triangles.
The modeling result may be divided into multiple pieces of 3D mesh data (blocks), and the lists store as many entries as there are blocks.

Data is stored in each member in the following format.

Member | Type | Description
vertices | float[] | Vertex array
triangles | int[] | Triangle array
blockIndex | int[] | Block index

9.2. Prefabs

None

9.3. Component description

For more details on the Modeling component, see API references for Modeling.

10. Body component

[Figure: Output screen showing a man overlaid with a body pose skeleton]

The Body component performs Body recognition related processing and provides recognition result data to the application. It provides the following functions:

  • Body recognition from ToF camera image

  • Body recognition data output in AR Foundation compliant format

  • Display of Body recognition result

  • Body gesture recognition

The application can get the data from the TofArBodyManager.BodyData field.

This component is available on the following platforms:

Platform | Available | Note
Android | Yes | Available estimators: SV2BodyPoseEstimator
iOS | Yes | Available estimators: SV2BodyPoseEstimator, ARFoundationConnector

10.1. Data format: Body data

The application can get the structure that stores the Body information from the TofArBodyManager.BodyData field.

Member | Type | Description
results | BodyResult[] | Body information array

BodyResult has the same definition as the ARHumanBody class of AR Foundation.
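The recognized bodies can be enumerated, for example, as follows. A minimal sketch using the BodyData field named above; the results nesting is assumed from the table:

// Sketch: enumerate the recognized bodies.
var bodyData = TofArBodyManager.Instance.BodyData; // field named in the text above
foreach (var body in bodyData.Data.results)        // nesting of the struct is assumed
{
    Debug.Log($"Recognized a body: {body}");
}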

10.2. Definition of joint position

The basic definition of joint position is based on ARKit.

The joint position index corresponding to the charts in the Validating a Model for Motion Capture article from Apple is defined in TofAr.V0.Body.JointIndices.

Since the number of joints and the joint positions recognized differ depending on the Body recognition engine used, they are converted before being stored in the Body data. The conversion mapping is described separately.

10.3. Body recognition engine

The following recognition engines are supported:

  • SV2BodyPoseEstimator

  • ARFoundationConnector

10.4. About gesture estimation

Gesture estimation is performed by the TofArBodyGestureManager. The result can be obtained by registering a handler for the TofArBodyGestureManager.OnGestureEstimated event, as shown in the sketch below.

Refer to BodyGesture for gesture types.
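A minimal sketch of registering a handler inside a MonoBehaviour; the event name comes from the text above, while the handler signature and the Instance access pattern are assumptions:

// Sketch: subscribe to body gesture estimation results.
void OnEnable()
{
    TofArBodyGestureManager.Instance.OnGestureEstimated += HandleGesture; // access pattern assumed
}

void OnDisable()
{
    TofArBodyGestureManager.Instance.OnGestureEstimated -= HandleGesture;
}

void HandleGesture(BodyGesture gesture) // signature assumed
{
    Debug.Log($"Body gesture estimated: {gesture}");
}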

10.5. Prefabs

Name | Description
TofArBodyManager | Manages the connection with the ToF AR Body component. Must be placed in scenes that use Body functions.
BodySkeleton | Displays the Body recognition result as a simple skeleton.

10.6. Component description

For more details on the Body component, see API references for Body.

11. SV2BodyPoseEstimator

Performs Body recognition processing on ToF AR Depth data.

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

11.1. Method

11.1.1. Scene settings

  1. The following must be in the scene:

    • TofArTof/TofArTofManager

    • TofArBody/TofArBodyManager

  2. Set TofArBodyManager.DetectorType to BodyPoseDetectorType.Internal_SV2

  3. Place the BodySkeleton Prefab at the origin of the scene.

11.1.2. Runtime settings

The recognition process starts when TofArTofManager and TofArBodyManager start.

11.2. Joint position mapping

The recognition result is mapped to the ARKit joint indices as follows:

ARKit | SV2BodyPoseEstimator joint index
JointIndices.Head | 0
JointIndices.Neck1 | 1
JointIndices.RightShoulder1 | 2
JointIndices.RightForearm | 3
JointIndices.RightHand | 4
JointIndices.LeftShoulder1 | 5
JointIndices.LeftForearm | 6
JointIndices.LeftHand | 7
JointIndices.RightUpLeg | 8
JointIndices.RightLeg | 9
JointIndices.RightFoot | 10
JointIndices.LeftUpLeg | 11
JointIndices.LeftLeg | 12
JointIndices.LeftFoot | 13
JointIndices.RightEyeball | 14
JointIndices.LeftEyeball | 15
JointIndices.RightEye | 16
JointIndices.LeftEye | 17

12. ARFoundationConnector

ARFoundationConnector allows you to use both ARFoundation and ToF AR in a Unity project. The current version of ToF AR does not control the lower camera layer directly: it takes the output data frames from the RGB or ToF cameras that ARFoundation obtains, and then runs the Hand and Body recognition processes on them. Using ARFoundationConnector, app developers can receive data frames from ToF AR through the same interface, even if ARFoundation is not used directly.

[Figure: Schematic of the application's connections]

12.1. Method

For more details, see Use AR Foundation.

12.2. Prefabs

Name | Description
ARFoundationConnector | Inputs frame data obtained by ARFoundation into ToF AR. App developers can receive data frames from ToF AR through the same interface, even if ARFoundation is not used directly.

12.3. Component description

For each kind of data that ARFoundation outputs, ARFoundationConnector makes the data available from the corresponding ToF AR Manager component, as listed below.

ARFoundation function | Corresponding ToF AR Manager
RGB Color image | TofArColorManager
Depth data | TofArTofManager
Human Stencil, Human Depth | TofArSegmentationManager
3D Body tracking | TofArBodyManager
Face tracking | TofArFaceManager
Device tracking | TofArSlamManager

For more details on the features that ARFoundation supports on each platform, see com.unity.xr.arfoundation (Platform Support).
Features that are not supported by ARFoundation are also not supported by ToF AR.

13. Segmentation component

[Figure: Segmentation output showing a skyline with the sky section replaced]

The Segmentation component performs Segmentation recognition related processing and provides recognition result data to the application. It mainly provides the following functions:

  • Create a mask texture by estimating the sky section of the Color camera image (when using SkyDetector).

  • Create a mask texture by estimating the person section of the Color camera image (when using HumanDetector).

This component is available on the following platforms:

Platform | Available | Note
Android | Yes | Available estimators: SkyDetector, HumanDetector
iOS | Yes |

13.1. Data format: Segmentation data

The application can get the structure that stores the Segmentation information from the TofArSegmentationManager.SegmentationData field. For more details, refer to SegmentationResults.

13.2. Segmentation recognition engine

The following recognition engines are supported:

  • SkyDetector

  • HumanDetector

  • AR Foundation Human Stencil / Human Depth

13.3. Prefabs

Name | Description
TofArSegmentationManager | Manages the connection with the ToF AR Segmentation component. Must be placed in scenes that use Segmentation functions.
SkySegmentationDetector | Creates a mask texture by estimating the sky section of the Color camera image.
HumanSegmentationDetector | Creates a mask texture by estimating the person section of the Color camera image.

13.4. Component description

For more details on the Segmentation component, see API references for Segmentation.

14. SkyDetector

Creates a mask texture by estimating the sky section from the Color camera image.

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

14.1. Method

  1. Place the SkySegmentationDetector Prefab in the scene.

Upon start, it connects to TofArColorManager and TofArSegmentationManager and starts estimation processing; therefore, each Manager must already exist at that point.

TofArColorManager must be set to output RGB format Color data. For more details, refer to FormatConvertProperty.

14.2. Output data format

14.2.1. Segmentation information

The application can get the structure that stores the Segmentation information from the TofArSegmentationManager.SegmentationData field. For more details, refer to SegmentationResults.

OutputFormat:

SegmentationResult.name | SegmentationResult.rawPointer | SegmentationResult.maskBufferByte
Sky | Pointer to the TextureFormat.R8 format buffer | TextureFormat.R8 format buffer

14.3. Mask texture 2D information

Texture2D can be obtained from the SkySegmentationDetector.MaskTexture property.

The texture format is TextureFormat.R8.
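The mask can, for example, be shown on a UI element. A minimal sketch; MaskTexture is the property named above, and the RawImage usage is ordinary Unity UI:

using UnityEngine;
using UnityEngine.UI;

public class SkyMaskView : MonoBehaviour
{
    public SkySegmentationDetector detector; // set in the Inspector
    public RawImage maskImage;

    void Update()
    {
        // MaskTexture is a single-channel (R8) texture.
        maskImage.texture = detector.MaskTexture;
    }
}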

For more details, refer to API references for Segmentation.Sky.

15. HumanDetector

Creates a mask texture by estimating the human parts in the Color camera image.

This component is available on the following platforms:

Platform | Available | Note
Android | Yes |
iOS | Yes |

15.1. Method

  • Place the HumanSegmentationDetector Prefab in the scene.

Upon start, it connects to TofArColorManager and TofArSegmentationManager and starts estimation processing; therefore, each Manager must already exist at that point.

TofArColorManager must be set to output RGB format Color data. For more details, refer to FormatConvertProperty.

15.2. Output data format

15.2.1. Segmentation information

The application can get the structure that stores the Segmentation information from the TofArSegmentationManager.SegmentationData field. For more details, refer to SegmentationResults.

OutputFormat:

SegmentationResult.name | SegmentationResult.rawPointer | SegmentationResult.maskBufferByte
Human | Pointer to the TextureFormat.R8 format buffer | TextureFormat.R8 format buffer

15.3. Mask texture 2D information

Texture2D can be obtained from the HumanSegmentationDetector.MaskTexture property.

The texture format is TextureFormat.R8.

For more details, refer to API references for Segmentation.Human.

16. AR Foundation Human Stencil / Human Depth

Gets Human Stencil data and Human Depth data from AR Foundation and creates mask textures.

This component is available on the following platforms:

Platform | Available | Note
Android | No |
iOS | Yes |

16.1. Method

  1. Set up ARFoundationConnector. For more information, see Setting up AR Foundation.

  2. Enable the AR Foundation Segmentation component from the ARFoundationConnector Inspector.

  3. Enable Capture Stencil and Capture Depth in the AR Foundation Segmentation component.

    [Figure: Inspector window with the Capture Stencil and Capture Depth settings marked]

It connects to TofArSegmentationManager at runtime and performs its processing; therefore, the manager must already exist at runtime.

16.2. Output data format

The structure containing Segmentation information can be obtained from the TofArSegmentationManager.SegmentationData field. For more details, refer to SegmentationResults.

OutputFormat:

SegmentationResult.name | SegmentationResult.rawPointer | SegmentationResult.maskBufferByte | SegmentationResult.maskBufferFloat
Human | Pointer to the TextureFormat.R8 format buffer | TextureFormat.R8 format buffer |
HumanDepth | Pointer to the TextureFormat.RFloat format buffer | | TextureFormat.RFloat format buffer

17. Face component

The Face component performs Face recognition related processing and provides recognition result data to the application. It mainly provides the following functions:

  • Output Face recognition data and BlendShape data in an AR Foundation compliant format

  • Display Face recognition results

  • Display line of sight recognition results

[Figure: Face component output]

The application can get the data from the TofArFaceManager.FaceData field.

This component is available on the following platforms:

Platform Available Note

Android

No

iOS

Yes

Uses ARKit internally.
Cannot be used simultaneously with AR Foundation and others using ARKit.

17.1. Data format: Face data

The application can obtain the structure containing Face information from the TofArFaceManager.FaceData field.

Member | Type | Description
results | FaceResult[] | Face information array

FaceResult is defined like the AR Foundation ARFace class with an additional field for BlendShape data.

17.2. Prefabs

Name | Description
TofArFaceManager | Manages the connection with the ToF AR Face component. Must be placed in scenes that use Face functions.
FaceModel | Displays the face recognition result as a mesh. Also displays the line of sight recognition result.

17.3. Component description

For more details on the Face component, see API references for Face.