HandTracker in a shared AR experience
This section describes how to use the Ur module to add simple hand interaction to the shared AR experience project.
Before you begin
Before getting started, complete the following steps:
- Set up your development environment by following the steps in the Quickstart.
- Follow the Create a simple shared AR experience tutorial or clone the project from the repository.
Initializing the HandTracker
- Install the Ur package.
- Import `Ur` in `ConjureKitManager`:

```csharp
using Auki.Ur;
```

- Import `ARFoundation` to be able to use the `ARRaycastManager`:

```csharp
using UnityEngine.XR.ARFoundation;
```

- Create a private `HandTracker` variable:

```csharp
private HandTracker _handTracker;
```
- Create serializable `ARSession` and `ARRaycastManager` variables:

```csharp
[SerializeField] private ARSession arSession;
[SerializeField] private ARRaycastManager arRaycastManager;
```

- Attach the `ARRaycastManager` component to the AR Session Origin GameObject.
- Drag the `ARSession` and the `ARRaycastManager` components to the corresponding fields on the `ConjureKitManager` GameObject.
- Get the `HandTracker` instance and initialize the AR system in `ConjureKitManager`'s `Start` function:

```csharp
_handTracker = HandTracker.GetInstance();
_handTracker.SetARSystem(arSession, arCamera, arRaycastManager);
```
- Start the `HandTracker` by calling:

```csharp
_handTracker.Start();
```

- Call `_handTracker.Update()` every frame to continuously track the hand while it moves:

```csharp
private void Update()
{
    _handTracker.Update();
}
```
Hand Interaction
Now we want to position a sphere with a collider on our hand's index fingertip so it can interact with the cube.
- Create a 3D sphere, rename it to FingertipLandmark, and scale it down to 0.3.
- On the collider component, tick the `isTrigger` checkbox.
- Create a new material, change its color to something more noticeable, and drag it onto the sphere's mesh renderer.
- In Project Settings -> Tags and Layers, add a new tag named `hand` (or any other name you choose) and assign it to the FingertipLandmark we just created.
- Create a `Renderer` variable for the FingertipLandmark:

```csharp
[SerializeField] private Renderer fingertipLandmark;
```

- Populate it with the sphere we just created.
- In the `Start` function, set the fingertip landmark as a child of our camera transform:

```csharp
fingertipLandmark.transform.SetParent(arCamera.transform);
```
- To receive triggers from other colliders, the cube needs a `Rigidbody` component. Add it and tick the `Is Kinematic` checkbox to make sure the cube doesn't fall.
- Create a new C# script that handles trigger events on the cube. If the cube is triggered by an object tagged `hand`, its color changes to a random color; when the trigger exits, the cube returns to white.

```csharp
using UnityEngine;

public class TouchableByHand : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        // Only react to the fingertip landmark we tagged as "hand"
        if (other.CompareTag("hand"))
        {
            gameObject.GetComponent<Renderer>().material.color = Random.ColorHSV();
        }
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("hand"))
        {
            gameObject.GetComponent<Renderer>().material.color = Color.white;
        }
    }
}
```

- Add this script to the cube prefab.
To get our landmark positions in real time, we use the Ur `OnUpdate` callback, which is invoked whenever a new hand pose is received. The callback passes four parameters:

- `landmarks`: the landmark positions relative to the hand position
- `translations`: the position of each hand in camera space
- `isRightHand`: whether the right or the left hand was detected
- `score`: the hand confidence score; if it is 0, no hand was detected

The landmark and translation arrays contain consecutive floats representing the x, y and z components of a 3D vector.
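To make that flat-array layout concrete, here is a small illustrative sketch. It is written in Python purely to demonstrate the indexing (the real arrays arrive in the C# callback, and `landmark_position` is a hypothetical helper, not part of the Ur API): landmark `i` occupies the three floats at offsets `i * 3` through `i * 3 + 2`, and its world-facing position is the hand translation plus that offset.

```python
def landmark_position(landmarks, translations, index):
    """Decode landmark `index` from a flat [x0, y0, z0, x1, y1, z1, ...] array
    and add the hand's position in camera space."""
    base = index * 3  # each landmark occupies three consecutive floats
    offset = landmarks[base:base + 3]
    return [t + o for t, o in zip(translations[:3], offset)]

# 21 landmarks per hand -> 63 floats; landmark 8 is the index fingertip
landmarks = [float(i) for i in range(63)]
translations = [10.0, 20.0, 30.0]

print(landmark_position(landmarks, translations, 8))  # → [34.0, 45.0, 56.0]
```

This is the same arithmetic the C# callback below performs with `pointerLandmarkIndex = 8 * 3`.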
Once we have the landmarks and translations, we can place the fingertip landmark on landmark 8, the tip of the index finger (see diagram). If the hand tracker detects a hand, we show the fingertip landmark; if not, meaning our hand is not in the camera's sight, we disable the fingertip landmark renderer.
```csharp
_handTracker.OnUpdate += (landmarks, translations, isRightHand, score) =>
{
    if (score[0] > 0)
    {
        var handPosition = new Vector3(
            translations[0],
            translations[1],
            translations[2]);

        var pointerLandmarkIndex = 8 * 3; // Index fingertip: landmark 8, three floats per landmark
        var pointerLandMarkPosition = new Vector3(
            landmarks[pointerLandmarkIndex + 0],
            landmarks[pointerLandmarkIndex + 1],
            landmarks[pointerLandmarkIndex + 2]);

        fingertipLandmark.enabled = true;
        fingertipLandmark.transform.localPosition = handPosition + pointerLandMarkPosition;
    }
    else
    {
        fingertipLandmark.enabled = false;
    }
};
```
Hand Landmarks Visualization
The Ur module lets us visualize the hand landmarks with two simple methods.

- Start by creating a private boolean:

```csharp
private bool landmarksVisualizeBool = true;
```

- Create a toggle method that uses the hand tracker methods `ShowHandMesh` and `HideHandMesh`:

```csharp
public void ToggleHandLandmarks()
{
    landmarksVisualizeBool = !landmarksVisualizeBool;

    if (landmarksVisualizeBool)
    {
        _handTracker.ShowHandMesh();
    }
    else
    {
        _handTracker.HideHandMesh();
    }
}
```
Now it can be toggled using a UI toggle or any other method you choose.
Calibration
The hand tracker can sometimes project the hand landmarks into the wrong depth range, and a short calibration process is required to fix this. The user places their hand on a flat surface, points the camera at it, and moves the device closer and further for a few seconds. Meanwhile, plane ray-casting determines the distance from the camera to the surface the hand is resting on. The demo scene in the sample provided with the Ur package contains a Calibrate Hand Tracker button and code that can easily be reused in any project.
You will also need to add the `ARPlaneManager` component to the AR Session Origin GameObject.
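Ur performs this calibration internally, but the underlying idea can be sketched as a depth scale correction. The sketch below is purely conceptual Python and none of these names come from the Ur API: the ray-cast distance to the surface is compared against the tracker's estimated hand depth, and the resulting scale factor is applied to the hand translation so its depth matches reality.

```python
def depth_scale_factor(raycast_distance, estimated_hand_depth):
    """Illustrative only: ratio between the true camera-to-surface distance
    (from AR plane ray-casting) and the tracker's estimated hand depth."""
    return raycast_distance / estimated_hand_depth

def correct_translation(translation, scale):
    """Scale the hand position in camera space by the calibration factor."""
    return [component * scale for component in translation]

# The tracker estimates the hand at 0.4 m, but the plane ray-cast hit is at 0.5 m:
scale = depth_scale_factor(0.5, 0.4)
print(correct_translation([0.04, -0.02, 0.4], scale))  # depth component becomes 0.5
```

This is only a sketch of why the plane ray-cast matters; the actual correction logic ships inside the Ur package.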
Occlusion Culling
To layer our 3D objects and real-world elements correctly, we use occlusion culling.
- Add the `AROcclusionManager` component to the `AR Camera` GameObject and adjust its properties (set all modes to Fastest).
- To use the `AROcclusionManager` in the code, import `ARSubsystems`:

```csharp
using UnityEngine.XR.ARSubsystems;
```

- Declare a serializable `AROcclusionManager` variable and a boolean for toggling it on and off:

```csharp
[SerializeField] private AROcclusionManager arOcclusionManager;
private bool occlusionBool = true;
```

- Drag the `AR Camera` GameObject to the `AROcclusionManager` field in the `ConjureKitManager`.
- Create a toggle method to switch between the different `AROcclusionManager` modes:

```csharp
public void ToggleOcclusion()
{
    occlusionBool = !occlusionBool;

    arOcclusionManager.requestedHumanDepthMode = occlusionBool ? HumanSegmentationDepthMode.Fastest : HumanSegmentationDepthMode.Disabled;
    arOcclusionManager.requestedHumanStencilMode = occlusionBool ? HumanSegmentationStencilMode.Fastest : HumanSegmentationStencilMode.Disabled;
    arOcclusionManager.requestedEnvironmentDepthMode = occlusionBool ? EnvironmentDepthMode.Fastest : EnvironmentDepthMode.Disabled;
}
```
Here we use a UI toggle to switch between modes, but again, you can use any method you choose.
Complete code
```csharp
using UnityEngine;
using Auki.ConjureKit;
using UnityEngine.UI;
using Auki.ConjureKit.Manna;
using Auki.Ur;
using Auki.Util;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ConjureKitManager : MonoBehaviour
{
    [SerializeField] private Camera arCamera;
    [SerializeField] private ARSession arSession;
    [SerializeField] private ARRaycastManager arRaycastManager;
    [SerializeField] private Text sessionState;
    [SerializeField] private Text sessionID;
    [SerializeField] private GameObject cube;
    [SerializeField] private Button spawnButton;
    [SerializeField] Button qrCodeButton;
    private bool qrCodeBool;

    private IConjureKit _conjureKit;
    private Manna _manna;

    private ARCameraManager arCameraManager;
    private Texture2D _videoTexture;

    [SerializeField] private Renderer fingertipLandmark;
    private HandTracker _handTracker;
    private bool landmarksVisualizeBool = true;

    [SerializeField] private AROcclusionManager arOcclusionManager;
    private bool occlusionBool = true;

    void Start()
    {
        arCameraManager = arCamera.GetComponent<ARCameraManager>();

        _conjureKit = new ConjureKit(
            arCamera.transform,
            "YOUR_APP_KEY",
            "YOUR_APP_SECRET");

        _manna = new Manna(_conjureKit);

        _conjureKit.OnStateChanged += state =>
        {
            if (state == State.JoinedSession)
            {
                Debug.Log("State.JoinedSession " + Time.realtimeSinceStartup);
            }

            if (state == State.Calibrated)
            {
                Debug.Log("State.Calibrated " + Time.realtimeSinceStartup);
            }

            sessionState.text = state.ToString();
            ToggleControlsState(state == State.Calibrated);
        };

        _conjureKit.OnJoined += session =>
        {
            Debug.Log("OnJoined " + Time.realtimeSinceStartup);
            sessionID.text = session.Id.ToString();
        };

        _conjureKit.OnLeft += session =>
        {
            sessionID.text = "";
        };

        _conjureKit.OnEntityAdded += CreateCube;

        _conjureKit.Connect();

        _handTracker = HandTracker.GetInstance();
        _handTracker.SetARSystem(arSession, arCamera, arRaycastManager);

        _handTracker.OnUpdate += (landmarks, translations, isRightHand, score) =>
        {
            if (score[0] > 0)
            {
                var handPosition = new Vector3(
                    translations[0],
                    translations[1],
                    translations[2]);

                var pointerLandmarkIndex = 8 * 3; // Index fingertip
                var pointerLandMarkPosition = new Vector3(
                    landmarks[pointerLandmarkIndex + 0],
                    landmarks[pointerLandmarkIndex + 1],
                    landmarks[pointerLandmarkIndex + 2]);

                fingertipLandmark.enabled = true;
                fingertipLandmark.transform.localPosition = handPosition + pointerLandMarkPosition;
            }
            else
            {
                fingertipLandmark.enabled = false;
            }
        };

        _handTracker.Start();
        _handTracker.ShowHandMesh();
    }

    private void Update()
    {
        FeedMannaWithVideoFrames();
        _handTracker.Update();
    }

    private void FeedMannaWithVideoFrames()
    {
        var imageAcquired = arCameraManager.TryAcquireLatestCpuImage(out var cpuImage);
        if (!imageAcquired)
        {
            AukiDebug.LogInfo("Couldn't acquire CPU image");
            return;
        }

        if (_videoTexture == null) _videoTexture = new Texture2D(cpuImage.width, cpuImage.height, TextureFormat.R8, false);

        var conversionParams = new XRCpuImage.ConversionParams(cpuImage, TextureFormat.R8);
        cpuImage.ConvertAsync(
            conversionParams,
            (status, @params, buffer) =>
            {
                _videoTexture.SetPixelData(buffer, 0, 0);
                _videoTexture.Apply();
                cpuImage.Dispose();

                _manna.ProcessVideoFrameTexture(
                    _videoTexture,
                    arCamera.projectionMatrix,
                    arCamera.worldToCameraMatrix
                );
            }
        );
    }

    private void ToggleControlsState(bool interactable)
    {
        if (spawnButton) spawnButton.interactable = interactable;
        if (qrCodeButton) qrCodeButton.interactable = interactable;
    }

    public void ToggleLighthouse()
    {
        qrCodeBool = !qrCodeBool;
        _manna.SetLighthouseVisible(qrCodeBool);
    }

    public void ToggleHandLandmarks()
    {
        landmarksVisualizeBool = !landmarksVisualizeBool;

        if (landmarksVisualizeBool)
        {
            _handTracker.ShowHandMesh();
        }
        else
        {
            _handTracker.HideHandMesh();
        }
    }

    public void ToggleOcclusion()
    {
        occlusionBool = !occlusionBool;

        arOcclusionManager.requestedHumanDepthMode = occlusionBool ? HumanSegmentationDepthMode.Fastest : HumanSegmentationDepthMode.Disabled;
        arOcclusionManager.requestedHumanStencilMode = occlusionBool ? HumanSegmentationStencilMode.Fastest : HumanSegmentationStencilMode.Disabled;
        arOcclusionManager.requestedEnvironmentDepthMode = occlusionBool ? EnvironmentDepthMode.Fastest : EnvironmentDepthMode.Disabled;
    }

    public void CreateCubeEntity()
    {
        if (_conjureKit.GetState() != State.Calibrated)
            return;

        Vector3 position = arCamera.transform.position + arCamera.transform.forward * 0.5f;
        Quaternion rotation = Quaternion.Euler(0, arCamera.transform.eulerAngles.y, 0);

        Pose entityPos = new Pose(position, rotation);

        _conjureKit.GetSession().AddEntity(
            entityPos,
            onComplete: entity => CreateCube(entity),
            onError: error => Debug.Log(error));
    }

    private void CreateCube(Entity entity)
    {
        if (entity.Flag == EntityFlag.EntityFlagParticipantEntity) return;

        var pose = _conjureKit.GetSession().GetEntityPose(entity);
        Instantiate(cube, pose.position, pose.rotation);
    }
}
```
The full code for this tutorial can be found on GitHub on the `tutorial/handtracker` branch. The complete project with all parts and the latest packages is on the `master` branch of the same repo.