Gaze Tracker

The Gaze Tracker records the position and rotation of the HMD as well as the position of the gaze point in the world. This API requires you to provide these values. The examples assume the following about your C++ project:

  • There is a renderer with some functions to convert world coordinates into model coordinates
  • There is a physics engine that can raycast into the world and return a position and pointer to the 'thing' that was hit

There are many ways to structure your code, but the example code on this page assumes it is structured as follows (a sketch of these assumed interfaces appears after the note below):

  • There is an 'Entity' for each object. This Entity holds references to components with transformation data and mesh data
  • There is a Dynamic Object Component that can access a Dynamic Object Id

Note

The code below is provided as a high-level example. The actual implementation in your project will be different.
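
The examples also reference a handful of engine-side types in an Example namespace (Entity, PhysicsClass, and so on). These are hypothetical stand-ins for your own engine code, not part of this API. A minimal sketch of the assumed interfaces might look like this:

#include <string>
#include <vector>

//Stand-ins for your own engine code. All names here are illustrative assumptions.
namespace Example
{
 class TransformComponent
 {
 public:
  //converts a position from world space into this entity's model space
  std::vector<float> WorldToLocalPosition(const std::vector<float>& worldPosition);
 };

 class MeshComponent
 {
 public:
  //returns the UV coordinates of the mesh surface at a model space position
  std::vector<float> GetUVs(const std::vector<float>& localPosition);
 };

 class VideoComponent
 {
 public:
  long GetCurrentFrame();
  std::string GetMediaId();
 };

 class DynamicObjectComponent
 {
 public:
  std::string GetDynamicObjectId();
 };

 class Entity
 {
 public:
  //each may return nullptr if the entity lacks that component
  TransformComponent* GetTransformComponent();
  MeshComponent* GetMeshComponent();
  VideoComponent* GetVideoComponent();
  DynamicObjectComponent* GetDynamicObjectComponent();
 };

 class PhysicsClass
 {
 public:
  //raycast forward from 'position' along 'rotation' (a quaternion); returns true if something was hit
  static bool Raycast(const std::vector<float>& position, const std::vector<float>& rotation, std::vector<float>& refHitPosition);
  //as above, but returns a pointer to the Entity that was hit, or nullptr
  static Entity* RaycastToEntityPtr(const std::vector<float>& position, const std::vector<float>& rotation, std::vector<float>& refHitPosition);
 };
}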

Gaze Interval and Aggregation

In order to correctly aggregate session data, you should match the GazeInterval field to the rate at which gaze is recorded in your project. The default value of 0.1 (i.e. recording 10 gaze points per second) is fine for most projects. This default is also the maximum frequency for most Cognitive licenses; there will be a warning if you go over this limit.

If you want to record Gaze less frequently, you can change GazeInterval in the CoreSettings before calling the CognitiveVRAnalyticsCore constructor.

std::unique_ptr<cognitive::CognitiveVRAnalyticsCore> cog;
int main()
{
    cognitive::CoreSettings settings;
    settings.webRequest = &MakeWebRequest;
    //...

    //set a new gaze interval to record gaze every half second
    settings.GazeInterval = 0.5f;
    cog = cognitive::make_unique_cognitive<cognitive::CognitiveVRAnalyticsCore>(settings);
    cog->StartSession();
    //...
}

Gaze without a Gaze Point

This first example covers the case where there is no logical GazePoint, such as when the user is looking at the sky.

std::vector<float> GetHMDPosition()
{
 return std::vector<float>{0,0,0}; //TODO get your hmd position in world space
}
std::vector<float> GetHMDRotation()
{
 return std::vector<float>{0,0,0,1}; //TODO get your hmd rotation in world space as a quaternion
}

float elapsedTime = 0.0f;
//OnUpdate should be called from your main loop every frame; 'deltaTime' is the time since the last frame was rendered
void OnUpdate(float deltaTime)
{
 elapsedTime += deltaTime;
 auto instance = cognitive::CognitiveVRAnalyticsCore::Instance();
 if (elapsedTime >= instance->GazeInterval)
 {
  elapsedTime -= instance->GazeInterval;
  std::vector<float> HMDPosition = GetHMDPosition();
  std::vector<float> HMDRotation = GetHMDRotation();
  instance->GetGazeTracker()->RecordGaze(HMDPosition,HMDRotation);
 }
}

Gaze with a World Gaze Point

Here is an example of recording a user's gaze against a surface, assuming the surface that was hit is not a Dynamic Object.

//return true if hit a surface and set refWorldGazePoint to the hit point
bool GetGazePoint(const std::vector<float>& hmdPos, const std::vector<float>& hmdRot, std::vector<float>& refWorldGazePoint)
{
 //PSEUDOCODE! your implementation to raycast forward from the HMD position. Sets refWorldGazePoint
 bool hitSomething = Example::PhysicsClass::Raycast(hmdPos, hmdRot, refWorldGazePoint);
 return hitSomething;
}

void OnUpdate(float deltaTime)
{
 //...
 std::vector<float> HMDPosition = GetHMDPosition();
 std::vector<float> HMDRotation = GetHMDRotation();
 std::vector<float> RefWorldGazePoint = { 0,0,0 };

 if (GetGazePoint(HMDPosition,HMDRotation,RefWorldGazePoint))
 {
  cog->GetGazeTracker()->RecordGaze(HMDPosition, HMDRotation, RefWorldGazePoint);
 }
}

Gaze with a Dynamic Object Gaze Point

This is an example of recording gaze while the user is looking at a Dynamic Object. The outcome is similar to recording a World Gaze Point, but the gaze point is transformed into model space.

//return the DynamicObjectId of the hit object and set refLocalGazePoint to the hit point in model space
std::string GetGazePointDynamic(const std::vector<float>& hmdPos, const std::vector<float>& hmdRot, std::vector<float>& refLocalGazePoint)
{
 //PSEUDOCODE! your implementation to raycast forward from the HMD position
 //Assumptions:
 //There is an Entity class that can be accessed from the raycast method
 //There is a DynamicObjectComponent that holds a unique ID for that Dynamic Object

 std::vector<float> hitWorldPosition = {0,0,0};

 Example::Entity* hitEntity = Example::PhysicsClass::RaycastToEntityPtr(hmdPos, hmdRot, hitWorldPosition);
 if (hitEntity == nullptr){return std::string();}
 Example::DynamicObjectComponent* hitDynamicObject = hitEntity->GetDynamicObjectComponent();
 if (hitDynamicObject == nullptr){return std::string();}

 //transforms hitWorldPosition from a world space coordinate to a model space coordinate
 refLocalGazePoint = hitEntity->GetTransformComponent()->WorldToLocalPosition(hitWorldPosition);

 return hitDynamicObject->GetDynamicObjectId();
}

void OnUpdate(float deltaTime)
{
 std::vector<float> HMDPosition = GetHMDPosition();
 std::vector<float> HMDRotation = GetHMDRotation();
 std::vector<float> RefLocalGazePoint = { 0,0,0 };

 std::string hitDynamicObjectId = GetGazePointDynamic(HMDPosition,HMDRotation,RefLocalGazePoint);

 if (!hitDynamicObjectId.empty())
 {
  cog->GetGazeTracker()->RecordGaze(HMDPosition, HMDRotation, RefLocalGazePoint, hitDynamicObjectId);
 }
}
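
WorldToLocalPosition above is assumed to exist in your engine. If you need to write it yourself, a minimal sketch, assuming the entity's world transform is stored as a position, a unit quaternion (x,y,z,w), and a uniform scale, might look like this:

//Hypothetical sketch of a world space to model space transform. The transform
//representation and all names here are illustrative assumptions.
std::vector<float> WorldToLocalPosition(const std::vector<float>& entityPos,
 const std::vector<float>& entityRot, float entityScale, const std::vector<float>& worldPoint)
{
 //translate into the entity's frame
 float px = worldPoint[0] - entityPos[0];
 float py = worldPoint[1] - entityPos[1];
 float pz = worldPoint[2] - entityPos[2];

 //rotate by the conjugate of the unit quaternion (its inverse) to undo the entity's rotation
 float ux = -entityRot[0], uy = -entityRot[1], uz = -entityRot[2], w = entityRot[3];
 //v' = v + w*t + u x t, where t = 2*(u x v)
 float tx = 2.0f * (uy * pz - uz * py);
 float ty = 2.0f * (uz * px - ux * pz);
 float tz = 2.0f * (ux * py - uy * px);
 float rx = px + w * tx + (uy * tz - uz * ty);
 float ry = py + w * ty + (uz * tx - ux * tz);
 float rz = pz + w * tz + (ux * ty - uy * tx);

 //undo the entity's scale
 return { rx / entityScale, ry / entityScale, rz / entityScale };
}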

Gaze with a Video Gaze Point

This is essentially the same as recording gaze on a Dynamic Object, but requires a few additional parameters: a media id, the current media time, and the UV coordinates at the gaze point.

//return the DynamicObjectId of the hit object and set refLocalGazePoint to the hit point in model space
std::string GetGazePointVideo(const std::vector<float>& hmdPos, const std::vector<float>& hmdRot,
 std::vector<float>& refLocalGazePoint, std::string& refMediaId, long& refMediaTime, std::vector<float>& refUVS)
{
 //PSEUDOCODE! your implementation to raycast forward from the HMD position
 //Assumptions:
 //There is an Entity class that can be accessed from the raycast method
 //A MeshComponent that has a function to get the UV (ST in OpenGL) coordinates from a model space coordinate
 //A DynamicObjectComponent that holds a unique ID for that Dynamic Object
 //And a VideoComponent that holds some data about the current state of the video player

 std::vector<float> hitWorldPosition = {0,0,0};

 Example::Entity* hitEntity = Example::PhysicsClass::RaycastToEntityPtr(hmdPos, hmdRot, hitWorldPosition);
 if (hitEntity == nullptr){return std::string();}
 Example::DynamicObjectComponent* hitDynamicObject = hitEntity->GetDynamicObjectComponent();
 if (hitDynamicObject == nullptr){return std::string();}

 //transforms hitWorldPosition from a world space coordinate to a model space coordinate
 refLocalGazePoint = hitEntity->GetTransformComponent()->WorldToLocalPosition(hitWorldPosition);

 Example::VideoComponent* hitVideoComponent = hitEntity->GetVideoComponent();
 if (hitVideoComponent == nullptr){return std::string();}

 //gets the UV coordinates at the local gaze point
 refUVS = hitEntity->GetMeshComponent()->GetUVs(refLocalGazePoint);
 refMediaTime = hitVideoComponent->GetCurrentFrame();
 refMediaId = hitVideoComponent->GetMediaId();

 return hitDynamicObject->GetDynamicObjectId();
}

void OnUpdate(float deltaTime)
{
 std::vector<float> HMDPosition = GetHMDPosition();
 std::vector<float> HMDRotation = GetHMDRotation();
 std::vector<float> RefLocalGazePoint = { 0,0,0 };
 std::string RefMediaId;
 long RefMediaTime = 0;
 std::vector<float> RefUVS = { 0,0 };

 std::string hitDynamicObjectId = GetGazePointVideo(HMDPosition,HMDRotation,RefLocalGazePoint,RefMediaId,RefMediaTime,RefUVS);

 if (!hitDynamicObjectId.empty())
 {
  cog->GetGazeTracker()->RecordGaze(HMDPosition, HMDRotation, RefLocalGazePoint, hitDynamicObjectId, RefMediaId, RefMediaTime, RefUVS);
 }
}
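
GetUVs is likewise assumed to exist on your mesh component. If it does not, one common approach, sketched here under the assumption that you can find the triangle containing the hit point, is to interpolate that triangle's vertex UVs with barycentric coordinates:

//Hypothetical sketch: interpolate a triangle's vertex UVs at a model space point.
//'a','b','c' are the triangle's vertex positions, 'uvA','uvB','uvC' their UVs,
//and 'p' the model space hit point; all names are illustrative assumptions.
std::vector<float> InterpolateUVs(
 const std::vector<float>& a, const std::vector<float>& b, const std::vector<float>& c,
 const std::vector<float>& uvA, const std::vector<float>& uvB, const std::vector<float>& uvC,
 const std::vector<float>& p)
{
 auto dot = [](const std::vector<float>& l, const std::vector<float>& r)
  { return l[0] * r[0] + l[1] * r[1] + l[2] * r[2]; };
 auto sub = [](const std::vector<float>& l, const std::vector<float>& r)
  { return std::vector<float>{ l[0] - r[0], l[1] - r[1], l[2] - r[2] }; };

 //solve for the barycentric weights (u,v,w) of p within the triangle
 std::vector<float> v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
 float d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
 float d20 = dot(v2, v0), d21 = dot(v2, v1);
 float denom = d00 * d11 - d01 * d01;
 float v = (d11 * d20 - d01 * d21) / denom;
 float w = (d00 * d21 - d01 * d20) / denom;
 float u = 1.0f - v - w;

 //blend the vertex UVs with the same weights
 return { u * uvA[0] + v * uvB[0] + w * uvC[0],
          u * uvA[1] + v * uvB[1] + w * uvC[1] };
}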
