Using adsata, our users can create custom eye-tracking studies and invite participants simply by sharing a link. A study needs a visual stimulus, or "Stim", which can be an image or a webpage. When a participant takes part in an eye-tracking study, a "Session" appears on the user dashboard almost instantly. Users can then collect and replay the sessions recorded for their studies, and select the ones they want to use as the basis for further data visualisations and metrics that help them understand the aggregate visual behavior of their participants.
A conventional eye tracker is typically dedicated camera hardware designed and optimized to capture eye movements in all lighting conditions. It can also compensate for variations in head position and for a range of physiological differences in the eye region. Adsata, by contrast, extracts eye-tracking information from a normal camera, i.e. one that only detects light in the visible spectrum. An ordinary webcam is therefore all that is needed.
Adsata uses a built-in or external camera affixed to a monitor or laptop to collect data on where a person is looking in a browser window. This method doesn't rely on specialized cameras or infrared beams; it works directly on the images produced by the webcam.
A machine-learning algorithm then calculates the position of the head and eyes in real time, and the estimated eye direction is correlated with the image on the screen. This processing happens entirely in the browser, so no camera feed is ever sent to a server, resulting in an extremely secure architecture. Although the depth and precision of the data obtained this way is somewhat limited, the method allows for large-scale studies with quick turnaround, which is perfect for quantitative research and well suited to early stages of the design process.
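To illustrate the last step, the estimated gaze direction has to be mapped onto pixel coordinates in the browser window. The helper below is a minimal sketch of such a mapping, assuming a normalized gaze estimate in the range [-1, 1] on each axis; the function name and the calibration-free linear mapping are our own simplification for illustration, not adsata's actual code.

```javascript
// Minimal sketch: map a normalized gaze estimate to browser pixel
// coordinates. A real pipeline would add per-user calibration; this
// linear mapping is a hypothetical simplification.
function gazeToScreen(nx, ny, viewportWidth, viewportHeight) {
  // Clamp to the expected [-1, 1] range to guard against model noise.
  const clamp = (v) => Math.max(-1, Math.min(1, v));
  const x = ((clamp(nx) + 1) / 2) * viewportWidth;
  const y = ((clamp(ny) + 1) / 2) * viewportHeight;
  return { x, y };
}

// Example: gaze straight ahead lands in the center of a 1920x1080 window.
// gazeToScreen(0, 0, 1920, 1080) → { x: 960, y: 540 }
```

In a real system, the mapping from gaze direction to screen position is usually refined with a short calibration phase (looking at known points), but the linear form above captures the basic idea.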
We rely on the following two open-source software packages to estimate gaze on a browser window in real time:
- TensorFlow.js FaceMesh ➡️ Machine Learning model for Facial Coding
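FaceMesh predicts a dense set of facial landmarks (468 3D points per face) directly in the browser. One simple quantity that can be derived from such landmarks is a horizontal gaze ratio: where the iris center sits between the two corners of the eye. The function below is only a sketch of that idea using plain coordinates; the helper and its input layout are illustrative assumptions, not part of the FaceMesh API or of adsata's pipeline.

```javascript
// Illustrative sketch: derive a coarse horizontal gaze ratio from eye
// landmarks such as those returned by a face-landmark model. 0 means
// the iris sits at the inner eye corner, 1 at the outer corner, and
// 0.5 means it is centered. The input layout here is hypothetical and
// does not reflect FaceMesh's actual landmark indexing.
function horizontalGazeRatio(innerCornerX, outerCornerX, irisCenterX) {
  const span = outerCornerX - innerCornerX;
  if (span === 0) return 0.5; // degenerate input: corners coincide
  return (irisCenterX - innerCornerX) / span;
}

// Example: an iris exactly midway between the corners gives 0.5.
// horizontalGazeRatio(100, 140, 120) → 0.5
```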