STORM-Net: Open-source toolbox for automatic subject-specific co-registration of probe placements for developmental and clinical populations
Yotam Erel and Sagi Jaffe-Dax, Tel-Aviv University
Duration: 90 min
Requirements: Laptop with a Windows or Linux operating system installed and an internet connection. The toolbox currently does not support macOS.
Synopsis: Measuring the exact placement of probes (e.g., electrodes, optodes) on a participant’s head is a notoriously difficult step in acquiring fNIRS data, and it is particularly difficult with clinical or developmental populations. Existing methods require the participant to remain still for a lengthy period, are laborious, and require extensive training. We will teach you an innovative video-based method for estimating probe positions relative to the participant’s head that is fast, motion-resilient, automatic, and freely available. This method substantially facilitates the use of spatial co-registration in developmental and clinical populations, where lengthy, motion-sensitive measurement methods routinely fail. In this course, we will demonstrate the video-based method’s reliability and validity compared with existing state-of-the-art methods. We will also demonstrate our automatic method estimating the positions of probes on an infant’s head without lengthy online procedures, a task considered unachievable until now. Participants will have an opportunity to install and use the new method on their own computers and will be given a detailed explanation of how to use it back in their labs.
Rationale: Knowing where the fNIRS probes were located with respect to the underlying cortical regions is a prerequisite for drawing spatial conclusions and exploiting the full benefit of fNIRS. However, existing co-registration methods are not suitable for many developmental and clinical populations and are difficult to implement in the field. These methods either require long acquisition times, are sensitive to participant movement, and are highly sensitive to the environment (3D digitizers), or require lengthy manual annotation and are prone to experimenter bias (photogrammetry methods). To address these problems, we developed an automatic video-based co-registration method called STORM-Net that is ideal for developmental, clinical, and field studies.
In this course, we teach participants how to use our novel video-based method for co-registration. The method, created for early developmental populations, is easy for novice experimenters to implement and is robust to participants’ head movements. It requires only ~5 seconds of video from a phone camera with the probes already mounted on the scalp. We will present both the validity and the reliability of our video-based method compared with a traditional 3D digitizer in a group of adult participants. Importantly, we will also demonstrate the feasibility of this approach with early developmental populations.
Learning objectives: The goal of this mini-course is to present our new automatic video-based co-registration method to fNIRS researchers and to give participants experience using it. After reviewing current co-registration methods and their limitations, we will show how our new method overcomes these limitations, including reviewing our tests of the method’s reliability across several conditions. We will then demonstrate how to use the method and give participants the opportunity to gain hands-on experience on their own computers.