
Hazen - 2026

Mentor: Lindsey Hazen | lindsey.hazen@nih.gov
Lab
Center for Interventional Oncology
CC

Project 1: Smartphone-Based LiDAR for Respiratory Gating and Body Motion Detection During Interventions

Background

Respiratory motion presents a major challenge in percutaneous interventional radiology procedures, where accurate targeting is essential. Breath-hold techniques are commonly used to mitigate tumor motion; however, achieving reproducible and consistent breath-holds remains difficult. Inaccurate motion control can negatively impact procedural precision and outcomes.

Smartphone-based Light Detection and Ranging (LiDAR) technology provides a radiation-free method for real-time three-dimensional (3D) surface tracking. By emitting laser pulses and measuring their reflections, LiDAR generates point clouds that can be used to reconstruct body surface geometry. Prior studies suggest that smartphone LiDAR can detect body motion including respiratory-induced surface deformation. This technology may offer a low-cost, non-ionizing alternative for motion monitoring during interventional procedures, which could eventually be incorporated into smartphone-based or robotic needle guidance.

Objectives

The goal of this project is to evaluate the accuracy and feasibility of smartphone-based LiDAR for respiratory gating. Specific objectives include:

  1. Assessing the accuracy of LiDAR-based surface motion tracking.
  2. Evaluating the feasibility of extracting respiratory motion signals from surface deformation in real time.
  3. Validating LiDAR-derived measurements against computed tomography (CT) data.

Secondary Objective

  1. Compare surface motion, as measured by LiDAR and CT, with internal organ movement.

Methods

This project will include:

  • Software Development: Development or optimization of a smartphone app to continuously capture and process 3D surface data from the smartphone's LiDAR sensor in real time.
  • Experimental Studies: Controlled experiments using motion phantoms to simulate deformation or respiratory motion, followed by feasibility studies in living animal models.
  • Data Analysis and Validation: Quantitative analysis of surface displacement and respiratory motion, with validation against CT imaging as the reference standard.
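The real-time processing step above can be illustrated with a minimal sketch. The following example is hypothetical (the project's actual app and pipeline are not specified here): it simulates a sequence of LiDAR depth frames containing a sinusoidal chest-wall excursion, extracts a respiratory trace by averaging depth over a fixed chest region of interest, and counts breath cycles with a peak detector. The frame rate, ROI, and motion amplitude are all assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical sketch: extract a respiratory trace from a sequence of
# LiDAR depth frames. All data below is simulated; frame rate, ROI,
# and ~5 mm chest excursion are illustrative assumptions.

FPS = 30                 # assumed LiDAR frame rate (Hz)
DURATION_S = 20          # seconds of monitoring
BREATH_RATE_HZ = 0.25    # ~15 breaths/min
N_FRAMES = FPS * DURATION_S

rng = np.random.default_rng(0)
t = np.arange(N_FRAMES) / FPS

# Simulate 64x64 depth frames (metres): static background plus a
# sinusoidal chest-wall excursion of ~5 mm and sensor noise.
frames = (0.8
          + 0.005 * np.sin(2 * np.pi * BREATH_RATE_HZ * t)[:, None, None]
          + rng.normal(0, 0.0005, (N_FRAMES, 64, 64)))

# Respiratory trace: mean depth within a fixed chest ROI, per frame.
roi = frames[:, 16:48, 16:48]
trace = roi.mean(axis=(1, 2))
trace -= trace.mean()     # remove the static depth offset

# Detect breath cycles as prominent peaks at least 2 s apart.
peaks, _ = find_peaks(trace, prominence=0.002, distance=2 * FPS)
breaths_per_min = len(peaks) / DURATION_S * 60
print(f"Detected {len(peaks)} breath cycles (~{breaths_per_min:.0f}/min)")
```

In a real app, `frames` would come from the device's depth API rather than simulation, and a trace like this could drive a simple gating threshold (for example, permitting needle advancement only near end-expiration).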

Expected Outcomes and Significance

This study is expected to demonstrate the feasibility of smartphone-based LiDAR for accurate body surface tracking and respiratory monitoring. Successful results could support the use of this widely available, radiation-free technology for respiratory gating and motion management in interventional radiology, potentially improving procedural accuracy while reducing complexity and cost.

Project 2: Design of an MRI-Compatible Phantom for MRI–US Fusion Imaging in Prostate Cancer Evaluation

Abstract

Prostate cancer is often detectable on magnetic resonance imaging (MRI) but not readily visible on ultrasound (US), despite US being the primary imaging modality used during prostate interventions. MRI–ultrasound (MRI–US) fusion imaging has therefore become increasingly important for prostate cancer interventions. However, standardized phantoms with well-defined multimodal properties remain limited.

We propose the development of an MRI-compatible prostate phantom that is MRI-visible but ultrasound-insensitive for validating MRI–US fusion workflows. The phantom will be fabricated from tissue-mimicking materials engineered to replicate prostate-relevant MRI relaxation properties (T1 and T2). Embedded lesion targets (cast using 3D-printed molds) with tunable size and contrast will enable quantitative assessment of image registration accuracy, targeting precision, and system reproducibility.
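One common way to quantify registration accuracy with embedded targets is to localize corresponding target centroids in both modalities, fit a rigid transform, and report the residual error. The sketch below is illustrative only (the project's actual evaluation protocol is not specified here): it uses synthetic lesion coordinates, estimates the rigid transform with the Kabsch (SVD) method, and reports the residual (fiducial) registration error; true target registration error would be evaluated at targets held out of the fit.

```python
import numpy as np

# Hypothetical sketch: fit a rigid MRI -> US transform from paired
# lesion-target centroids and report the residual registration error.
# All coordinates below are synthetic.

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(42)

# Synthetic lesion-target centroids in MRI space (mm).
mri_pts = rng.uniform(-25, 25, (6, 3))

# "Ultrasound" coordinates: a known rotation + translation, plus
# 0.5 mm localization noise standing in for segmentation uncertainty.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 2.0])
us_pts = mri_pts @ R_true.T + t_true + rng.normal(0, 0.5, mri_pts.shape)

R, t = rigid_register(mri_pts, us_pts)
residuals = np.linalg.norm(mri_pts @ R.T + t - us_pts, axis=1)
print(f"Mean residual error: {residuals.mean():.2f} mm "
      f"(max {residuals.max():.2f} mm)")
```

In practice the "US" coordinates would come from the fusion system's localization of the phantom's embedded targets, so the residuals directly benchmark the registration pipeline.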

MRI characterization will be performed to establish stable and reproducible T1/T2 values, and the phantom will be used in conjunction with pre-acquired prostate ultrasound images to assess MRI–US fusion performance. This platform will provide a standardized, reproducible environment for system calibration, operator training, and performance benchmarking of MRI–US fusion technologies. Additionally, it will serve as a non-animal preclinical model, reducing reliance on animal studies for device testing, workflow optimization, and algorithm development.
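The relaxometry step above can be sketched in a few lines. This example is illustrative, not the project's actual protocol: it simulates a multi-echo T2 decay, S(TE) = S0·exp(−TE/T2), for an assumed prostate-mimicking T2 of 80 ms, and recovers T2 with a log-linear least-squares fit; an analogous fit applies to T1 from inversion-recovery data.

```python
import numpy as np

# Illustrative sketch (assumed values, not measured data): estimate a
# phantom material's T2 from a simulated multi-echo decay,
#     S(TE) = S0 * exp(-TE / T2),
# using a log-linear least-squares fit.

T2_TRUE_MS = 80.0            # assumed prostate-mimicking T2 (ms)
S0 = 1000.0
te = np.arange(10, 161, 10)  # echo times: 10-160 ms in 10 ms steps

rng = np.random.default_rng(1)
signal = S0 * np.exp(-te / T2_TRUE_MS) * (1 + rng.normal(0, 0.01, te.size))

# Linearize: ln S = ln S0 - TE / T2, then fit a straight line.
slope, intercept = np.polyfit(te, np.log(signal), 1)
t2_est = -1.0 / slope
print(f"Estimated T2: {t2_est:.1f} ms (true value {T2_TRUE_MS} ms)")
```

Repeating such fits over time and across phantom batches is one straightforward way to demonstrate the stability and reproducibility the abstract calls for.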