
Oral Candidacy - Hao Zheng

Start: 8/6/2020 at 2:00PM
End: 8/6/2020 at 5:00PM
Location: Remote via Zoom
Attendees: Remote attendance. Meeting ID: 915 7713 6913, https://notredame.zoom.us/j/91577136913. Those joining remotely should mute their microphones until the Q&A session begins and disconnect when the public portion ends.

Hao Zheng

Oral Candidacy Exam

August 3, 2020, 2:00 pm, Remote via Zoom

Advisers: 

Dr. Danny Chen and Dr. Chaoli Wang

Committee Members:

Dr. Meng Jiang, Dr. Walter Scheirer, Dr. Yiyu Shi

Title:

"Efficient and Robust Deep Learning Based Approaches for Biomedical Image Segmentation and Related Problems"

Abstract:

Image segmentation is a fundamental problem in computer vision and has been studied for decades. It is also an essential preliminary step for quantitative biomedical image analysis and computer-aided diagnosis and studies. Recently, deep learning (DL) based methods have achieved great success on various image analysis tasks in terms of accuracy and generality. However, it is not straightforward to apply known semantic segmentation algorithms (usually motivated by, designed for, and evaluated with generic images) directly to biomedical images, due to different imaging techniques and special application scenarios (e.g., high-dimensional images, small amounts of annotated data, anisotropic images, domain knowledge from experts). In our preliminary research, we have developed new DL-based methods for segmenting 2D and 3D biomedical images from two aspects: reducing annotation effort and improving model efficacy.

First, volumetric images (such as MRI and CT) are very common in the biomedical field, and making use of the abundant 3D information in such images is critical for delineating detailed structures. Thus, we develop a new DL-based method that utilizes anisotropic 3D convolutional kernels to improve segmentation performance. Moreover, having analyzed the advantages and disadvantages of both 2D and 3D DL models, we propose a new ensemble learning framework that unifies the merits of both models and achieves better performance. Second, annotating biomedical images usually incurs a great deal of effort and cost, because (1) only biomedical experts can annotate effectively, (2) there are too many instances in the images (e.g., cells) to annotate, and (3) it is hard to label high-dimensional images. We investigate properties of biomedical images and propose a new method that reduces annotation effort by making judicious suggestions on the most effective annotation areas/samples. Further, we find that, in 3D images, sparse annotation leads to considerable performance degradation; we propose a new method that combines representative annotation and ensemble learning to bridge the performance gap with respect to full annotation.
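
As a toy illustration of the anisotropic-kernel idea mentioned above (a minimal sketch, not the model to be presented in the exam; the layer sizes and channel counts are assumed purely for demonstration):

import torch
import torch.nn as nn

class AnisotropicConvBlock(nn.Module):
    # Convolves within each slice first (1x3x3), then mixes information
    # across neighboring slices (3x1x1), preserving the spatial size.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.in_plane = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.cross_slice = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.norm = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, depth, height, width)
        return self.act(self.norm(self.cross_slice(self.in_plane(x))))

# Example: one 3D patch with 64 slices of 128x128 voxels.
patch = torch.randn(1, 1, 64, 128, 128)
print(AnisotropicConvBlock(1, 16)(patch).shape)  # torch.Size([1, 16, 64, 128, 128])

Splitting the kernel this way treats the in-slice and cross-slice directions differently, which is one plausible way to handle images whose resolution along the slice axis differs from the in-plane resolution.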

In practice, real-world application scenarios are usually more complicated and do not necessarily fit the fully supervised setting. For example, only extremely limited amounts of data may be annotated, a huge amount of unlabeled data may be available, and image modalities can be highly diverse when images are acquired under different staining conditions, in different facilities, or with different protocols and equipment.

For future work, we plan to continue the research on more robust DL algorithms and their applications in real-world scenarios. (1) We plan to evaluate and make use of automatically generated labels (i.e., pseudo labels) for training DL models in a semi-supervised setting. (2) We will seek to utilize self-supervised and semi-supervised methods to reduce the annotation effort needed to train DL models while maintaining segmentation performance. (3) We plan to utilize meta-learning to improve the generalizability and transferability of DL models so that they can be deployed to handle diverse data efficiently and effectively. (4) We plan to explore automatic methods to assess the quality of segmentation models, make suggestions for experts, and integrate domain knowledge to refine the segmentation results.
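
To make direction (1) concrete, the following is a minimal sketch of confidence-based pseudo-labeling; the model, data loader, and 0.9 threshold are illustrative assumptions rather than the method to be developed:

import torch

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_loader, threshold=0.9):
    # Run the current model on unlabeled images and keep only per-pixel
    # predictions whose confidence exceeds the threshold; low-confidence
    # pixels are marked -1 (an ignore index) so they do not contribute
    # to the loss when the model is retrained on the pseudo labels.
    model.eval()
    pseudo_pairs = []
    for images in unlabeled_loader:
        probs = torch.softmax(model(images), dim=1)   # (batch, classes, H, W)
        confidence, labels = probs.max(dim=1)         # per-pixel class and confidence
        labels[confidence <= threshold] = -1
        pseudo_pairs.append((images, labels))
    return pseudo_pairs

The confidence threshold trades label coverage against label noise: a higher threshold keeps fewer but cleaner pseudo labels, which is the central tension any such semi-supervised scheme must balance.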