Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Medical Image Analysis, 2021
Abstract (short): In this paper, a fully automatic framework is proposed that can: 1) detect and classify six different artifacts, 2) segment artifact instances that have indefinable shapes, 3) provide a quality score for each frame, and 4) restore partially corrupted frames. To detect and classify different artifacts, the proposed framework exploits a fast, multi-scale, single-stage convolutional neural network detector. In addition, we use an encoder-decoder model for pixel-wise segmentation of irregularly shaped artifacts. A quality score is introduced to assess video frame quality and to predict image restoration success. Finally, generative adversarial networks with carefully chosen regularization and training strategies for the discriminator-generator networks are used to restore corrupted frames.
Recommended citation: Sharib Ali, Felix Zhou, Adam Bailey, Barbara Braden, James E. East, Xin Lu, Jens Rittscher. (2021). "A deep learning framework for quality assessment and restoration in video endoscopy." Medical Image Analysis, vol. 68, pg. 101900. https://doi.org/10.1016/j.media.2020.101900
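The framework described above couples artifact detection with a per-frame quality score that decides whether a frame is kept, restored, or discarded. As a rough illustration of that idea, here is a minimal Python sketch of a detection-driven quality score gating a restoration step; the class names, severity weights, thresholds, and the `restore_frame` hook are hypothetical placeholders, not the formulation used in the paper.

```python
import numpy as np

# Hypothetical per-class severity weights (illustrative only, not the paper's values).
ARTIFACT_WEIGHTS = {
    "saturation": 1.0, "specularity": 0.6, "blur": 0.8,
    "bubbles": 0.4, "contrast": 0.7, "instrument": 0.5,
}

def quality_score(detections, frame_shape):
    """Aggregate detected artifact boxes into a [0, 1] frame quality score.

    detections:  list of (class_name, confidence, (x1, y1, x2, y2)) tuples
    frame_shape: (height, width) of the frame
    """
    h, w = frame_shape
    frame_area = float(h * w)
    penalty = 0.0
    for cls, conf, (x1, y1, x2, y2) in detections:
        box_area = max(0, x2 - x1) * max(0, y2 - y1)
        # Weight each artifact by class severity, detector confidence,
        # and the fraction of the frame it covers.
        penalty += ARTIFACT_WEIGHTS.get(cls, 0.5) * conf * (box_area / frame_area)
    return float(np.clip(1.0 - penalty, 0.0, 1.0))

def process_frame(frame, detections, restore_frame, keep_thr=0.9, restore_thr=0.5):
    """Keep clean frames, restore partially corrupted ones, drop the rest."""
    score = quality_score(detections, frame.shape[:2])
    if score >= keep_thr:
        return frame, "kept"
    if score >= restore_thr:
        return restore_frame(frame, detections), "restored"
    return None, "discarded"
```

In the paper the restoration step is performed by a generative adversarial network; here `restore_frame` is just a caller-supplied function so the gating logic stays self-contained.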
Published in Computers in Biology and Medicine, 2022
Abstract: Widely used traditional supervised deep learning methods require a large number of training samples but often fail to generalize to unseen datasets. As a result, the broader application of any trained model in clinical practice is quite limited for medical imaging. Training separate models for each unique lesion category or patient population would require sufficiently large curated datasets, which is not practical in a real-world clinical set-up. Few-shot learning approaches can not only minimize the need for an enormous number of reliable ground-truth labels, which are labour-intensive and expensive to obtain, but can also be used to model datasets coming from a new population. To this end, we propose to exploit an optimization-based implicit model-agnostic meta-learning (iMAML) algorithm under few-shot settings for medical image segmentation. Our approach can leverage the learned weights from diverse but small training samples to perform analysis on unseen datasets with high accuracy. We show that, unlike classical few-shot learning approaches, our method improves generalization capability. To our knowledge, this is the first work that exploits iMAML for medical image segmentation and explores the strength of the model in scenarios such as meta-training on unique and mixed instances of lesion datasets. Our quantitative results on publicly available skin and polyp datasets show that the proposed method outperforms the naive supervised baseline model and two recent few-shot segmentation approaches by large margins. In addition, our iMAML approach shows an improvement of 2%–4% in Dice score compared to its counterpart MAML for most experiments.
Recommended citation: Rabindra Khadka, Debesh Jha, Steven Hicks, Vajira Thambawita, Michael A. Riegler, Sharib Ali, Pål Halvorsen (2022). "Meta-learning with implicit gradients in a few-shot setting for medical image segmentation." Computers in Biology and Medicine, vol. 143, pg. 105227. https://doi.org/10.1016/j.compbiomed.2022.105227
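The core mechanism in iMAML is an inner loop that adapts the meta-learned weights to a small support set while a proximal term keeps the adapted weights close to the meta-initialization; the meta-gradient is then obtained implicitly rather than by backpropagating through the inner optimization. The sketch below shows only that proximal inner-loop adaptation for a toy segmentation network; the architecture, regularization strength, step count, and learning rate are illustrative assumptions, and the conjugate-gradient computation of the implicit meta-gradient is omitted. This is not the configuration used in the paper.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy binary segmentation network; the paper uses a larger encoder-decoder.
class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.conv2(F.relu(self.conv1(x)))  # per-pixel logits

def proximal_inner_adapt(meta_model, support_x, support_y,
                         steps=5, lr=1e-2, lam=1.0):
    """iMAML-style inner loop: minimize task loss + (lam/2) * ||phi - theta||^2."""
    model = copy.deepcopy(meta_model)                  # phi initialized at theta
    meta_params = [p.detach() for p in meta_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        task_loss = F.binary_cross_entropy_with_logits(model(support_x), support_y)
        prox = sum(((p - p0) ** 2).sum()
                   for p, p0 in zip(model.parameters(), meta_params))
        (task_loss + 0.5 * lam * prox).backward()
        opt.step()
    return model

# Example usage on a random 5-shot support set.
meta_model = TinySegNet()
xs = torch.randn(5, 3, 64, 64)
ys = torch.randint(0, 2, (5, 1, 64, 64)).float()
adapted = proximal_inner_adapt(meta_model, xs, ys)
```

The proximal term is what distinguishes this inner loop from plain MAML fine-tuning: it keeps the task-adapted weights anchored to the meta-initialization, which is what makes the implicit meta-gradient well defined.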
Published in IEEE Transactions on Neural Networks and Learning Systems, 2022
Abstract: The increasing availability of large clinical and experimental datasets has enabled many important contributions in the area of biomedical image analysis. Image segmentation, which is crucial for any quantitative analysis, has especially attracted attention. Recent hardware advancement has led to the success of deep learning approaches. However, although deep learning models are trained on large datasets, existing methods do not use the information from different learning epochs effectively. In this work, we leverage the information of each training epoch to prune the prediction maps of the subsequent epochs. We propose a novel architecture called feedback attention network (FANet) that unifies the previous epoch mask with the feature map of the current training epoch. The previous epoch mask is then used to provide hard attention to the learned feature maps at different convolutional layers. The network also allows rectifying the predictions in an iterative fashion during test time. We show that our proposed feedback attention model provides a substantial improvement on most segmentation metrics tested on seven publicly available biomedical imaging datasets, demonstrating the effectiveness of FANet. The source code is available at https://github.com/nikhilroxtomar/FANet.
Recommended citation: Tomar, N.K., Jha, D., Riegler, M., Johansen, H.D., Johansen, D., Rittscher, J., Halvorsen, P., & Ali, S. (2022). "FANet: A Feedback Attention Network for Improved Biomedical Image Segmentation." IEEE Transactions on Neural Networks and Learning Systems. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9741842
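The central idea in FANet is to feed the mask predicted in a previous epoch (or test-time iteration) back into the network as hard attention on intermediate feature maps. The snippet below is a minimal sketch of that feedback step, assuming a binary mask and a single feature stage; the actual FANet applies this at multiple convolutional layers within a larger architecture, so treat this as an illustration of the mechanism and refer to the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def feedback_hard_attention(features, prev_mask):
    """Gate feature maps with the mask predicted in the previous epoch.

    features:  (B, C, H, W) feature maps from the current forward pass
    prev_mask: (B, 1, h, w) binary mask stored from the previous epoch/iteration
    """
    # Resize the stored mask to the spatial size of this feature map.
    mask = F.interpolate(prev_mask.float(), size=features.shape[-2:], mode="nearest")
    # Hard attention with a residual path: features inside the previously
    # predicted region are amplified, while the rest pass through unchanged,
    # so an early (poor) mask cannot erase all evidence.
    return features * mask + features

# Example: gate a random feature map with a previous binary prediction.
feats = torch.randn(2, 16, 32, 32)
prev = (torch.rand(2, 1, 128, 128) > 0.5).float()
gated = feedback_hard_attention(feats, prev)
```

At test time the same feedback loop can be run for several iterations on a single image, letting the network refine its own prediction, which is the iterative rectification mentioned in the abstract.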
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.