A deep fusion-based vision transformer for breast cancer classification
The Institution of Engineering and Technology | John Wiley & Sons
Article
Peer-reviewed
Persistent link
Description
© 2024 The Author(s). Healthcare Technology Letters published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Breast cancer is one of the most common causes of death in women in the modern world. Detecting cancerous tissue in histopathological images relies on complex features related to tissue structure and staining properties. Convolutional neural network (CNN) models such as ResNet50, Inception-V1, and VGG-16, while useful in many applications, cannot capture the patterns of cell layers and staining properties. Most previous approaches, such as stain normalization and instance-based vision transformers, either miss important features or fail to process the whole image effectively. Therefore, a deep fusion-based vision transformer model (DFViT) is proposed that combines CNNs and transformers for better feature extraction. DFViT captures local and global patterns more effectively by fusing RGB and stain-normalized images. Trained and tested on several datasets, such as BreakHis, breast cancer histology (BACH), and UCSC cancer genomics (UC), the model achieves outstanding accuracy, F1 score, precision, and recall, setting a new milestone in histopathological image analysis for diagnosing breast cancer.
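The abstract describes a dual-stream design that fuses features from an RGB image and its stain-normalized counterpart before classification. The full DFViT architecture is not given here, so the following is only a minimal NumPy sketch of the fusion idea: single linear projections stand in for the CNN and transformer branches, the input patches and all weights are hypothetical placeholders, and the two-class head is an assumption (benign vs. malignant).

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, weights):
    # Placeholder feature extractor: one linear projection with a
    # nonlinearity, standing in for a learned CNN/transformer branch.
    return np.tanh(image.reshape(-1) @ weights)

# Hypothetical 8x8 single-channel patches for the RGB and
# stain-normalized streams (real inputs would be H x W x 3 images).
rgb_patch = rng.random((8, 8))
stain_norm_patch = rng.random((8, 8))

# Independent branch weights (placeholders for learned parameters).
w_rgb = rng.standard_normal((64, 16))
w_stain = rng.standard_normal((64, 16))

# "Deep fusion" sketched as concatenation of the two branches'
# feature vectors before the classification head.
fused = np.concatenate([
    extract_features(rgb_patch, w_rgb),
    extract_features(stain_norm_patch, w_stain),
])

# Placeholder two-class head (e.g. benign vs. malignant).
w_head = rng.standard_normal((32, 2))
logits = fused @ w_head
pred = int(np.argmax(logits))
print(fused.shape, pred)
```

The point of the sketch is only the data flow: each stream is encoded separately so stain-invariant and raw-color cues both survive, and the classifier sees their concatenation rather than either stream alone.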
Parent publication
ISBN
ISSN
2053-3713
Subject area
Journal
Healthcare Technology Letters
OKM publication type
A1 Original article in a scientific journal
