Advances in Applied Science Research Open Access

  • ISSN: 0976-8610

Commentary - (2022) Volume 13, Issue 12

Conduct Learning in Vision-Transformer-Based Mammogram
Jing Wang*
 
Department of Oncology, Beihang University, China
 
*Correspondence: Jing Wang, Department of Oncology, Beihang University, China, Email:

Received: 30-Nov-2022, Manuscript No. AASRFC-23-15465; Editor assigned: 02-Dec-2022, Pre QC No. AASRFC-23-15465(PQ); Reviewed: 16-Dec-2022, QC No. AASRFC-23-15465; Revised: 21-Dec-2022, Manuscript No. AASRFC-23-15465(R); Published: 28-Dec-2022, DOI: 10.36648/0976-8610.13.12.102

Description

Breast mass identification is an important step in mammogram-based early breast cancer detection. However, determining whether a breast lump is malignant or benign can be difficult in the early stages. Convolutional Neural Networks (CNNs) have been applied to this problem and have produced useful advances. CNNs, however, focus only on a local section of the mammogram because of repeated convolutions, neglecting the rest of the image and incurring considerable computational cost. Vision transformers have recently been developed as a way of bypassing these constraints of CNNs, achieving improved or comparable performance in natural image classification.

However, the application of this approach to medical imaging has not been thoroughly investigated. In this study, we developed a transfer-learning system based on vision transformers to classify breast mass mammograms. With an estimated area under the receiver operating characteristic curve of 1.0, the new model outperformed CNN-based transfer-learning models and vision transformer models trained from scratch. As a result, the approach can be employed in a clinical environment to improve early identification of breast cancer.

Breast cancer is the most frequent cancer among women in the United States, accounting for 30% (one in three) of all new cases each year. In recent years, incidence rates have climbed by about 0.5% annually; nevertheless, breast cancer deaths decreased by 43% from 1989 to 2020. The decline in death rates is attributed to better treatment options and earlier detection through screening and awareness efforts.

Mammography (MG) is critical for early detection of breast cancer. MG can detect small cancers before they can be felt as masses. False diagnoses may nevertheless occur, owing to the huge volume of radiological examinations and the complexity of MGs. Computer-Aided Detection (CAD), which uses image processing and pattern recognition, was developed to give radiologists an objective viewpoint.

In this study, we developed a deep-learning technique for mammographic breast cancer detection via transfer learning based on vision transformers. This study makes two significant contributions to the literature. The first is the image-data-balancing module, which is used to resolve the class imbalance issue in the mammography dataset. This study's dataset includes samples of both benign and cancerous tissue, with differing numbers of samples per class. In other words, there is a class imbalance, which could bias model learning. As a solution to this problem, we propose augmentation-based class balancing; a minimal sketch follows. Second, we created a vision-transformer-based transfer-learning approach for mammography classification.
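The sketch below illustrates one way augmentation-based class balancing could be implemented, assuming per-class lists of image file paths and light, label-preserving torchvision augmentations; the specific transforms, paths, and the balance_classes helper are illustrative assumptions, not the authors' exact pipeline.

```python
# A minimal sketch of augmentation-based class balancing (illustrative, not the authors' code).
import random
from PIL import Image
from torchvision import transforms

# Light geometric/intensity augmentations that preserve diagnostic content.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
])

def balance_classes(majority_paths, minority_paths, out_dir):
    """Oversample the minority class with augmented copies until both
    classes contain the same number of images."""
    deficit = len(majority_paths) - len(minority_paths)
    for i in range(deficit):
        src = random.choice(minority_paths)        # pick a minority-class image
        img = Image.open(src).convert("RGB")
        aug = augment(img)                         # apply a random, label-preserving augmentation
        aug.save(f"{out_dir}/aug_{i:05d}.png")     # write the synthetic sample
```

The augmented copies are added only to the training split, so the balanced classes do not leak into validation or test data.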

We introduce a vision-transformer-based transfer-learning method for breast mammogram classification that tackles the drawbacks of CNN-based transfer-learning approaches by exploiting the self-attention mechanism of transformers. A comprehensive evaluation was performed across several vision transformer types and variants. We found that vision-transformer-based transfer learning is effective for breast mammogram classification, outperforming convolutional-neural-network-based transfer learning at a lower computational cost. However, this result was obtained by training on a single dataset from a single source, so further studies using a range of datasets from multiple sources are needed to generalise these findings.
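As an illustration of this kind of transfer-learning setup, the sketch below fine-tunes an ImageNet-pretrained ViT-B/16 from torchvision for benign-versus-malignant classification; the choice of backbone, optimizer, learning rate, and dataloader contract are assumptions for demonstration, not the authors' reported configuration.

```python
# A minimal ViT transfer-learning sketch (assumed setup, not the authors' exact configuration).
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from an ImageNet-pretrained ViT and replace its classification head.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # benign vs malignant
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

def train_one_epoch(loader):
    """One pass over a DataLoader yielding (image, label) batches of
    224x224 RGB mammogram crops normalized with ImageNet statistics."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)   # cross-entropy over the two classes
        loss.backward()
        optimizer.step()
```

Keeping the pretrained backbone and retraining only a small classification head, or fine-tuning the whole network at a low learning rate, are the usual options; which works better depends on the size of the mammogram dataset.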

Acknowledgement

None.

Conflict of Interest

The author declares that there is no conflict of interest in publishing this article. The article has been read and approved by all named authors.

Citation: Wang J (2022) Conduct Learning in Vision-Transformer-Based Mammogram. Adv Appl Sci Res. 13:102.

Copyright: © 2022 Wang J. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.