Hybrid CNN-Transformer Model for Severity Classification of Multi-organ Damage in Long COVID Patients

Authors

Keywords:

COVID-19, Chest X-rays, CNN, Vision Transformer, Severity Classification, Deep Learning.

Abstract

The global spread of COVID-19 has necessitated rapid and accurate diagnostic procedures to support clinical decision-making, particularly in resource-limited environments. In this work, a hybrid deep learning model combining a Convolutional Neural Network (CNN) and a Transformer architecture is proposed to classify chest X-ray images from the COVIDx CXR-3 dataset into three severity levels: Mild, Moderate, and Severe. The methodology incorporates data preprocessing steps such as resizing, normalization, augmentation, and SimpleITK-based organ segmentation. A DenseNet121-based CNN extracts local features, while a Vision Transformer captures global dependencies; the features from both branches are fused and fed to a classification head that generates the final prediction (a sketch of this fusion is given below). The model was trained in PyTorch for 50 epochs using the Adam optimizer with a learning rate of 0.0001 and a batch size of 32 (see the training sketch below). Accuracy, Precision, Recall, F1-Score, and a Confusion Matrix were computed to evaluate performance. Results show that the hybrid CNN-Transformer model outperforms the CNN-only baseline, which achieved 88% accuracy. This integration demonstrates improved severity classification capability and strong potential to help clinicians prioritize care, optimize treatment plans, and allocate resources, thereby improving outcomes in COVID-19 management.
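To make the feature fusion described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' code: a DenseNet121 branch supplies pooled local features, a Vision Transformer branch supplies a global representation, and the two are concatenated before a classification head. The specific ViT-B/16 variant, the 512-unit hidden layer, and the dropout rate are assumptions, since the abstract does not state them.

import torch
import torch.nn as nn
from torchvision import models

class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # CNN branch: DenseNet121 feature extractor (local patterns).
        densenet = models.densenet121(weights=None)
        self.cnn = densenet.features            # (B, 1024, 7, 7) for 224x224 input
        self.pool = nn.AdaptiveAvgPool2d(1)

        # Transformer branch: ViT-B/16 (assumed variant) with its classification
        # head removed so it returns the 768-dim [CLS] representation.
        vit = models.vit_b_16(weights=None)
        vit.heads = nn.Identity()
        self.vit = vit

        # Fusion + classification head over the concatenated features.
        self.classifier = nn.Sequential(
            nn.Linear(1024 + 768, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.pool(self.cnn(x)).flatten(1)    # (B, 1024)
        global_feat = self.vit(x)                         # (B, 768)
        fused = torch.cat([local_feat, global_feat], dim=1)
        return self.classifier(fused)

model = HybridCNNTransformer(num_classes=3)
logits = model(torch.randn(2, 3, 224, 224))   # shape (2, 3): Mild / Moderate / Severe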
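The reported training setup and evaluation metrics can likewise be sketched as below. This is a sketch under stated assumptions rather than the authors' pipeline: the random tensors stand in for the COVIDx CXR-3 data loaders, HybridCNNTransformer refers to the sketch above, scikit-learn is used for the metrics, and macro averaging of Precision/Recall/F1 is an assumption.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             confusion_matrix)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = HybridCNNTransformer(num_classes=3).to(device)

# Placeholder tensors standing in for preprocessed chest X-ray batches.
images, labels = torch.randn(64, 3, 224, 224), torch.randint(0, 3, (64,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)
val_loader = DataLoader(TensorDataset(images, labels), batch_size=32)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # lr=0.0001 as reported

for epoch in range(50):                                     # 50 epochs as reported
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluation: Accuracy, Precision, Recall, F1-Score, and Confusion Matrix.
model.eval()
preds, targets = [], []
with torch.no_grad():
    for x, y in val_loader:
        preds.extend(model(x.to(device)).argmax(dim=1).cpu().tolist())
        targets.extend(y.tolist())

acc = accuracy_score(targets, preds)
prec, rec, f1, _ = precision_recall_fscore_support(targets, preds, average="macro")
cm = confusion_matrix(targets, preds)
print(f"Accuracy={acc:.3f} Precision={prec:.3f} Recall={rec:.3f} F1={f1:.3f}\n{cm}")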

DOI

Published

2025-07-07

Issue

Section

Scientific Articles for the Regular Issue

How to Cite

[1]
“Hybrid CNN-Transformer Model for Severity Classification of Multi-organ Damage in Long COVID Patients”, LAJC, vol. 12, no. 2, pp. 26–39, Jul. 2025, Accessed: Oct. 02, 2025. [Online]. Available: https://lajc.epn.edu.ec/index.php/LAJC/article/view/439