The COVID-19 infection that caused the pandemic posed a major challenge worldwide. Because the ensemble model takes into account the features and losses provided by each constituent model, the resulting loss is lower. In this way, the ensemble model achieved an accuracy of 0.9867 on the UCSD COVID-CT dataset, whereas the best individual model reached 0.945.
Introduction
Motivation
Problem statement
Proposed approach
Convolutional neural network
Among the related studies, one CNN was trained on an extensive dataset to refine its classification of brain tumor grades [15].
Ensemble Machine Learning Algorithms
The software demonstrated its competitiveness and reliability in identifying and diagnosing three types of brain tumors. In "Diagnosis of nodular formations in the lungs from computed tomography images based on ensemble training", a classification method using computed tomography was proposed [23]. To evaluate the performance of the proposed system, the authors used 60 computed tomography (CT) scans compiled by the Lung Image Database Consortium (LIDC), and all of these techniques showed an improvement in diagnostic performance.
The authors also proposed a weighted resampling method for transfer learning, called TrResampling, which builds on the TrAdaBoost algorithm. The algorithm adjusts the weights of the source data and the target data.
Datasets
These datasets encouraged us to apply our proposed technique to image classification and open new opportunities in the fight against this infectious disease. The SARS-CoV-2 dataset contains 1252 CT scans positive for SARS-CoV-2 infection (COVID-19) and 1230 CT scans of patients not infected with SARS-CoV-2, for a total of 2482 CT scans (Figure 3-2). The data were collected from real patients in Sao Paulo hospitals, and the goal of the collection is to promote artificial intelligence research and development for determining whether a patient has SARS-CoV-2 from his or her computed tomography scans [26].
Images from the Sao Paulo, Brazil dataset are shown in Figure 3-2: COVID-19 cases appear in the first row, while non-COVID-19 cases appear in the second row. The second dataset, UCSD COVID-CT, has the fewest samples of the three. The repository also contains meta-information, including patient ID, patient information, gender, image caption, and age.
All images in this dataset were collected from COVID-19-related papers in medRxiv, bioRxiv, NEJM, JAMA, Lancet, and other sources [28]. The third dataset, COVIDx-CT, contains volumetric chest CT scans and a comparison CT image set derived from imaging data collected by the China National Center for Bioinformation, with 104,009 images from 1,489 patient cases [29]. The first variant consists of cases with confirmed diagnoses, while the second variant includes the entire first variant plus cases presumed to be correctly diagnosed but only weakly verified [30].
As the pandemic progressed, the datasets were updated with new CT images of patients.
Ensemble learning algorithm
One of the methods we consider in this study is stacking the features extracted by multiple neural networks. First, we took several pretrained convolutional neural network models and fine-tuned their final convolution layers on the three datasets described above. Another important part of fine-tuning is freezing the dense layers so that only the convolution blocks are tuned.
This was done using built-in Keras functionality, namely the trainable attribute of keras.layers, so that only the chosen part of the pretrained model was tuned on the CT scan datasets. If we fine-tuned the models with the dense layers unfrozen, the large gradient updates would wipe out the pretrained features of the architectures we use.
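A minimal sketch of this freezing step, assuming a VGG16 backbone from keras.applications; the layer-name prefix, input size, and classification head are illustrative, not the authors' exact configuration:

```python
from tensorflow import keras

# Load a pretrained backbone without its classification head.
backbone = keras.applications.VGG16(include_top=False, weights="imagenet",
                                    input_shape=(224, 224, 3))

# Freeze everything except the final convolution block, so fine-tuning
# on the CT datasets cannot wash out the earlier pretrained features.
backbone.trainable = True
for layer in backbone.layers:
    if not layer.name.startswith("block5"):  # "block5" is VGG16's last conv block
        layer.trainable = False

# Attach a small classification head; its weights train from scratch.
model = keras.Sequential([
    backbone,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```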
To classify the features extracted by the models, we used efficient linear classifiers: Logistic Regression and a Support Vector Classifier, namely LinearSVC. Support vector machines are capable of both classification and regression, but in this task we used only the classifier. There are also several meta-algorithms based on boosting, such as AdaBoost and gradient boosting.
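A sketch of the stacking step under stated assumptions: cnn_models is a list of fine-tuned Keras networks that end at a pooled feature layer, and x_train/y_train/x_test/y_test are the prepared CT splits; none of these names come from the paper itself.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

# Stack (concatenate) the feature vectors produced by each CNN; each
# model's predict() here returns its penultimate-layer features.
train_feats = np.hstack([m.predict(x_train) for m in cnn_models])
test_feats = np.hstack([m.predict(x_test) for m in cnn_models])

# Fit a linear meta-classifier on the stacked features.
meta = LinearSVC(C=1.0, max_iter=10000).fit(train_feats, y_train)
print("stacked accuracy:", meta.score(test_feats, y_test))

# Logistic Regression is a drop-in alternative meta-learner.
meta_lr = LogisticRegression(max_iter=1000).fit(train_feats, y_train)
```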
Model averaging is a form of ensemble learning in which each individual neural network contributes equally to the final prediction.
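A minimal sketch of model averaging, assuming each fine-tuned network ends in a sigmoid output; cnn_models and x_test are the illustrative names from above:

```python
import numpy as np

def average_predict(models, images):
    # Each model outputs a class probability; the ensemble prediction is
    # their unweighted mean, so every member has the same influence.
    probs = np.mean([m.predict(images) for m in models], axis=0)
    return (probs > 0.5).astype(int)  # binary COVID / non-COVID decision

y_pred = average_predict(cnn_models, x_test)
```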
Models
VGG16 is one of the most popular convolutional neural networks; it is simple, practical, and performs with near state-of-the-art efficiency [13]. The architecture shows one of the best performances on the ImageNet dataset, which comprises 1000 classes across some 14 million images. VGG16 achieved a top-5 accuracy of 92.7% on this dataset.
That is, the architecture of the model is likewise built by stacking residual blocks. As the name suggests, the ResNet50 model comprises 50 neural network layers and provides one of the highest accuracies among modern architectures [6]. One of the main goals of the Inception architectures is to address two issues inherent to convolutional neural network design.
The DenseNet architecture uses several dense blocks, each combining a number of convolution layers connected in a feed-forward manner. It is one of the most powerful CNN models compared to ResNet and other popular architectures. EfficientNet-B3 has 12 million parameters and is one of the smallest models used in our training. However, its MBConv layers and structural simplicity yield one of the best accuracy results with little training time.
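A sketch of how the backbones discussed above can be instantiated from keras.applications; the input size and ImageNet weights are assumptions for illustration, not the paper's exact settings:

```python
from tensorflow import keras

backbones = {
    "VGG16": keras.applications.VGG16,
    "ResNet50": keras.applications.ResNet50,
    "DenseNet201": keras.applications.DenseNet201,
    "EfficientNetB3": keras.applications.EfficientNetB3,
}
models = {name: ctor(include_top=False, weights="imagenet",
                     input_shape=(224, 224, 3))
          for name, ctor in backbones.items()}
for name, m in models.items():
    print(name, f"{m.count_params() / 1e6:.1f}M parameters")
```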
Training
EfficientNet models follow the same general structure as other popular image-recognition models: blocks of convolutional layers followed by fully connected layers. This implies that we must carefully consider the choice of the meta-learner block and the parameters of its layers [39]. Furthermore, increasing the width of the layers provides almost the same efficiency gain as increasing the depth of a series of layers [6].
One can see that the input parameters for training were the same for all models, including batch size and number of epochs. The total time complexity of a CNN architecture depends on several factors, including the depth and width of the network. We can also see that most of the other CNN architectures trained at almost the same speed, with time complexity ranging from 808.5 to 1050 seconds.
This implies that the architecture was either more complex or used the largest number of parameters. Higher regularization penalties were applied to the first layer of the meta-learner block, and dropout rates ranging from 0.3 to 0.5 were applied to the dropout layers. To do this, we used built-in Keras functionality, such as the trainable attribute of keras.layers.
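A hedged sketch of such a meta-learner block; the layer widths and the exact L2 penalty values are illustrative assumptions, while the dropout rates follow the 0.3 to 0.5 range stated above:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_meta_learner(n_features):
    return keras.Sequential([
        keras.Input(shape=(n_features,)),
        # Stronger L2 penalty on the first layer of the block.
        layers.Dense(256, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-2)),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-3)),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),
    ])
```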
Since the CT datasets we used contain many images with different spatial sizes, they need to be resized beforehand to ensure their compatibility with the input size of the model.
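A minimal preprocessing step for the varying spatial sizes; the 224x224 target and the tf.data usage are assumptions for illustration:

```python
import tensorflow as tf

def preprocess(image, label):
    image = tf.image.resize(image, (224, 224))   # unify spatial size
    image = tf.cast(image, tf.float32) / 255.0   # scale pixel values
    return image, label

# Applied to a tf.data pipeline of (image, label) pairs:
# dataset = dataset.map(preprocess).batch(32)
```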
Evaluation
Experiments
- Experimental Setup
- Software
- Evaluation Metrics
- Comparison with Other Models
- Comparison among other ensemble algorithms
- Results
- Performance comparison of the proposed CNN models
- Performance of the Ensemble Learning algorithm
- t-SNE (t-distributed stochastic neighbor embedding)
- Heatmap visualization
This indicator helps us evaluate the performance of a convolutional neural network model in terms of binary classification. Accuracy is the proportion of correct answers produced by our models. VGGNet was developed out of the need to reduce the number of parameters in the convolutional layers and to optimize the training process.
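In terms of confusion-matrix counts (true/false positives and negatives), accuracy is the standard ratio:

```latex
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
```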
A summary of the results of the EfficientNet-B3 model on the test dataset is presented in confusion matrix 4-1 below. The results in the table show that DenseNet201 and VGG19 are more universally applicable and deliver better performance. To evaluate the Ensemble Learning algorithm, we used seven of our convolutional neural network architectures and the three CT datasets.
All of the above results indicate that the ensemble model outperforms each individual convolutional neural network architecture. t-SNE relies on minimizing the discrepancy between the low- and high-dimensional sets of instances. Although the projection divided the data into two groups, it confounded some of the samples and did not produce a clean separation.
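A sketch of such a t-SNE projection with scikit-learn; features and labels are assumed to be the CNN feature vectors and class ids, and the perplexity value is illustrative:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Project high-dimensional CNN features down to 2-D for inspection.
embedded = TSNE(n_components=2, perplexity=30,
                random_state=0).fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="coolwarm", s=8)
plt.title("t-SNE of CNN features (COVID-19 vs. normal)")
plt.show()
```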
Grad-CAM is one of the class activation mapping (CAM) approaches used in this study. The predictions of the CNN models differ, implying that the architectures emphasize different parts of the CT scan. The DenseNet201 architecture relies on the inner part of the lungs to distinguish normal from COVID-19 images.
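A compact Grad-CAM sketch following the standard Keras recipe; the model, the conv layer name, and the use of the single sigmoid output as the class score are assumptions for illustration:

```python
import tensorflow as tf
from tensorflow import keras

def grad_cam(model, image, conv_layer_name):
    # Map the input to the target conv activations and the prediction.
    grad_model = keras.Model(model.inputs,
                             [model.get_layer(conv_layer_name).output,
                              model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, 0]                        # class score to explain
    grads = tape.gradient(score, conv_out)         # d(score)/d(activations)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool grads
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)  # normalize to [0, 1]
    return cam.numpy()  # upsample and overlay on the CT scan for display

# heatmap = grad_cam(model, ct_image, "conv5_block32_concat")  # e.g. DenseNet201
```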
Conclusion
References
[1] Travers Ching, Daniel S Himmelstein, Brett K Beaulieu-Jones, Alexandr A Kalinin, Brian T Do, Gregory P Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M Hoffman, et al. Automated deep transfer learning-based approach for detection of COVID-19 infection in chest X-rays.
[20] Ziwei Zhu, Zhang Xingming, Guihua Tao, Tingting Dan, Jiao Li, Xijie Chen, Yang Li, Zhichao Zhou, Xiang Zhang, Jinzhao Zhou, et al. Classification of COVID-19 with compressed chest CT image using deep learning on a large patient cohort.
Convolutional neural network deep learning classification of brain tumor types in magnetic resonance images using a developed web interface.
[25] Gianluca Pontone, Stefano Scafuri, Maria Elisabetta Mancini, Cecilia Agalbato, Marco Guglielmo, Andrea Baggiano, Giuseppe Muscogiuri, Laura Fusini, Daniele Andreini, Saima Mushtaq, et al.
COVIDNet-CT: an adapted deep convolutional neural network design for COVID-19 case detection from chest CT images.
Improved deep neural networks for COVID-19 detection from chest CT images with larger, more diverse learning.