Aim: This study aims to develop an automatic medical image analysis and detection system for the accurate classification of brain tumors from an MRI dataset. The study implements our novel MIDNet18 CNN architecture and compares it with the VGG16 CNN architecture for classifying normal brain images and brain tumor images.
Materials and methods: The novel MIDNet18 CNN architecture comprises 14 convolutional layers, 7 pooling layers, 4 dense layers and 1 classification layer. The dataset used for this study has two classes: normal brain MR images and brain tumor MR images. This binary brain MRI dataset consists of 2918 training images, 1458 validation images and 212 test images. The calculated independent sample size was 7 per group, keeping G-power at 80%.
Results: From the experimental results, the proposed MIDNet18 model obtained 98.7% accuracy, whereas the VGG16 model obtained an accuracy of 50%. Hence, the proposed MIDNet18 model performs better than VGG16. Conclusion: The improvement of the proposed model over the existing VGG16 model is statistically significant, with p < 0.001 (independent sample t-test).
Key words: Brain image classification, Convolutional neural network, deep learning, Brain tumour, Novel Medical Image Analysis and Detection network (MIDNet 18), VGG16
*Corresponding author: Ramya Mohan, Associate Professor, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamil Nadu, India. Email: ramyanallu@saveetha.com
Submitted: 25 October 2021; Accepted: 12 December 2021; Published: 26 January 2022
©2022 Mohan R et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License (http://creativecommons.org/licenses/by-nc-sa/4.0/).
Cancer is one of the major life-threatening diseases faced by people around the globe.1 According to statistics reported by Ferlay et al. (2021),2 cancer caused around 10 million deaths in 2020. If cancer is detected and treated early, mortality can be prevented, or the progression of the disease can at least be delayed, thereby increasing the lifespan of the patient. This research work aligns with the goal of aiding the public and healthcare professionals in delivering proper and faster treatment with improved diagnostic procedures.
Advances in artificial intelligence are showing tremendous results in varied areas, and their emergence in medical image analysis is very promising. Deep learning3 is used extensively to classify diseases of the lungs, skin, kidney, breast and retina using X-rays, CT scans and MR images. Most importantly, accurate and precise classification plays a pivotal role in medical image analysis. To address this point, the novel MIDNet18 was implemented in this paper for the accurate classification of brain tumors against normal brain MR images.
A wide range of research on CNNs for brain tumor classification has been carried out in the past five years: around 253 research articles were published in IEEE Xplore, and more than 1000 related articles have been published overall. Özyurt et al.4 performed brain tumor detection based on convolutional neural networks, and theirs is one of the most highly cited papers. Abdelaziz Ismael et al.3 proposed a deep learning approach for brain tumor classification using residual networks; for the given dataset, this approach gave an accuracy of more than 90% in comparison with other methods. The Discrete Cosine Transform–convolutional neural network–ResNet50 (DCT-CNN-ResNet50) architecture extends existing CNN algorithms to classify brain tumors from low-resolution images using super-resolution, a CNN and ResNet50.5 This work used a discrete cosine transform-based image fusion algorithm in combination with a CNN, and a classification accuracy of 98.4% was obtained for the specified dataset. Rajesh et al.6 used compressed discrete cosine transform coefficients as input to a CNN; this work was implemented on the ResNet50 architecture and performed well with good accuracy. Baranwal et al.7 proposed a system to classify glioma, meningioma and pituitary brain diseases using CNN and SVM classifiers. Kader et al.8 proposed the CNN-DWA model (convolutional neural network and deep watershed auto-encoder) for detecting and classifying brain tumor images.
Many convolutional neural networks have been architected for various datasets, but few are designed exclusively for medical datasets. Medical image analysis, classification and detection must be very precise, and our novel MIDNet18 architecture was designed exclusively for this purpose. The MIDNet18 CNN architecture has been applied to lung, skin and retina datasets and has shown promising results in each case. Both binary and categorical datasets have been trained on our novel MIDNet18 CNN architecture. Our research team is involved in image analysis and has recently been working extensively on deep learning methods, especially for medical images. The MIDNet18 CNN architecture classifies binary-class brain MRI images with simplified model construction, a simple methodology and high accuracy. This study was conducted on a binary dataset, where the focus was on distinguishing normal brain MR images from tumor MR images.
The study was conducted in the AI research lab of Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences. The dataset was downloaded from Kaggle.27 Since it was obtained from a public database, no ethical clearance was necessary. Two groups are involved in this study. Based on Ranjbarzadeh et al.,9 the sample size calculated for the study was 14 (7 per group) with parameters alpha 0.05, beta 0.2 and G-power 0.8 (Figure 1).
Figure 1. Sample Data Size Calculation using Clincalc.com keeping G-power at 80% for Binary Classification of Brain MRI images.
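The same sample-size calculation can also be reproduced programmatically. The sketch below uses statsmodels rather than clincalc.com; the effect size is an assumed value (the paper does not report one) chosen only to illustrate how a requirement of roughly 7 samples per group arises at alpha = 0.05 and power = 0.8.

```python
# Minimal sketch of an a priori sample-size calculation for a two-sample t-test.
# The effect size below is an assumption for illustration; the study itself used
# clincalc.com with alpha = 0.05, beta = 0.2 (power = 0.8).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.6,  # assumed (large) effect size
                                   alpha=0.05,        # significance level
                                   power=0.80)        # 1 - beta
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 7 per group
```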
The dataset consists of 2918 images belonging to two classes (presence of tumor and normal brain MRI) in the training folder. The validation folder consists of 1458 augmented images belonging to both classes. A separate set of 212 images belonging to both classes was kept in the test folder, duly labelled by medical experts from Saveetha Medical College and Hospital. This study was carried out on the novel medical image analysis and detection network (MIDNet18) CNN architecture, and its results were compared with the VGG16 architecture.
The study was conducted on a MacBook Air with an Apple M1 chip and 8 GB of memory. All the CNN models were run in Google Colab, which provides a single 12 GB NVIDIA Tesla K80 GPU. All analyses were conducted using SPSS software.28 The independent variables in this study are the input variables (brain tumour and non-tumour MRI images); the dependent variables are the output variables (accuracy, precision, recall, F1 score). An independent t-test was performed to compare the performance of the algorithms.
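The statistical comparison was run in SPSS; as an illustrative equivalent, a minimal sketch of the same independent-samples t-test in Python is shown below. The accuracy lists are placeholder values only, not the study's measurements.

```python
# Sketch of the independent-samples t-test comparing per-run accuracies of the
# two models. Values below are illustrative placeholders, not the study's data.
from scipy import stats

midnet18_acc = [98.7, 98.5, 98.9, 98.6, 98.8, 98.7, 98.4]  # placeholder values
vgg16_acc    = [50.1, 49.8, 50.0, 49.5, 50.2, 49.9, 50.3]  # placeholder values

# equal_var=True mirrors the "equal variances assumed" row reported by SPSS;
# set equal_var=False for Welch's test ("equal variances not assumed").
t_stat, p_value = stats.ttest_ind(midnet18_acc, vgg16_acc, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```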
Figure 2 shows the architecture of our novel MIDNet18 model. The model consists of 14 convolutional layers with a 3×3 kernel size; the ReLU activation function is used in all of them. These convolutional layers extract the features from the image. The input image size is maintained at 224×224. The model has 7 max pooling layers with a pooling size of 2. Though many models use average pooling, max pooling10 is preferred in the MIDNet18 CNN model as it greatly helps in highlighting the brighter pixels.11 Average pooling smoothens the image, thereby reducing the chance of detecting the tumor pixels. Batch normalization is an important addition in the MIDNet18 model, which helps in avoiding overfitting; in simple terms, batch normalization helps each layer learn more independently.
Figure 2. Proposed MIDNET18 Architecture.
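A minimal Keras sketch of an architecture matching this description is given below (14 3×3 convolutional layers, 7 max-pooling layers, batch normalization, 4 dense layers and a softmax classification layer on 224×224 inputs). The filter counts, dense-layer widths and exact placement of batch normalization are assumptions, as the paper does not list them; this is a sketch, not the authors' exact implementation.

```python
# Sketch of a MIDNet18-like model, assuming a TensorFlow/Keras implementation.
# Filter counts and dense-layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_midnet18(input_shape=(224, 224, 3), num_classes=2):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # Seven blocks of two 3x3 conv layers + batch norm + 2x2 max pooling
    # (7 blocks x 2 = 14 convolutional layers, 7 pooling layers).
    for filters in [32, 32, 64, 64, 128, 128, 256]:
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Flatten()(x)
    # Four dense layers with ReLU, then the softmax classification layer.
    for units in [512, 256, 128, 64]:
        x = layers.Dense(units, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs, name="MIDNet18")

model = build_midnet18()
model.summary()
```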
Step 1: Dataset: Upload the brain MR image dataset with two groups (tumor and normal brain). The dataset has separate training and testing folders, each containing normal MR images and brain tumor MR images. In order to increase the dataset size, the images are augmented and kept in the validation folder.
Step 2: Input Layer: Input images from the training folder are resized to 224×224 in order to improve training efficiency.
Step 3: Novel MIDNET18 model:
Convolutional Layers: There are 14 convolutional layers in total, each with a 3×3 kernel and ReLU activation.
Batch Normalization: The mean is initialized to 0 and the variance to 1 to stabilize the learning process.
MaxPooling Layer: The convolution output is downsampled using a 2×2 filter with a stride of 2.
Dense Layers: Each dense layer is activated with a ReLU activation function and has a fixed number of units.
Fully Connected Layer: A softmax activation function is used to classify the image.
Step 4: Training: The dataset is trained in the novel MIDNET18 architecture for 100 epochs with a batch size of 92.
Step 5: Testing: The model is evaluated by feeding it images from the test dataset, and the performance is measured using metrics such as accuracy, precision, recall, F1 score, area under the curve (AUC) and loss. A minimal code sketch of this training and evaluation pipeline is given below.
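The sketch below illustrates steps 2–5 under stated assumptions: it reuses the build_midnet18 helper from the earlier architecture sketch, assumes a Keras ImageDataGenerator pipeline with hypothetical directory names, and assumes the Adam optimizer and categorical cross-entropy loss, none of which are specified in the paper.

```python
# Sketch of the training/evaluation pipeline: resize to 224x224, train for
# 100 epochs with batch size 92, evaluate with accuracy/precision/recall/AUC.
# Directory names, optimizer and loss are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE, BATCH_SIZE, EPOCHS = (224, 224), 92, 100

gen = ImageDataGenerator(rescale=1.0 / 255)
train_data = gen.flow_from_directory("dataset/train", target_size=IMG_SIZE,
                                     batch_size=BATCH_SIZE, class_mode="categorical")
val_data = gen.flow_from_directory("dataset/validation", target_size=IMG_SIZE,
                                   batch_size=BATCH_SIZE, class_mode="categorical")
test_data = gen.flow_from_directory("dataset/test", target_size=IMG_SIZE,
                                    batch_size=BATCH_SIZE, class_mode="categorical",
                                    shuffle=False)

model = build_midnet18()  # from the earlier architecture sketch
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(name="precision"),
                       tf.keras.metrics.Recall(name="recall"),
                       tf.keras.metrics.AUC(name="auc")])

history = model.fit(train_data, validation_data=val_data, epochs=EPOCHS)
print(model.evaluate(test_data, return_dict=True))
```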
The VGG16 CNN architecture is a supervised model consisting of 13 convolutional layers and 3 dense layers (Figure 3).12 VGG16 was chosen for comparison with our architecture because it was one of the top-performing convolutional neural networks in the ILSVRC (ImageNet) 2014 competition. The two algorithms are compared with respect to testing accuracy, testing loss, F1 score, area under the curve, precision and recall.13
Figure 3. VGG 16 Architecture.
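For reference, the VGG16 baseline can be instantiated from the pre-defined Keras implementation as sketched below. Whether the paper trained VGG16 from scratch or fine-tuned ImageNet weights is not stated; random initialization and a two-unit softmax head are assumptions here.

```python
# Sketch of a VGG16 baseline for the binary (tumour vs. normal) task.
# weights=None (training from scratch) is an assumption.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))  # 13 conv layers
vgg16_model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(2, activation="softmax"),  # classification head: 3 dense layers in total
])
vgg16_model.compile(optimizer="adam", loss="categorical_crossentropy",
                    metrics=["accuracy"])
vgg16_model.summary()
```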
Figure 4 represents the training and validation loss of our MIDNet18 model. The loss is as high as 90% before the 15th iteration. As the iterations increase, the learning capacity and weight updates allow the MIDNet18 model to reduce the loss. The training and validation losses decrease in the same pattern until the 100th iteration, with slight variation. As the MIDNet18 architecture is designed with low structural complexity, the loss reduces to nearly zero by the 100th iteration for the given dataset.
Figure 4. Representation of the loss performance of MIDNet18 over iterations. The training and validation losses are high initially; as the iterations increase, the loss decreases almost linearly to zero.
From Figure 5, it is observed that the accuracy increases to more than 90% after the 15th iteration. The fewer the iterations, the fewer the weight updates. Between the 15th and 20th iterations, the accuracy rises to a saturation level of 98%; further increases in iterations show little variation in the accuracy percentage.
Figure 5. Representation of the accuracy performance of MIDNet18 over iterations. Training and validation accuracy initially increase to above 98%; as the iterations increase, the accuracy remains nearly constant.
From Figure 6, it is observed that the loss rises to 76% in the 1st iteration. As the iterations increase, the loss of the VGG16 model reduces to 69% and remains at 69% for the given dataset, even with further increases in iterations. A loss of 69% shows the poor performance of the VGG16 model for the given dataset.
Figure 6. Representation of the loss performance of VGG16 over iterations. The training and validation loss rises to 76% at the 1st iteration; as the iterations increase, the loss decreases to about 69% and remains constant until the 100th iteration.
It can be inferred from Figure 7 that the training accuracy of VGG16 increases and decreases randomly, with no clear pattern as iterations increase. The validation accuracy shows no significant variation, remaining constant at 50% for any increase in epochs. The learning and training capability of VGG16 is therefore poor for the given brain MRI image dataset.
Figure 7. Representation of the training and validation accuracy of VGG16 over iterations. Initially the training accuracy is 48%, and it varies drastically as the iterations increase. The validation accuracy remains constant at 50% from the 1st iteration to the 100th iteration.
As shown in Figure 16, an independent t-test was used to compare the accuracy of MIDNet18 and VGG16, and the proposed algorithm shows a statistically significant difference with P < 0.001. The MIDNet18 model obtained 98.7% accuracy, whereas the VGG16 model obtained an accuracy of 50%. Hence, the performance of the proposed MIDNet18 model is better than that of VGG16.
Figure 8. Training and Validation AUC (Area Under Curve) of VGG 16 Model. It is inferred from the training that AUC is not stable and highly nonlinear with the increase in the number of iterations. But the validation AUC in VGG 16 remains constant at 50%.
Figure 9. MIDNet18 training and validation AUC. The training AUC in MIDNet18 nearly reaches 99% and remains constant as the iterations increase. The validation AUC fluctuates slightly before the 55th iteration; with further iterations, it remains constant at around 99%.
Figure 10. VGG16 training and validation F1 score. It is inferred that VGG16 is too unstable with respect to performance metric F1 scores for both training and validation.
Figure 11. MIDNet18 training and validation F1 score. The training F1 score in MIDNet18 nearly reaches 100% and remains almost constant as the iterations increase. The validation F1 score fluctuates slightly before the 50th iteration; with further iterations, it remains almost constant at around 100%.
Figure 12. VGG 16 - Training and validation- Precision. It is inferred from the training that precision is not stable and highly nonlinear for increase in the iterations. In Validation, precision in VGG 16 remains constant at 50% for any change in iteration.
Figure 13. MIDNet18 training and validation precision. The training precision in MIDNet18 nearly reaches 99% and remains almost constant as the iterations increase. The validation precision fluctuates before the 50th iteration; with further iterations, it almost reaches 98%.
Figure 14. VGG 16 - Training and validation- Recall. It is inferred from the training that recall is not stable and highly nonlinear for increase in the iterations. In Validation, recall in VGG 16 remains constant at 50% for any change in iteration.
Figure 15. MIDNet18 training and validation recall. The training recall in MIDNet18 nearly reaches 99% and remains almost constant as the iterations increase. The validation recall fluctuates before the 50th iteration; with further iterations, it almost reaches 98%.
Figure 16. Box Plot graphical representation of the comparison of Mean Accuracy obtained from MIDNet18 and VGG16 model for Binary dataset classification of brain MRI images. The mean accuracy of MIDNet 18 is better than VGG16 and the standard deviation of MIDNet 18 is significantly higher than VGG16. X Axis: MIDNet 18 vs VGG16 Y Axis: Mean accuracy of detection ± 1 SD.
In Table 1, the proposed MIDNet18 model achieves training and testing accuracies of 99.42% and 98.78%, compared with VGG16, which gives training and testing accuracies of 49.23% and 50%. Our proposed model is thus about 50% more accurate than VGG16. Comparing the loss metrics, the testing loss of MIDNet18 is very low at around 2.01%, whereas VGG16 gives a high testing loss of 69.31%. This result clearly shows that the VGG16 model does not provide accurate detection for this medical image dataset. The other performance metrics likewise confirm the accuracy of the MIDNet18 model. The area under the curve (AUC) for VGG16 (Figure 8) is 50%, which is very low compared with 99.98% for MIDNet18 (Figure 9). The F1 score (Table 1) is measured for both training and testing: MIDNet18 (Figure 11) gives F1 scores of 98.79% and 98.76%, respectively, whereas VGG16 (Figure 10) has low training and testing F1 scores of 66%. A precision of 98.78% is achieved by MIDNet18 (Figure 13) and only 50% by VGG16 (Figure 12). The recall value of MIDNet18 (Figure 15) is 98.78%, whereas that of VGG16 (Figure 14) is 50%. Finally, it can be concluded that MIDNet18 outperforms VGG16 on all considered performance metrics in the classification of tumour and non-tumour brain images for the given dataset.
Table 1. Training Accuracy, Testing Accuracy, Testing Loss, AUC, F1 Score, Precision and Recall Values for Binary Classification (Tumour, Non-Tumour) of the Brain Image Dataset for the CNN Models

Architectures | Training Acc (%) | Testing Acc (%) | Testing Loss (%) | AUC (%) | F1 Score (%) | Precision (%) | Recall (%)
---|---|---|---|---|---|---|---
Our Novel MIDNet-18 | 99.42 | 98.78 | 2.01 | 99.98 | 98.79 | 98.78 | 98.78
VGG-16 | 49.23 | 50 | 69.31 | 50 | 66.66 | 50 | 50
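As an illustration, test-set metrics of the kind reported in Table 1 could be computed from a trained model's predictions with scikit-learn, as in the sketch below; model and test_data are assumed to come from the training sketch shown earlier.

```python
# Sketch of computing Table 1-style test metrics from model predictions.
# Assumes `model` and `test_data` (with shuffle=False) from the earlier sketch.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

probs = model.predict(test_data)        # per-class probabilities
y_pred = np.argmax(probs, axis=1)       # predicted class labels
y_true = test_data.classes              # ground-truth labels from the generator

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, probs[:, 1]))
```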
In Table 2, a significance value smaller than 0.001 shows that our hypothesis regarding the use of the MIDNet18 model holds good. With respect to changes in the input values (independent variables), the corresponding output values (dependent variables) also change.
Table 2. Multiple Comparisons for the Binary Classification Dataset of Brain MRI Images Using Accuracy as the Dependent Variable. The Mean Difference, Std. Error, Significance Value and Confidence Interval of the VGG16 and Proposed MIDNet-18 Algorithms Are Obtained.
Independent Samples Test

 | Levene's F | Levene's Sig. | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper
---|---|---|---|---|---|---|---|---|---
Accuracy (equal variances assumed) | 13.790 | 0.000 | 114.315 | 198 | 0.000 | 48.520 | 0.424 | 47.683 | 49.357
Accuracy (equal variances not assumed) | — | — | 114.315 | 104.869 | 0.000 | 48.520 | 0.424 | 47.678 | 49.362
In Table 3, mean accuracy and standard deviation values are obtained for the MIDNet18 and VGG16 architectures. N is the number of iterations considered for training on the brain MRI dataset. Our proposed model achieves a high mean accuracy of 97.83% over 100 iterations; further increases in iterations do not show much improvement in accuracy. The standard deviation of MIDNet18 is low, at 4.18%. For the same statistical analysis, the VGG16 model shows a mean accuracy of only 49.31%. It can therefore be concluded that VGG16 is not suitable for brain medical image classification.
Table 3. Statistical Parameter Analysis of Accuracy using SPSS. MIDNet 18 Model Gives Mean Accuracy of 97.83% Compared to VGG 16
Group Statistics (dependent variable: Accuracy)

Algorithm | N | Mean | Std. Deviation | Std. Error Mean
---|---|---|---|---
MIDNet-18 | 100 | 97.83 | 4.183 | 0.418
VGG-16 | 100 | 49.31 | 0.720 | 0.072
In this study, it was observed that the novel MIDNet18 model performed significantly better than the standard VGG16 model, with a p-value of less than 0.001 and a test accuracy of 98.78%. Various studies have used CNN models to classify medical images.8,14–16 In order to help radiologists detect brain tumors, an automatic tumor classification model should be in place;17 however, such a model needs to be highly precise in order to be useful. An RBFNN model has reported a recognition accuracy of 99.6%, and MRI images of patients' brains have been used to perform brain tumor differentiation with CNN algorithms.18 Sharma et al. and Badža and Barjaktarović implemented deep convolutional neural network (DCNN) models for brain tumor classification.19,20 Pattern classification has been used to differentiate primary brain tumors from metastases and to grade them;21 in that analysis, the accuracy, sensitivity and specificity of binary SVM classification were 85%, 87% and 79%, respectively.22 Brain tumor MRI datasets have also been investigated with a hybrid approach: an integrated brain tumor classification method that uses discrete wavelet transforms for feature extraction, genetic algorithms to reduce the number of features, and support vector machines (SVMs), with RMS values close to 0.1.23 An ensemble of deep features and machine learning classifiers has been proposed for brain tumor classification, attaining 93.22% training and 90.26% testing accuracy.24 MRI images have been classified using convolutional neural networks into six classes (gliomas, brain metastases, meningiomas, pituitary adenomas, acoustic neuromas and healthy tissue), with sensitivities, PPVs, AUCs and AUPRCs ranging from 91% to 97%, 73% to 99%, 0.97 to 0.98 and 0.9 to 1.0, respectively.25 Morad and Al-Dabbas proposed a combination of techniques for filtering, segmentation and feature selection:26 median and Slantlet filtering were used for feature extraction, together with K-means clustering and morphological operations; the resulting average accuracy was 97.1%, the area under the curve 0.98, the sensitivity 91.9% and the specificity 98.0%, making the method more accurate and faster than existing methods. Gu et al. (2021) implemented Convolutional Dictionary Learning with Local Constraints (CDLLC) for brain image classification,14 classifying brain tumor MR images with 88% accuracy in comparison with other methods.
Our proposed MIDNet18 model outperformed the VGG16 model in the medical image classification of tumour and non-tumour brain images. The MIDNet18 model learned well and achieved a binary classification accuracy of more than 98%, a statistically significant improvement with a p-value of less than 0.001 (independent sample t-test). MIDNet18 also performs strongly on the other metrics: accuracy, precision, recall and F1 score. From the performance analysis of the MIDNet18 model, it is observed that the model could also be applied to other, non-medical applications.
The authors have no conflicts of interest in this manuscript.
Author Ramya Mohan was involved in MIDNET18 architecture development, implementation, data collection, data analysis, and manuscript writing. Author Rama Arunmozhi was involved in data collection, data analysis and data validation. Author Kirupa Ganapathy was involved in conceptualization, manuscript writing, document editing and critical review of manuscripts and interpretation of Results.
The authors would like to express their gratitude to Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences for providing the necessary infrastructure to carry out this work successfully.
We are thankful to the following organizations for providing the financial support that enabled us to complete this study:
Saveetha University
Saveetha Institute of Medical and Technical Sciences
Saveetha School of Engineering
1. Siegel RL, Miller KD, Jemal A. "Cancer Statistics." 2015. Accessed April 2, 2015. https://www.cancer.gov/about-cancer/understanding/statistics.
2. Ferlay, Jacques, Murielle Colombet, Isabelle Soerjomataram, Donald M. Parkin, Marion Piñeros, Ariana Znaor, and Freddie Bray. 2021. “Cancer Statistics for the Year 2020: An Overview.” International Journal of Cancer. 10.1002/ijc.33588.
3. Abdelaziz Ismael, Sarah Ali, Ammar Mohammed, and Hesham Hefny. 2020. “An Enhanced Deep Learning Approach for Brain Cancer MRI Images Classification Using Residual Networks.” Artificial Intelligence in Medicine 102 (January): 101779.
4. Özyurt, Fatih, Eser Sert, Engin Avci, and Esin Dogantekin. 2019. "Brain Tumor Detection Based on Convolutional Neural Network with Neutrosophic Expert Maximum Fuzzy Sure Entropy." Measurement 147: 106830. 10.1016/j.measurement.2019.07.058. (https://www.sciencedirect.com/science/article/pii/S0263224119306876).
5. Deshpande, Anand, Vania V. Estrela, and Prashant Patavardhan. 2021. "The DCT-CNN-ResNet50 Architecture to Classify Brain Tumors with Super-Resolution, Convolutional Neural Network, and the ResNet50." Neuroscience Informatics 1 (4): 100013.
6. Rajesh, Bulla, Mohammed Javed, Ratnesh, and Shubham Srivastava. 2019. “DCT-CompCNN: A Novel Image Classification Network Using JPEG Compressed DCT Coefficients.” 2019 IEEE Conference on Information and Communication Technology. 10.1109/cict48419.2019.9066242.
7. Baranwal, Shubham Kumar, Krishnkant Jaiswal, Kumar Vaibhav, Abhishek Kumar, and R. Srikantaswamy. 2020. "Performance Analysis of Brain Tumour Image Classification Using CNN and SVM." 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA). 10.1109/icirca48905.2020.9183023.
8. Kader, Isselmou Abd El, Guizhi Xu, Zhang Shuai, and Sani Saminu. 2021. "Brain Tumor Detection and Classification by Hybrid CNN-DWA Model Using MR Images." Current Medical Imaging (Formerly: Current Medical Imaging Reviews). 10.2174/1573405617666210224113315.
9. Ranjbarzadeh Ramin, Abbas Bagherian Kasgari, Saeid Jafarzadeh Ghoushchi, Shokofeh Anari, Maryam Naseri, and Malika Bendechache. 2021. “Brain Tumor Segmentation Based on Deep Learning and an Attention Mechanism Using MRI Multi-Modalities Brain Images.” Scientific Reports 11 (1): 10930.
10. Murray, Naila, and Florent Perronnin. 2014. “Generalized Max Pooling.” 2014 IEEE Conference on Computer Vision and Pattern Recognition. 10.1109/cvpr.2014.317.
11. Hang, Siang Thye, and Masaki Aono. 2017. “Bi-Linearly Weighted Fractional Max Pooling.” Multimedia Tools and Applications. 10.1007/s11042-017-4840-5.
12. Jiang, Zhi-Peng, Yi-Yang Liu, Zhen-En Shao, and Ko-Wei Huang. 2021. “An Improved VGG16 Model for Pneumonia Image Classification.” Applied Sciences. 10.3390/app112311185.
13. Kaur, Prabhjot, Shilpi Harnal, Rajeev Tiwari, Fahd S. Alharithi, Ahmed H. Almulihi, Irene Delgado Noya, and Nitin Goyal. 2021. “A Hybrid Convolutional Neural Network Model for Diagnosis of COVID-19 Using Chest X-Ray Images.” International Journal of Environmental Research and Public Health 18 (22). 10.3390/ijerph182212191.
14. Gu, Xiaoqing, Zongxuan Shen, Jing Xue, Yiqing Fan, and Tongguang Ni. 2021. “Brain Tumor MR Image Classification Using Convolutional Dictionary Learning With Local Constraint.” Frontiers in Neuroscience 15 (May): 679847.
15. Khagi, Bijen, and Goo Rak Kwon. 2021. "Convolutional Neural Network-Based Natural Image and MRI Classification Using Gaussian Activated Parametric (GAP) Layer." IEEE Access 9: 96930–47.
16. Roy, Sanjiban Sekhar, Nishant Rodrigues, and Y-H Taguchi. 2020. “Incremental Dilations Using CNN for Brain Tumor Classification.” Applied Sciences. 10.3390/app10144915.
17. Jia, Zheshu, and Deyun Chen. 2020. “Brain Tumor Identification and Classification of MRI Images Using Deep Learning Techniques.” IEEE Access. 10.1109/access.2020.3016319.
18. Irmak, Emrah. 2021. “Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework.” Iranian Journal of Science and Technology, Transactions of Electrical Engineering. 10.1007/s40998-021-00426-9.
19. Sharma, Kirti, Ketna Khanna, Sapna Gambhir, and Mohit Gambhir. 2022. “Study on Brain Tumor Classification Through MRI Images Using a Deep Convolutional Neural Network.” International Journal of Information Retrieval Research. 10.4018/ijirr.289610.
20. Badža, Milica M., and Marko Č. Barjaktarović. 2020. “Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network.” Applied Sciences. 10.3390/app10061999.
21. Zacharaki, Evangelia I., Sumei Wang, Sanjeev Chawla, Dong Soo Yoo, Ronald Wolf, Elias R. Melhem, and Christos Davatzikos. 2009a. “Classification of Brain Tumor Type and Grade Using MRI Texture and Shape in a Machine Learning Scheme.” Magnetic Resonance in Medicine. 10.1002/mrm.22147.
22. Zacharaki Evangelia I, Wang S, Chawla S, Yoo DS, Wolf R, Melhem ER, et al. MRI-based classification of brain tumor type and grade using SVM-RFE. In: IEEE international symposium on biomedical imaging: From nano to macro. 2009b. 10.1109/isbi.2009.5193232.
23. Sanjeev Kumar, Chetna Dabas, Sunila Godara, “Classification of Brain MRI Tumor Images: A Hybrid Approach.” 2017. Procedia Computer Science 122 (January): 510–17.
24. Kang, Jaeyong, Zahid Ullah, and Jeonghwan Gwak. 2021. “MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning Classifiers.” Sensors 21 (6). 10.3390/s21062222.
25. Chakrabarty S, Sotiras A, Milchenko M, LaMontagne P, Hileman M, Marcus D. MRI-based identification and classification of major intracranial tumor types by using a 3D convolutional neural network: A retrospective multi-institutional analysis. Radiol Artif Intell. 2021;3(5):e200301. 10.1148/ryai.2021200301.
26. Morad, Ameer Hussian, and Hadeel Moutaz Al-Dabbas. 2020. "Classification of Brain Tumor Area for MRI Images." Journal of Physics: Conference Series 1660 (November): 012059.
27. “Kaggle: Your Machine Learning and Data Science Community.” n.d. Accessed December 17, 2021. https://www.kaggle.com/.
28. Kremelberg, David. 2010. Practical Statistics: A Quick and Easy Guide to IBM® SPSS® Statistics, STATA, and Other Statistical Software. SAGE Publications, Incorporated.