A Study on the Classification of Cancers with Lung Cancer Pathological Images Using Deep Neural Networks and Self-Attention Structures
Keywords
Image Classification, Deep Learning, Self-Attention, Depthwise Convolution, ResNet, Convolutional Neural Network, Lung Cancer Classification
Abstract
In this paper, we propose a ResNet-based lung cancer pathology image classification model using deep neural networks and self-attention modules. ResNet's shortcut structure, which adds the input to the output, not only alleviates the vanishing gradient problem but also allows the network to perform well even when many layers are stacked. Building on this idea, we adopt the pre-activation structure, in which batch normalization and the activation function are moved in front of the weight layers so that the identity signal passes through the shortcut unchanged. ResNet's bottleneck block consists of 1x1, 3x3, and 1x1 convolution layers; we replace the 3x3 convolution with a depthwise convolution followed by a 1x1 (pointwise) convolution, so that the convolution operation in the channel direction is carried out by the pointwise layer. In addition, channel attention and spatial attention are applied after the bottleneck block as self-attention modules that emphasize informative features. Finally, batch normalization and an activation function are applied; using the Funnel activation function, which takes two-dimensional spatial context into account, the model ends with global average pooling and a fully connected layer with a sigmoid activation. The accuracy, precision, recall, and F1-score of our method are 82.83%, 83%, 84.14%, and 83.56%, respectively. Experiments show that our method outperforms existing ResNet-based models.
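To make the building blocks in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation, of the two operations that replace the standard 3x3 convolution in the bottleneck (a depthwise 3x3 convolution and a 1x1 pointwise convolution) plus squeeze-and-excite-style channel-attention gating. Function names, shapes, and the "same" padding/stride-1 choices are illustrative assumptions.

```python
import numpy as np

def depthwise_conv3x3(x, kernels):
    """Depthwise 3x3 convolution: each channel is convolved with its own
    3x3 kernel, with no mixing across channels.
    x: (C, H, W) feature map, kernels: (C, 3, 3). Same padding, stride 1."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad spatial dims
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * kernels[c])
    return out

def pointwise_conv1x1(x, weights):
    """1x1 convolution: mixes information across channels at each spatial
    position. weights: (C_out, C_in), x: (C_in, H, W) -> (C_out, H, W)."""
    return np.tensordot(weights, x, axes=([1], [0]))

def channel_attention(x):
    """Channel attention (squeeze-and-excite style, simplified): global
    average pooling per channel, a sigmoid gate, then channel-wise rescaling
    of the feature map."""
    pooled = x.mean(axis=(1, 2))              # (C,) channel descriptors
    gate = 1.0 / (1.0 + np.exp(-pooled))      # sigmoid in (0, 1)
    return x * gate[:, None, None]
```

A full CBAM-style module would learn the gating weights and add a spatial-attention branch; this sketch only shows why the depthwise/pointwise split is cheaper: the 3x3 kernels touch one channel each, and all cross-channel mixing is deferred to the 1x1 layer.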