Optimizing Capsule Endoscopy Detection: A Deep Learning Approach with L-Softmax and Laplacian-SGD

Sana Danish*, Nimra Shoket Ali**, Jamshaid Ul Rahman***
*-** Abdus Salam School of Mathematical Sciences, Government College University, Lahore, Pakistan.
*** School of Mathematical Sciences, Jiangsu University, Zhenjiang, China.
Periodicity: July - December 2024

Abstract

Capsule endoscopy has emerged as a non-invasive diagnostic tool for gastrointestinal diseases; however, efficient disease classification remains challenging because of the inherent complexity of image analysis. Moreover, the extensive time required for manual examination of capsule endoscopy images has led researchers and clinicians to seek time-efficient automated detection methods, which is where deep learning (DL) offers clear advantages. This research proposes a novel approach that combines L-Softmax with Laplacian Smoothing Stochastic Gradient Descent (LSSGD) within a ResNet architecture to enhance disease classification accuracy on capsule endoscopy images from the Kvasir dataset. The L-Softmax function is integrated into the DL framework to promote better class separation and feature representation, while LSSGD is employed to mitigate overfitting and improve model generalization. Experimental results demonstrate that the proposed methodology is stable and straightforward to apply in capsule endoscopy.
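Because the full text is not reproduced here, the following Python (PyTorch) sketch is only an illustrative reconstruction of the two ingredients named in the abstract, not the authors' implementation: a large-margin (L-Softmax) output layer, which replaces the target-class logit ||w_y|| ||x|| cos(theta) with ||w_y|| ||x|| psi(theta), and a Laplacian-smoothing SGD step that pre-multiplies each gradient by (I + sigma*A)^(-1) via an FFT. The margin m, smoothing strength sigma, feature width, batch size, and the 8-class Kvasir setting are illustrative assumptions.

import math
import torch
import torch.nn as nn


class LSoftmaxLinear(nn.Module):
    # Final classification layer whose target-class logit uses the L-Softmax
    # margin psi(theta) = (-1)^k * cos(m*theta) - 2k on [k*pi/m, (k+1)*pi/m].
    def __init__(self, in_features, num_classes, margin=4):
        super().__init__()
        self.margin = margin
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, in_features))

    def forward(self, x, target=None):
        logits = x @ self.weight.t()                      # plain logits w_j . x
        if target is None:                                # inference: no margin
            return logits
        w_y = self.weight[target]                         # (batch, in_features)
        w_norm, x_norm = w_y.norm(dim=1), x.norm(dim=1)
        cos_theta = (w_y * x).sum(dim=1) / (w_norm * x_norm + 1e-12)
        theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
        k = torch.floor(theta * self.margin / math.pi)
        sign = 1.0 - 2.0 * (k % 2)                        # (-1)^k
        psi = sign * torch.cos(self.margin * theta) - 2.0 * k
        margin_logit = (w_norm * x_norm * psi).unsqueeze(1)
        return logits.scatter(1, target.unsqueeze(1), margin_logit)


def laplacian_smooth(grad, sigma=1.0):
    # Solve (I + sigma * A) g_s = g, with A the 1-D periodic graph Laplacian,
    # via FFT; this damps high-frequency noise in the stochastic gradient.
    g = grad.reshape(-1)
    n = g.numel()
    freqs = torch.arange(n, dtype=grad.dtype, device=grad.device)
    denom = 1.0 + sigma * (2.0 - 2.0 * torch.cos(2.0 * math.pi * freqs / n))
    return torch.fft.ifft(torch.fft.fft(g) / denom).real.reshape(grad.shape)


def lssgd_step(params, lr=0.01, sigma=1.0):
    # One Laplacian-smoothing SGD update, applied parameter-wise.
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p -= lr * laplacian_smooth(p.grad, sigma)


# Toy usage: 512-dim features (e.g. from a ResNet backbone), 8 Kvasir classes.
head = LSoftmaxLinear(512, 8, margin=4)
features, labels = torch.randn(16, 512), torch.randint(0, 8, (16,))
loss = nn.functional.cross_entropy(head(features, labels), labels)
loss.backward()
lssgd_step(head.parameters(), lr=0.01, sigma=1.0)

In this sketch the smoothing is applied per parameter tensor; a larger sigma damps gradient noise more aggressively, which is the mechanism the abstract credits for reduced overfitting and better generalization.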

Keywords

Capsule Endoscopy, Deep Learning, Laplacian Smoothing Stochastic Gradient Descent, L-Softmax.

How to Cite this Article?

Danish, S., Ali, N. S., and Rahman, J. Ul. (2024). Optimizing Capsule Endoscopy Detection: A Deep Learning Approach with L-Softmax and Laplacian-SGD. i-manager’s Journal on Mathematics, 13(2), 10-21.
