Federated Learning-Based IoT Security Model for Privacy Preserving Analytics

Navanath N. Kumbhar*, Prashant V. Mane-Deshmukh**
* Department of Electronics, Mudhoji College, Phaltan, Maharashtra.
** Department of Electronics, JSPM's Rajarshi Shahu Commerce and Science College, Uruli Dewachi, Pune, Maharashtra.
Periodicity: July - December 2025
DOI: https://doi.org/10.26634/jcc.12.2.22361

Abstract

The exponential expansion of the Internet of Things (IoT) ecosystem has accelerated the need for real-time, distributed data analytics while intensifying privacy and security risks arising from centralized data collection. Federated Learning (FL) provides a promising alternative by collaboratively training global models across edge devices without exposing raw data. Nevertheless, conventional FL frameworks face multiple challenges, including susceptibility to gradient inversion, membership inference, and poisoning attacks, as well as significant communication and energy overheads on resource-constrained IoT nodes. To address these limitations, FL-ISM is proposed, a federated learning–based IoT security model that integrates secure aggregation, calibrated differential privacy, and Byzantine-resilient optimization with reputation-aware client selection and communication compression mechanisms. The system and threat model are formally defined, privacy and robustness guarantees are derived, and FL-ISM is evaluated on intrusion and anomaly detection benchmarks under non-IID data conditions. Experimental results demonstrate that FL-ISM not only achieves competitive predictive performance but also reduces uplink traffic and effectively mitigates backdoor and inference attacks, thereby enabling scalable, privacy-preserving, and secure analytics in safety-critical IoT environments.
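The full text is not reproduced here, but the mechanisms the abstract names can be made concrete. The sketch below is a minimal, illustrative NumPy simulation of one federated round, not FL-ISM's actual implementation: clients clip their model updates and add Gaussian noise (the usual calibration route to differential privacy), compress them with top-k sparsification as a simple stand-in for communication compression, and the server aggregates with a coordinate-wise trimmed mean, a standard Byzantine-resilient rule. All function names, parameters, and values are assumptions made for illustration.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_mult=0.1, rng=None):
    # Clip the client's update to L2 norm <= clip_norm, then add Gaussian
    # noise scaled to that norm (the Gaussian mechanism commonly used to
    # calibrate differential privacy; epsilon/delta accounting is omitted).
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

def top_k_sparsify(update, k):
    # Keep only the k largest-magnitude coordinates to reduce uplink traffic.
    sparse = np.zeros_like(update)
    keep = np.argsort(np.abs(update))[-k:]
    sparse[keep] = update[keep]
    return sparse

def trimmed_mean(updates, trim_ratio=0.2):
    # Coordinate-wise trimmed mean: per coordinate, drop the lowest and
    # highest trim_ratio fraction of client values, then average the rest.
    # This bounds the influence of a minority of poisoned updates.
    stacked = np.sort(np.stack(updates), axis=0)
    m = int(len(updates) * trim_ratio)
    return stacked[m:len(updates) - m].mean(axis=0)

# One simulated round: 10 clients, one of them Byzantine.
rng = np.random.default_rng(0)
dim = 100
client_updates = [rng.normal(0.0, 1.0, dim) for _ in range(10)]
client_updates[0] = 50.0 * np.ones(dim)  # poisoned update

prepared = [top_k_sparsify(clip_and_noise(u, rng=rng), k=20)
            for u in client_updates]
global_update = trimmed_mean(prepared, trim_ratio=0.2)
print("aggregate norm:", np.linalg.norm(global_update))
```

Secure aggregation and reputation-aware client selection are omitted for brevity; in a deployment the clipped, noised updates would additionally be masked before leaving the device so that the server only ever observes their sum.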

Keywords

Federated Learning, IoT, Secure Aggregation, Differential Privacy, Adversarial Robustness, Edge Computing.

How to Cite this Article?

Kumbhar, N. N., and Mane-Deshmukh, P. V. (2025). Federated Learning-Based IoT Security Model for Privacy Preserving Analytics. i-manager’s Journal on Cloud Computing, 12(2), 35-41. https://doi.org/10.26634/jcc.12.2.22361
