Show simple item record
Robust Federated Learning With Contrastive Learning and Meta-Learning
| dc.contributor.author | Zhang, Huan | |
| dc.contributor.author | Chen, Yuxiang | |
| dc.contributor.author | Li, Kuanching | |
| dc.contributor.author | Li, Yuhui | |
| dc.contributor.author | Zhou, Sisi | |
| dc.contributor.author | Liang, Wei | |
| dc.contributor.author | Poniszewska-Maranda, Aneta | |
| dc.date | 2026-03-26 | |
| dc.date.accessioned | 2026-03-09T13:30:05Z | |
| dc.date.available | 2026-03-09T13:30:05Z | |
| dc.identifier.citation | H. Zhang, Y. Chen, K. Li, Y. Li, S. Zhou, W. Liang, A. Poniszewska-Maranda. Robust Federated Learning with Contrastive Learning and Meta-Learning, International Journal of Interactive Multimedia and Artificial Intelligence, vol. 9, no. 6, pp. 38-51, 2026, http://doi.org/10.9781/ijimai.2025.09.004 | es_ES |
| dc.identifier.uri | https://reunir.unir.net/handle/123456789/19145 | |
| dc.description.abstract | Federated learning is regarded as an effective approach to addressing data privacy issues in the era of artificial intelligence. Still, it faces the challenges of unbalanced data distribution and client vulnerability to attacks. Existing research addresses these challenges but ignores the situation where abnormal updates account for a large proportion of all updates, which may cause the aggregated model to absorb so much abnormal information that it deviates from the normal update direction, thereby reducing model performance. Some methods are also unsuitable for non-independent and identically distributed (non-IID) settings, where the lack of information on small-category data can lead to inaccurate predictions. In this work, we propose a robust federated learning architecture, called FedCM, which integrates contrastive learning and meta-learning to mitigate the impact of poisoned client data on global model updates. The approach improves feature representations by combining extracted data characteristics with the previous round's local model through contrastive learning, improving accuracy. Additionally, a meta-learning method based on Gaussian-noise model parameters is employed to fine-tune the local model using the global model, addressing the challenges posed by non-IID data and thereby enhancing the model's robustness. Experimental validation is conducted on real datasets, including CIFAR10, CIFAR100, and SVHN. The experimental results show that FedCM achieves the highest average model accuracy across all proportions of attacked clients. In the case of a non-IID distribution with a parameter of 0.5 on CIFAR10, under attacked-client proportions of 0.2, 0.5, and 0.8, FedCM improves the average accuracy over the baseline methods by 8.2%, 7.9%, and 4.6%, respectively. Across different proportions of attacked clients, FedCM achieves at least 4.6%, 5.2%, and 0.45% improvements in average accuracy on the CIFAR10, CIFAR100, and SVHN datasets, respectively. FedCM also converges faster in all training groups, with a clear advantage on the SVHN dataset, where the number of training rounds required for convergence is reduced by approximately 34.78% compared to other methods. | es_ES |
| dc.language.iso | eng | es_ES |
| dc.publisher | UNIR | es_ES |
| dc.relation.uri | https://doi.org/10.9781/ijimai.2025.09.004 | es_ES |
| dc.rights | openAccess | es_ES |
| dc.subject | Contrastive Learning | es_ES |
| dc.subject | Federated Learning | es_ES |
| dc.subject | Meta-Learning | es_ES |
| dc.subject | Non-Independent and Identically Distributed (Non-IID) | es_ES |
| dc.title | Robust Federated Learning With Contrastive Learning and Meta-Learning | es_ES |
| dc.type | article | es_ES |
| reunir.tag | ~IJIMAI | es_ES |
| dc.identifier.doi | https://doi.org/10.9781/ijimai.2025.09.004 |
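The abstract describes two mechanisms: a contrastive term that steers each client's representation toward the global model and away from its own previous-round local model, and a Gaussian-noise perturbation of model parameters used when fine-tuning the local model from the global one. The sketch below illustrates both ideas under stated assumptions; it is not the paper's implementation, and the function names, the cosine-similarity choice, and the single gradient step are illustrative placeholders.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Model-level contrastive loss: treat the global model's feature of a
    sample as the positive and the previous-round local model's feature as
    the negative, so training pulls the local representation toward the
    global one (a common formulation; the paper's exact loss may differ)."""
    pos = np.exp(cosine_sim(z_local, z_global) / tau)
    neg = np.exp(cosine_sim(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))

def gaussian_perturbed_finetune(w_global, grad, lr=0.1, sigma=0.01, seed=0):
    """Hypothetical sketch of the Gaussian-noise fine-tuning step: start the
    local update from the global weights plus small Gaussian noise, then take
    one gradient step on the local objective."""
    rng = np.random.default_rng(seed)
    w_init = w_global + rng.normal(0.0, sigma, size=w_global.shape)
    return w_init - lr * grad
```

As a sanity check, a local feature aligned with the global feature yields a smaller contrastive loss than one aligned with the previous local feature, which is the direction of pull the abstract describes.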





