Abstract
Recently, large language models (LLMs) have attracted considerable interest due to their adaptability and extensibility in emerging applications, including communication networks. Zero-touch network and service management (ZSM) is anticipated to support mobile edge computing networks and LLMs as a service, as it aims to provide network automation and service management without manual intervention. However, LLMs are vulnerable to data and model privacy issues that undermine their trustworthiness when deployed for user-facing services. In this article, we explore the security vulnerabilities associated with fine-tuning LLMs in ZSM, in particular the membership inference attack. We characterize an attack network that can perform a membership inference attack when the attacker has access to the fine-tuned model for the downstream task. We show that membership inference attacks are effective for any downstream task, which can lead to personal data breaches when using LLMs as a service. The experimental results show that an attack success rate of up to 92% can be achieved on the named entity recognition task. Based on the experimental analysis, we discuss possible defense mechanisms and present potential research directions to make LLMs more trustworthy in the context of 6G networks.
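The abstract's core threat, membership inference against a fine-tuned model, can be illustrated with a minimal loss-threshold sketch. This is not the attack network from the article: the per-example losses, threshold, and distributions below are toy assumptions standing in for the intuition that a fine-tuned model tends to assign lower loss to its training members than to unseen examples.

```python
# Toy loss-threshold membership inference sketch (illustrative only).
# Assumption: training members of the fine-tuned model get lower loss
# than held-out non-members; all numbers here are simulated.
import random

random.seed(0)

# Simulated per-example losses from a hypothetical fine-tuned model.
member_losses = [random.gauss(0.4, 0.2) for _ in range(1000)]     # in fine-tuning set
nonmember_losses = [random.gauss(1.2, 0.4) for _ in range(1000)]  # held out

def infer_membership(loss, threshold=0.8):
    """Guess 'member' when the model's loss on the example is low."""
    return loss < threshold

# Attack success rate = fraction of correct member/non-member guesses.
correct = sum(infer_membership(l) for l in member_losses) + \
          sum(not infer_membership(l) for l in nonmember_losses)
success_rate = correct / (len(member_losses) + len(nonmember_losses))
print(f"attack success rate: {success_rate:.2f}")
```

In practice the attacker would query the deployed fine-tuned model for these losses (or confidence scores), which is exactly the access assumption the article makes for LLM-as-a-service deployments.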
| Original language | English |
|---|---|
| Journal | IEEE Communications Magazine |
| DOIs | |
| Publication status | Accepted/In press - 2025 |
| Externally published | Yes |
Fingerprint
Dive into the research topics of 'Pathway to Secure and Trustworthy ZSM for LLMs: Attacks, Defense, and Opportunities'. Together they form a unique fingerprint.