Daily Arxiv

This page curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Breaking Android with AI: A Deep Dive into LLM-Powered Exploitation

Created by
  • Haebom

Authors

Wanni Vidulige Ishan Perera, Xing Liu, Fan Liang, Junyi Zhang

Outline

This paper explores the automation of Android penetration testing using artificial intelligence (AI) and large language models (LLMs), specifically the detection and execution of rooting techniques with PentestGPT. The authors compare existing manual rooting processes with AI-based exploit generation to evaluate the efficiency, reliability, and scalability of AI-driven automated penetration testing. They implement both manual and AI-generated rooting scripts on the Genymotion Android emulator and develop a web application that integrates the OpenAI API to automate LLM-based script generation. They evaluate the effectiveness of the AI-generated exploits, analyze their strengths and weaknesses, and provide security recommendations covering ethical aspects and exploitability. The findings show that while LLMs simplify the exploitation process, human intervention remains necessary for accuracy and ethical application.
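
The paper mentions a web application that integrates the OpenAI API to automate LLM-based script generation, but the summary does not reproduce that implementation. The minimal Python sketch below only illustrates the general pattern such a service might follow: a small Flask endpoint forwards a tester-supplied task description to the Chat Completions API and returns the model's reply for human review. The endpoint name, model choice, and prompts are assumptions, not the authors' code.

# Minimal sketch of an OpenAI-backed generation endpoint (assumed design,
# not the authors' implementation). Requires: pip install flask openai,
# plus the OPENAI_API_KEY environment variable.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You assist with authorized security testing on lab emulators only. "
    "Respond with high-level, reviewable steps; a human tester validates everything."
)

@app.post("/generate")
def generate():
    # Expects JSON like {"task": "..."} describing the authorized test scenario.
    task = (request.get_json(silent=True) or {}).get("task", "")
    if not task:
        return jsonify(error="missing 'task' field"), 400

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    # Return the raw model output; the paper stresses that a human reviews it.
    return jsonify(output=resp.choices[0].message.content)

if __name__ == "__main__":
    app.run(debug=False)

This mirrors the paper's conclusion in design terms: the LLM only drafts output, and a human operator remains in the loop before anything is executed.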

Takeaways, Limitations

Takeaways:
Demonstrates the efficiency and potential of automating Android penetration testing with AI-based tools.
Analyzes the pros and cons of generating automated exploits with LLMs.
Discusses the ethical issues raised by AI-based penetration testing and how to prevent misuse of generated exploits.
Offers a new perspective on AI-based cybersecurity research and mobile security.
Limitations:
The study may be limited to a specific LLM (PentestGPT) and a specific Android emulator environment.
Further research is needed to verify the accuracy and reliability of LLM outputs.
Testing across different Android versions and device environments may be limited.
A comprehensive solution to the ethical concerns of AI-based automated penetration testing may be lacking.