Dr. Obieda Ananbeh

PhD in Computer Science

Sr. Software Engineer

Microsoft Certified

Java Developer

AI

Assessing the Effectiveness of Large Language Models for Java Vulnerability Repair: A Comparative Study

Research Overview

The integration of advanced artificial intelligence into software engineering presents significant opportunities for automated vulnerability remediation. This study conducts a rigorous comparative evaluation of prominent large language models to determine their proficiency in repairing Java-based software vulnerabilities. By systematically assessing multiple models, the research quantifies their repair accuracy, identifies prevalent limitations, and benchmarks their performance against conventional repair methodologies.
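As a minimal illustration (not drawn from the study's dataset), the following sketch shows the kind of Java vulnerability such research targets: SQL injection through string concatenation, alongside the shape of its conventional repair. The class and method names are hypothetical.

```java
// Hypothetical demo class: contrasts a vulnerable query-building pattern
// with the standard parameterized repair an automated tool would produce.
public class InjectionDemo {
    // Vulnerable pattern: attacker-controlled input is spliced into SQL.
    static String unsafeQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    // Repaired pattern (sketch): the query text is fixed and the input is
    // bound as a parameter, e.g. via JDBC:
    //   PreparedStatement ps =
    //       conn.prepareStatement("SELECT * FROM users WHERE name = ?");
    //   ps.setString(1, userInput);

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The concatenated query now contains an attacker-injected tautology.
        System.out.println(unsafeQuery(attack));
    }
}
```

A repair model is judged on whether it transforms code like `unsafeQuery` into the parameterized form without altering intended behavior.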

Methodology and Analytical Approach

The investigation uses a structured dataset of known Java vulnerabilities to rigorously test the repair capabilities of the selected large language models. The evaluation metrics encompass repair precision, contextual understanding of the codebase, and the computational efficiency of the deployed models. The empirical results highlight both the potential of generative artificial intelligence to accelerate security patching and the current limits of these technologies when they confront complex architectural vulnerabilities.
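One plausible, simplified reading of a repair-precision metric can be sketched as follows. The `Attempt` record, the notion of "passes security tests", and the exact formula are assumptions for illustration, not the paper's actual definitions.

```java
import java.util.List;

// Minimal sketch of a repair-precision metric: the fraction of model
// repair attempts whose patch passes the vulnerability's security tests.
public class RepairPrecision {
    // Hypothetical record of one repair attempt: the model's patch text
    // and whether the patched code passed the associated security tests.
    record Attempt(String patch, boolean passesSecurityTests) {}

    // Precision = (attempts with passing patches) / (total attempts).
    static double precision(List<Attempt> attempts) {
        if (attempts.isEmpty()) return 0.0;
        long correct = attempts.stream()
                .filter(Attempt::passesSecurityTests)
                .count();
        return (double) correct / attempts.size();
    }

    public static void main(String[] args) {
        List<Attempt> attempts = List.of(
                new Attempt("use PreparedStatement", true),
                new Attempt("manual input sanitization", false),
                new Attempt("parameterized query", true),
                new Attempt("no change", false));
        System.out.println(precision(attempts)); // prints 0.5
    }
}
```

In practice a study would pair such a score with the other stated metrics, since a patch can pass tests while degrading contextual fit or costing disproportionate compute.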

Implications for Software Security

This comparative analysis contributes critical empirical evidence to the domain of automated software security. It equips practitioners and researchers with a precise understanding of the operational readiness of large language models for Java vulnerability repair, and it establishes a formal foundation for developing more robust, security-focused artificial intelligence frameworks tailored to enterprise software maintenance.

Read The Paper