A Reinforcement Learning-Guided Aquila Optimizer with Adaptive Strategy Selection for Enhanced Optimization in Complex Search Spaces
DOI:
https://doi.org/10.14741/ijmcr/v.14.2.16

Keywords:
Reinforcement Learning, Aquila Optimizer, RLG-AO, Adaptive Strategy Selection, Metaheuristic Optimization, Benchmark Functions, Optimization Algorithms

Abstract
This research introduces a Reinforcement Learning-Guided Aquila Optimizer (RLG-AO) designed to solve complex global optimization problems through adaptive strategy selection. The traditional Aquila Optimizer (AO) relies on fixed or iteration-based switching between search strategies, which reduces flexibility in dynamic environments and can lead to premature convergence. To address this issue, the developed approach integrates a reinforcement learning mechanism that dynamically selects search strategies based on real-time feedback during optimization. The process is modeled as a sequential decision-making task in which the RL component chooses suitable operators from state information comprising fitness improvement, population diversity, and overall search progress. The model's performance is assessed on benchmark functions such as Sphere, Rastrigin, and Ackley. Results show that RLG-AO improves convergence speed, enhances solution quality, and increases robustness compared with the standard AO. Overall, adaptive strategy selection proves effective in boosting performance across challenging optimization scenarios.
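To make the idea of reinforcement-learning-guided operator selection concrete, the following is a minimal illustrative sketch, not the authors' implementation: an epsilon-greedy bandit (a simple stand-in for the paper's RL component) chooses between a hypothetical exploitative and a hypothetical explorative operator on the Sphere benchmark, with the fitness improvement serving as the reward signal. All function names and parameter values here are assumptions for illustration.

```python
import random


def sphere(x):
    """Sphere benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)


def local_step(x, rng):
    """Hypothetical exploitation-style operator: small perturbation."""
    return [v + rng.gauss(0, 0.1) for v in x]


def global_step(x, rng):
    """Hypothetical exploration-style operator: large perturbation."""
    return [v + rng.gauss(0, 1.0) for v in x]


def rl_guided_search(dim=5, iters=500, eps=0.2, alpha=0.1, seed=0):
    """Epsilon-greedy operator selection rewarded by fitness improvement."""
    rng = random.Random(seed)
    operators = [local_step, global_step]
    q = [0.0, 0.0]                        # value estimate per operator
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sphere(x)
    for _ in range(iters):
        # Epsilon-greedy selection: explore a random operator with
        # probability eps, otherwise pick the highest-valued one.
        if rng.random() < eps:
            a = rng.randrange(len(operators))
        else:
            a = max(range(len(operators)), key=lambda i: q[i])
        cand = operators[a](x, rng)
        f = sphere(cand)
        reward = best - f                 # positive when fitness improves
        q[a] += alpha * (reward - q[a])   # incremental value update
        if f < best:                      # greedy acceptance of improvements
            x, best = cand, f
    return best


print(rl_guided_search())
```

In the actual RLG-AO, the state would also encode population diversity and search progress, and the action set would cover the AO's full repertoire of search strategies; the bandit above only captures the feedback loop that replaces fixed iteration-based switching.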
