
Adjustable Pheromone Reinforcement Strategies for Problems with Efficient Heuristic Information

Nikola Ivkovic, Robert Kudelic and Marin Golub

Abstract

Ant colony optimization (ACO) is a well-known class of swarm intelligence algorithms suitable for solving many NP-hard problems. An important component of such algorithms is a record of pheromone trails that reflect the colonies' experiences with previously constructed solutions of the problem instance being solved. By using pheromones, the algorithm builds a probabilistic model that is exploited for constructing new and, hopefully, better solutions. Traditionally, there are two different strategies for updating pheromone trails. The best-so-far (global best) strategy is rather greedy and can cause the algorithm to converge too quickly toward suboptimal solutions. The other strategy, iteration best, promotes exploration and slower convergence, which is sometimes too slow and lacks focus. To allow better adaptability of ant colony optimization algorithms, we use κ-best, max-κ-best, and 1/κ-best strategies that form the entire spectrum of strategies between best-so-far and iteration best, and go beyond it. Selecting a suitable strategy depends on the type of problem, the parameters, the heuristic information, and the conditions in which the ACO is used. In this research, we use two representative combinatorial NP-hard problems, the symmetric traveling salesman problem (TSP) and the asymmetric traveling salesman problem (ATSP), for which very effective heuristic information is widely known, to empirically analyze the influence of the strategies on algorithmic performance. The experiments are carried out on 45 TSP and 47 ATSP instances using the MAX-MIN Ant System (MMAS) variant of ACO, with and without local optimizations, with each problem instance repeated 101 times for 24 different pheromone reinforcement strategies. The results show that the MMAS with adjustable pheromone reinforcement strategies outperformed the MMAS with classical strategies in a large majority of cases.
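The abstract does not define the κ-best family in detail, so the Python sketch below only illustrates one plausible reading of an adjustable reinforcement strategy inside a MAX-MIN Ant System update: the reinforcing solution is taken as the best one found within the most recent κ iterations, which reduces to iteration best for κ = 1 and approaches best-so-far as κ grows. The function name, the parameter defaults (rho, tau_min, tau_max) and the toy data are illustrative assumptions, not the authors' implementation.

from collections import deque

def kappa_best_update(pheromone, recent_solutions, kappa, rho=0.02,
                      tau_min=0.01, tau_max=1.0):
    """Hypothetical kappa-best pheromone update for a MAX-MIN Ant System.

    recent_solutions holds (cost, tour) pairs from the latest iterations;
    the reinforcing tour is the best one among the last kappa entries.
    """
    # Evaporate all trails and clamp them to the MMAS lower bound tau_min.
    for edge in pheromone:
        pheromone[edge] = max(tau_min, (1.0 - rho) * pheromone[edge])

    # Choose the reinforcing solution: best of the last kappa iterations.
    window = list(recent_solutions)[-kappa:]
    best_cost, best_tour = min(window, key=lambda s: s[0])

    # Deposit pheromone on the edges of the chosen tour, clamped to tau_max.
    deposit = 1.0 / best_cost
    n = len(best_tour)
    for i in range(n):
        edge = (best_tour[i], best_tour[(i + 1) % n])
        pheromone[edge] = min(tau_max, pheromone.get(edge, tau_min) + deposit)
    return pheromone

# Illustrative usage on a toy 4-city instance (hypothetical data).
history = deque(maxlen=50)
history.append((20.0, [0, 1, 2, 3]))
history.append((18.5, [0, 2, 1, 3]))
trails = {(i, j): 1.0 for i in range(4) for j in range(4) if i != j}
trails = kappa_best_update(trails, history, kappa=2)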

Similar articles

Xiaoxia Zhang, Xin Shen and Ziqiao Yu
Quality of service multicast routing is an important research topic in networks. Research has sought to obtain a multicast routing tree at the lowest cost that satisfies bandwidth, delay and delay jitter constraints. Due to its non-deterministic polynomi...
Journal: Algorithms

 
Hui Wang, Youming Li, Liliang Zhang, Yexian Fan and Zhiliang Li
The self-deployment of nodes with non-uniform coverage in underwater acoustic sensor networks (UASNs) is challenging because it is difficult to access the three-dimensional underwater environment. The problem is further complicated if network connectivit...
Journal: Applied Sciences