Abstract
Finding roots of equations lies at the heart of much of computational science. A well-known and widely used iterative algorithm is Newton's method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton method. The key idea is to apply a relatively simple multiplicative transform to the original equations, which reduces their nonlinearity and thereby alleviates this limitation of Newton's method. Based on this idea, we derive a new class of iterative methods and rediscover Halley's method as the limiting case. We apply these methods to several mathematical test functions, covering real, complex, and vector equations. Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector equations, more extensive analysis is needed to weigh the increase in cost per iteration against the improvement in convergence on specific problems.
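The abstract does not specify the multiplicative transform itself, so the following minimal Python sketch (illustrative, not from the paper; the helper names `newton` and `halley` are ours) only contrasts the two classical baselines named above. It uses f(x) = arctan(x), a standard example where Newton's iteration is known to diverge for initial guesses |x0| ≳ 1.392, while Halley's method, x ← x − 2ff′/(2f′² − ff″), still converges from much farther away.

```python
import math

def newton(f, fp, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x <- x - f(x)/f'(x).
    Returns the root, or None if the iterates run away."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fp(x)
        if abs(x) > 1e12:  # crude divergence cutoff for this demo
            return None
    return None

def halley(f, fp, fpp, x0, tol=1e-12, max_iter=50):
    """Halley iteration: x <- x - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d1, d2 = fp(x), fpp(x)
        x -= 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)
        if abs(x) > 1e12:
            return None
    return None

# f(x) = arctan(x) has its only root at x = 0.
f   = math.atan
fp  = lambda x: 1.0 / (1.0 + x * x)
fpp = lambda x: -2.0 * x / (1.0 + x * x) ** 2

print("Newton from x0 = 2:", newton(f, fp, 2.0))        # None (diverges)
print("Halley from x0 = 2:", halley(f, fp, fpp, 2.0))   # ~0.0 (converges)
```

The magnitude cutoff of 1e12 is only a heuristic for flagging divergence in this demo; a production root finder would report iteration history and residuals instead.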