Abstract
Objects that possess mass (e.g., automobiles, manufactured items, etc.) translationally accelerate in direct proportion to the applied force scaled by the object's mass, in accordance with Newton's law, while the rotational companion is Euler's moment equations relating angular acceleration to applied moments for objects possessing mass moments of inertia. Michel Chasles's theorem allows us to simply invoke Newton's and Euler's equations to fully describe the six degrees of freedom of mechanical motion. Many options are available to control the motion of objects by controlling the applied force and moment. A long, distinguished list of references has matured the field of controlling mechanical motion, culminating in the burgeoning field of deterministic artificial intelligence as a natural progression of the laudable goal of adaptive and/or model predictive controllers that can be proven to be optimal subsequent to their development. Deterministic A.I. uses Chasles's claim to assert Newton's and Euler's relations as deterministic self-awareness statements that are optimal with respect to state errors. Predictive controllers (both continuous and sampled-data) derived from the outset to be optimal, by first solving an optimization problem with the governing dynamic equations of motion, lead to several controllers (including a controller that twice invokes optimization to formulate robust, predictive control). These controllers are compared to each other in the presence of noise and modeling errors, using several figures of merit: tracking-error and rate-error means and deviations, in addition to total mean cost. Robustness is evaluated using Monte Carlo analysis in which plant parameters are randomly assumed to be incorrectly modeled. Six instances of controllers are compared using these methods, and the interpretations allow engineers to select a control tailored to their given circumstances. A novel version of the ubiquitous classical proportional-derivative ("PD") controller is developed from the optimization statement at the outset by using a novel re-parameterization of the optimal results from time to state. Furthermore, time-optimal controllers, continuous predictive controllers, sampled-data predictive controllers, combined feedforward plus feedback controllers, and two-degree-of-freedom (2DOF) controllers are developed and compared. The term "feedforward" is used in this study in the context of deterministic artificial intelligence, where analytic self-awareness statements are strictly determined by the governing physics (of mechanics in this case, e.g., Chasles, Newton, and Euler). When feedforward is combined with feedback derived by the previously mentioned method (with provenance foremost in optimization), the combination is referred to as "2DOF," or two degrees of freedom, to indicate the twofold invocation of optimization at the genesis of the feedforward and the feedback, respectively. The feedforward plus feedback case is augmented by an online (real-time) comparison to the optimal case. This manuscript compares these many optional control strategies against each other. Nominal plants are used, but the addition of plant noise reveals the robustness of each controller, even without optimally rejecting assumed-Gaussian noise (e.g., via the Kalman filter). In other words, noise terms are intentionally left unaddressed in the problem formulation to evaluate the robustness of the proposed methods when real-world noise is added.
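For reference, the translational and rotational relations invoked above are Newton's second law and Euler's moment equations, which Chasles's theorem combines to describe all six degrees of freedom of rigid-body motion; the notation below is generic and is not tied to the manuscript's own symbols:

\[
\vec{F} = m\,\ddot{\vec{r}}, \qquad
\vec{M} = J\,\dot{\vec{\omega}} + \vec{\omega} \times \left( J\,\vec{\omega} \right)
\]

where \(m\) is the mass, \(\ddot{\vec{r}}\) the translational acceleration, \(J\) the mass moment of inertia matrix, and \(\vec{\omega}\) the angular velocity.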
Lastly, mismodeled plants controlled by each strategy reveal relative performance. As anticipated, the lowest cost is achieved by the optimal controller (with very poor robustness), while low mean errors and deviations are achieved by the classical controllers (at the highest cost). Both continuous predictive control and sampled-data predictive control perform well in terms of both cost and errors and deviations, while the 2DOF controller performs best overall.
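As a minimal, hedged illustration of the Monte Carlo robustness evaluation described above (not the manuscript's actual models, gains, or cost functional), the following Python sketch randomly mismodels the inertia of a hypothetical single-axis rotational plant, simulates an illustrative PD feedback law against each perturbed plant with additive noise, and records mean tracking error, error and rate deviations, and a simple control-effort cost.

```python
import numpy as np

# Hypothetical single-axis rotational plant: J * theta_ddot = u + noise.
# The controller is designed for a nominal inertia J_NOM, while each Monte
# Carlo trial draws a "true" inertia that is randomly mismodeled.
J_NOM = 1.0          # nominal inertia assumed by the controller (illustrative)
KP, KD = 25.0, 10.0  # illustrative PD gains, not the paper's tuned values
DT, T_END = 0.001, 10.0
RNG = np.random.default_rng(0)

def simulate(j_true, noise_std=0.01):
    """Simulate a unit-step attitude command under PD control; return figures of merit."""
    t = np.arange(0.0, T_END, DT)
    theta, omega = 0.0, 0.0
    err_hist, rate_hist, cost = [], [], 0.0
    for _ in t:
        err = 1.0 - theta                  # unit step command
        u = KP * err - KD * omega          # classical PD feedback
        cost += u**2 * DT                  # simple control-effort cost
        # propagate the mismodeled, noisy plant with simple Euler integration
        alpha = (u + RNG.normal(0.0, noise_std)) / j_true
        omega += alpha * DT
        theta += omega * DT
        err_hist.append(err)
        rate_hist.append(omega)
    return np.mean(err_hist), np.std(err_hist), np.std(rate_hist), cost

# Monte Carlo robustness study: the true inertia deviates up to +/-50% from nominal.
results = np.array([simulate(J_NOM * RNG.uniform(0.5, 1.5)) for _ in range(100)])
print("mean tracking error:       %.4f" % results[:, 0].mean())
print("tracking error deviation:  %.4f" % results[:, 1].mean())
print("rate deviation:            %.4f" % results[:, 2].mean())
print("mean cost:                 %.4f" % results[:, 3].mean())
```

The same loop structure applies to any of the compared controllers: only the control law inside `simulate` changes, while the perturbation of plant parameters and the recorded figures of merit stay fixed so the comparison remains like-for-like.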