Abstract
Modern IT solutions, such as multi-agent systems, require mechanisms that introduce certain social elements to improve communication. Trust and reputation models are one such mechanism: they bring a very important aspect of human relations, namely trust, into interactions between autonomous software agents. Currently proposed models usually fail to take into account the openness of present-day systems or the mobility of agents, which allows them to move across systems. In the authors' view, agents from the same system should be evaluated differently than agents from another multi-agent system. The concept of a trust model proposed in this paper takes the above-mentioned factors into account and enables a simple evaluation of other agents depending on the system they come from and the action they are designed to perform.