Abstract: The requirements for ensuring functional safety have always been very high. Modern safety-related systems are becoming increasingly complex, which also makes the safety integrity assessment more complex and time-consuming. This trend is further intensified by the fact that AI-based algorithms are finding their way into safety-related systems or will do so in the future. However, existing and expected standards and regulations for the use of AI methods pose significant challenges for the development of embedded AI software in functional safety-related systems. Considering the essential requirements from various perspectives necessitates an intensive examination of the subject matter, especially as different standards have to be taken into account depending on the final application. There are also different targets for the “safe behavior” of a system depending on the target application. While stopping all movements of a machine in an industrial production plant is likely to be considered a “safe state”, the same condition might not be considered safe in a flying aircraft, a driving car, or medical equipment such as a heart pacemaker. This overall complexity is operationalized in our approach in such a way that it is straightforward to monitor conformity with the requirements. To support safety integrity assessments and reduce the required effort, a Self-Enforcing Network (SEN) model is presented in which developers or safety experts can indicate the degree of fulfillment of certain requirements with a possible impact on the safety integrity of a safety-related system. The result evaluated by the SEN model indicates the achievable safety integrity level of the assessed system and is additionally accompanied by an explanatory component.
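The abstract describes the SEN model only at a high level. As a rough, hypothetical illustration of the assessment idea, in which fulfillment scores for safety requirements are ranked against reference profiles for each safety integrity level, a minimal sketch could look as follows. The requirement names, profile values, and learning constant are invented for this example and are not taken from the paper:

```python
import numpy as np

# Hypothetical requirement criteria scored by the assessor in [0, 1];
# names and profile values are invented for illustration.
REQUIREMENTS = ["fault_detection", "redundancy", "proven_in_use",
                "ai_transparency", "verification_coverage"]

# Reference profiles per achievable safety integrity level, acting as
# a SEN-style semantic matrix (one value per requirement).
SIL_PROFILES = {
    "SIL 1": np.array([0.3, 0.2, 0.3, 0.2, 0.3]),
    "SIL 2": np.array([0.5, 0.5, 0.5, 0.4, 0.5]),
    "SIL 3": np.array([0.8, 0.8, 0.7, 0.7, 0.8]),
}

LEARNING_CONSTANT = 0.1  # assumed constant for the SEN-style weighting


def assess(fulfillment):
    """Rank SIL reference types by how closely the weighted fulfillment
    vector matches each weighted reference profile (smaller = closer)."""
    ranked = []
    for level, profile in SIL_PROFILES.items():
        weights = LEARNING_CONSTANT * profile  # weights derived from the semantic matrix
        distance = float(np.linalg.norm(weights * (fulfillment - profile)))
        ranked.append((level, distance))
    return sorted(ranked, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Degrees of fulfillment as indicated by a developer or safety expert.
    scores = np.array([0.85, 0.7, 0.6, 0.5, 0.8])
    ranking = assess(scores)
    for level, distance in ranking:
        print(f"{level}: distance {distance:.3f}")
    # Toy "explanatory component": per-requirement deviation from the
    # best-matching reference profile.
    best_level = ranking[0][0]
    for name, deviation in zip(REQUIREMENTS, scores - SIL_PROFILES[best_level]):
        print(f"  {name}: deviation {deviation:+.2f}")
```

In this toy version the explanatory component is reduced to printing per-requirement deviations from the best-matching profile; the SEN model described in the paper is considerably richer.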
Abstract: In open normative multi-agent communities, an agent is not usually given the norms of the host agents explicitly. Thus, when it is unable to adapt to the community's norms, it is entirely deprived of access to resources and services from the host. Such a circumstance severely affects its performance, resulting in failure to achieve its goal. Consequently, this study attempts to overcome this deficiency by proposing a technique that enables an agent to detect the host's potential norms via self-enforcement and to update its own norms even in the absence of sanctions from a third party. The authors call this technique the potential norms detection technique (PNDT). The PNDT consists of five components: the agent's belief base, an observation process, a potential norms mining algorithm (PNMA), a verification process, and an updating process. The authors demonstrate the operation of the PNMA by testing it on a typical scenario and analyzing the results from several perspectives. The test results show that the PNDT performs satisfactorily, although the success rate depends on the settings of the environment variables.
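The abstract does not specify the internals of the PNMA. As a purely illustrative sketch of the underlying idea, namely detecting candidate norms from observed host behavior and adopting them without third-party sanctions, a simple frequency-based variant might look like this; the thresholds, event encoding, and component interfaces are all assumptions:

```python
from collections import Counter
from dataclasses import dataclass, field

# All thresholds and the event encoding below are assumptions made for
# illustration; the paper's PNMA is not reproduced here.
MINING_THRESHOLD = 0.6     # fraction of host agents performing an action
VERIFY_ROUNDS = 2          # observation rounds a candidate must survive


@dataclass
class VisitorAgent:
    belief_base: set = field(default_factory=set)         # adopted norms
    candidates: Counter = field(default_factory=Counter)  # potential norms

    def observe(self, host_actions):
        """Observation process: record which actions each host agent performs."""
        n_agents = len(host_actions)
        counts = Counter(action
                         for actions in host_actions.values()
                         for action in set(actions))
        # Potential norms mining (frequency heuristic): an action performed
        # by enough host agents becomes a candidate norm.
        for action, count in counts.items():
            if count / n_agents >= MINING_THRESHOLD:
                self.candidates[action] += 1

    def verify_and_update(self):
        """Verification and updating: adopt candidates that recur often enough."""
        for action, seen in list(self.candidates.items()):
            if seen >= VERIFY_ROUNDS:
                self.belief_base.add(action)
                del self.candidates[action]


if __name__ == "__main__":
    visitor = VisitorAgent()
    # Two observation rounds over three host agents and their visible actions.
    visitor.observe({"h1": ["queue", "pay"], "h2": ["queue"], "h3": ["queue", "pay"]})
    visitor.observe({"h1": ["queue", "pay"], "h2": ["queue", "pay"], "h3": ["queue"]})
    visitor.verify_and_update()
    print(visitor.belief_base)  # {'queue', 'pay'} (set order may vary)
```

The sketch compresses the five PNDT components into a single class: the counter of candidates stands in for the mining step, while the adoption rule plays the roles of the verification and updating processes.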