Journal Articles
2 articles found
A requirements model for AI algorithms in functional safety-critical systems with an explainable self-enforcing network from a developer perspective
Authors: Christina Klüver, Anneliesa Greisbach, Michael Kindermann, Bernd Püttmann. Security and Safety, 2024, Issue 4, pp. 61–85 (25 pages)
The requirements for ensuring functional safety have always been very high. Modern safety-related systems are becoming increasingly complex, which also makes the safety integrity assessment more complex and time-consuming. This trend is further intensified by the fact that AI-based algorithms are finding their way into safety-related systems, or will do so in the future. However, existing and expected standards and regulations for the use of AI methods pose significant challenges for the development of embedded AI software in functional safety-related systems. The consideration of essential requirements from various perspectives necessitates an intensive examination of the subject matter, especially as different standards have to be taken into account depending on the final application. There are also different targets for the "safe behavior" of a system depending on the target application. While stopping all movements of a machine in an industrial production plant is likely to be considered a "safe state", the same condition might not be considered safe in a flying aircraft, a driving car, or medical equipment such as a heart pacemaker. This overall complexity is operationalized in our approach in such a way that it is straightforward to monitor conformity with the requirements. To support safety integrity assessments and reduce the required effort, a Self-Enforcing Network (SEN) model is presented in which developers or safety experts can indicate the degree of fulfillment of certain requirements with a possible impact on the safety integrity of a safety-related system. The result evaluated by the SEN model indicates the achievable safety integrity level of the assessed system and is additionally provided with an explanatory component.
Keywords: Functional safety; Safety-critical systems; Requirements for AI methods; Explainable self-enforcing networks (SEN)
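The assessment workflow the abstract describes, in which experts rate the degree of fulfillment of individual requirements and the model aggregates these ratings into an achievable safety integrity level, can be illustrated with a minimal sketch. Note that a plain weighted-score aggregation stands in here for the actual SEN model, and the requirement names, weights, and SIL thresholds below are hypothetical, not taken from the paper:

```python
# Illustrative sketch only: aggregate expert ratings of requirement
# fulfillment (0.0-1.0) into an achievable safety integrity level.
# Requirement names, weights, and thresholds are hypothetical; the
# paper's SEN model is a neural-network approach, not this formula.

REQUIREMENTS = {
    # hypothetical requirement id: relative weight (sums to 1.0)
    "deterministic_behavior": 0.30,
    "explainability": 0.25,
    "verification_coverage": 0.25,
    "data_quality": 0.20,
}

def achievable_sil(ratings: dict) -> int:
    """Map a weighted degree of fulfillment to a SIL (0 = none)."""
    score = sum(w * ratings.get(req, 0.0) for req, w in REQUIREMENTS.items())
    # Hypothetical thresholds: higher aggregate fulfillment -> higher SIL.
    for sil, threshold in ((3, 0.90), (2, 0.75), (1, 0.50)):
        if score >= threshold:
            return sil
    return 0

ratings = {
    "deterministic_behavior": 0.9,
    "explainability": 0.8,
    "verification_coverage": 0.7,
    "data_quality": 0.6,
}
print(achievable_sil(ratings))  # weighted score 0.765 -> SIL 2
```

The sketch mirrors only the input/output shape of the assessment (per-requirement ratings in, achievable integrity level out, where the paper additionally attaches an explanatory component to the result).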
Development and Implementation of a Technique for Norms-Adaptable Agents in Open Multi-Agent Communities (Cited by: 1)
Authors: MAHMOUD Moamin, AHMAD Mohd Sharifuddin, MOHD YUSOFF Mohd Zaliman. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2016, Issue 6, pp. 1519–1537 (19 pages)
In open normative multi-agent communities, an agent is not usually and explicitly given the norms of the host agents. Thus, when it is not able to adopt the community's norms, it is totally deprived of access to resources and services from the host. Such circumstances severely affect its performance, resulting in failure to achieve its goal. Consequently, this study attempts to overcome this deficiency by proposing a technique that enables an agent to detect the host's potential norms via self-enforcement and to update its norms even in the absence of sanctions from a third party. The authors call this technique the potential norms detection technique (PNDT). The PNDT consists of five components: the agent's belief base, an observation process, a potential norms mining algorithm (PNMA), a verification process, and an updating process. The authors demonstrate the operation of the PNMA by testing it on a typical scenario and analyzing the results from several perspectives. The test results show that the PNDT performs satisfactorily, although the success rate depends on the settings of the environment variables.
Keywords: Agent-based simulation; normative agent; norms detection; self-enforcement; agent
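The detection step the abstract describes, where an agent observes the host agents' behavior and infers potential norms without being told them or sanctioned, can be sketched as a simple frequency-based miner. This is not the paper's PNMA; the "norm if performed by most observed agents" criterion, the threshold, and the observation format are all assumptions for illustration:

```python
# Illustrative sketch only: infer potential norms as actions performed
# by at least `threshold` of the observed host agents. The criterion,
# threshold, and (agent, action) observation format are assumptions,
# not the PNMA defined in the paper.

def mine_potential_norms(observations, threshold=0.8):
    """Return the set of actions adopted widely enough to be
    candidate norms, given (agent_id, action) observations."""
    agents = {agent for agent, _ in observations}
    performed_by = {}  # action -> set of agents seen performing it
    for agent, action in observations:
        performed_by.setdefault(action, set()).add(agent)
    return {action for action, who in performed_by.items()
            if len(who) / len(agents) >= threshold}

observations = [
    ("a1", "greet"), ("a1", "queue"),
    ("a2", "greet"), ("a2", "queue"),
    ("a3", "greet"),
]
# "greet" is performed by 3/3 agents, "queue" only by 2/3.
print(sorted(mine_potential_norms(observations)))
```

In the PNDT pipeline such mined candidates would still pass through the verification and belief-base updating components before the agent acts on them.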