WAISE 2019: Second International Workshop on Artificial Intelligence Safety Engineering

Turku, Finland
September 10, 2019

Participation: In person

Submission deadline: May 13, 2019

Proceedings indexed by: Springer

Organizers: WAISE Committees

Email: [email protected]


Research, engineering and regulatory frameworks are needed to achieve the full potential of Artificial Intelligence (AI): they will guarantee a standard level of safety and settle issues such as compliance with ethical standards and liability for accidents involving, for example, autonomous cars. Designing AI-based systems to operate in proximity to, or in collaboration with, humans implies that current safety engineering and legal mechanisms must be revisited to ensure that individuals and their property are not harmed, and that the desired benefits outweigh the potential unintended consequences.

Approaches to AI safety range from the purely theoretical (moral philosophy and ethics) to the purely practical (engineering). Progress in developing safe AI-based systems requires combining philosophy and theoretical science with applied science and engineering. This calls for an interdisciplinary approach covering the technical (engineering) aspects of how to create, test, deploy, operate and evolve safe AI-based systems, as well as broader strategic, ethical and policy issues.

Increasing levels of AI in “smart” sensory-motor loops allow intelligent systems to operate in increasingly dynamic, uncertain and complex environments with growing degrees of autonomy, with humans progressively removed from the control loop. Adaptation to the environment is achieved by Machine Learning (ML) methods rather than by more traditional engineering approaches such as system modelling and programming. Certain ML methods, such as deep learning, reinforcement learning and their combination, have recently proved very promising. However, the inscrutability, or opaqueness, of the statistical models for perception and decision-making built with these methods poses yet another challenge. Moreover, the combination of autonomy and inscrutability in AI-based systems is particularly challenging in safety-critical applications such as autonomous vehicles, personal care or assistive robots, and collaborative industrial robots.

WAISE will explore new ideas on safety engineering for AI-based systems, ethically aligned design, regulation and standards for AI-based systems. In particular, WAISE will provide a forum for thematic presentations and in-depth discussions about safe AI architectures, ML safety, safe human-machine interaction, bounded morality and safety considerations in automated decision-making systems, in a way that makes AI-based systems more trustworthy, accountable and ethically aligned.

WAISE aims to bring together experts, researchers and practitioners from diverse communities such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems, and safety-critical systems, as well as application-domain communities such as automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures, and retail.
