
Introduction to Trustworthiness in Computing Systems

Foundations for AI 


Abstract: General concepts of trust in AI and autonomous systems can be derived from engineering concepts of dependable and secure computing systems. In computing systems, trust is formally defined as the dependence of one system on another, together with the acceptance that the other system is itself dependable [Avižienis et al., 2004]. This dependence can be either human/machine or machine/machine. Resilience is related to trustworthiness: it is the ability of a system to withstand instability and unexpected conditions and to return gracefully to predictable, though possibly degraded, performance. This is a system-of-systems concern, so trust must be considered both as a characteristic of an individual system or subsystem and as a set of relationships between that system and other systems, including humans. In systems engineering, trust can be categorized into a set of dependability and security attributes: the ability of a system to avoid service failures, covering the interrelated foundational attributes of availability, reliability, safety, integrity, confidentiality, and maintainability. These attributes work together to ensure the system’s successful application, and demonstration of those attributes over time engenders trust. Thus, trust is a systems engineering concept. In this talk we will explore modeling trust as a set of system attributes.
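
To make "trust as a set of system attributes" concrete, here is a minimal Python sketch. It is our illustration, not the speaker's model: the DependabilityProfile name, the scores, and the weakest-link aggregation are all assumptions made for the example.

```python
from dataclasses import dataclass, fields

@dataclass
class DependabilityProfile:
    """Scores in [0, 1] for the six foundational attributes named
    in the abstract (after Avižienis et al., 2004)."""
    availability: float
    reliability: float
    safety: float
    integrity: float
    confidentiality: float
    maintainability: float

def trust_level(profile: DependabilityProfile) -> float:
    """Weakest-link aggregation: treat the system as only as
    trustworthy as its weakest attribute. (min() is one illustrative
    choice; a real assessment would weight attributes per stakeholder
    needs and track them over time.)"""
    return min(getattr(profile, f.name) for f in fields(profile))

# Example: a subsystem strong on availability but weak on integrity.
subsystem = DependabilityProfile(
    availability=0.99, reliability=0.95, safety=0.97,
    integrity=0.60, confidentiality=0.90, maintainability=0.85,
)
print(f"trust level: {trust_level(subsystem):.2f}")  # prints 0.60
```

The point of the sketch is the data structure, not the arithmetic: once trust is expressed as named, measurable attributes, it can be analyzed and tracked like any other systems engineering property.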

Presenter: Tom McDermott is the Deputy Director and Chief Technology Officer of the Systems Engineering Research Center (SERC) at Stevens Institute of Technology in Hoboken, NJ. With the SERC he develops new research strategies and leads research on digital transformation, education, security, and artificial intelligence applications. Mr. McDermott also teaches system architecture concepts, systems thinking and decision making, and engineering leadership. He is a lecturer for Stevens as well as Georgia Tech and Agnes Scott College, both in Atlanta, GA. He provides executive-level consulting as a futurist and organizational strategy expert, applying systems approaches to enterprise planning. He currently serves on the INCOSE Board of Directors as Director of Strategic Integration.

 

The Confiance.AI Programme

Abstract: Confiance.AI is a French national programme launched within the framework of the French National Grand Challenge “Security, dependability and certification of AI-based systems” managed by the French Innovation Council.  

 

Confiance.AI is the largest technological research programme of the #IAforHumanity plan. The programme brings together a unique community of actors across industry, start-ups, education, and research to design and industrialize trustworthy AI-based critical systems, with the final goal of ensuring the competitiveness and sovereignty of French industry.

 

The main objective of the programme is to design and deliver a “trustworthy environment” for engineering industrial AI-based products and systems. The environment relies on methodological and technological building blocks that complement the existing development environments of the programme’s industrial partners. It supports the design, verification, validation, qualification, and deployment of AI-based systems in an industrial context and at large scale. Industrial sectors participating in the programme include aerospace, automotive, maritime, energy, IT, defence, security, and manufacturing.

In this webinar, we will present the scope, organisation and expected outcomes of the Confiance.AI programme, with a focus on the systems approach that has been defined within the programme. 

 

Presenters: Guillermo CHALÉ GÓNGORA is Product Line & Systems Engineering Director at Thales Corporate Engineering. He has over 20 years of experience in Systems Engineering and Product Line Engineering in the Energy, Infrastructure, Automotive, Railway, Aerospace, and Defence sectors, where he has worked on the tailoring and application of SE, MBSE, and PLE to the development of complex systems and products. Over the years, he has been particularly interested in systems thinking & critical thinking, safety-critical systems, formal methods, architecture description languages, modelling & simulation, and autonomous systems. He holds a PhD in Energy & Thermal Systems, master’s degrees in Energy Conversion and Internal Combustion Engines, and an engineering degree in Mechanical-Electrical Engineering. Guillermo is founder and former Chair of the PLE International WG and the Automotive WG of INCOSE, and a member of the INCOSE Transportation WG and the MBSE Initiative.

 

AI Explainability and AI in Sci-Fi

May 4, 2022. A tour of explainable AI, a very hot topic in AI these days. After that, in honor of Star Wars Day, we’ll take a look at how AI appears in Star Wars and Star Trek, and see how AI is depicted in the science-fiction future.


A System Engineer’s Guide to Explainable AI

Abstract: System Engineers (SEs) are faced with incorporating Artificial Intelligence and Machine Learning (AI/ML) based solutions into modern systems to meet complex technological and societal needs. It is imperative for SEs to characterize the behavior of AI/ML-based components, which often appear as black boxes even to the component designers. This talk will introduce key concepts of Explainable AI (XAI) that create a window into the black-box nature of AI/ML-based components. We will debunk some common myths about explainability and discuss how SEs can utilize XAI for the test and evaluation of systems with AI/ML components.
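
As a concrete taste of what “opening the black box” can look like, here is a minimal sketch of one common model-agnostic explanation technique, permutation feature importance, using scikit-learn. The synthetic dataset and the random-forest stand-in for the black box are our own illustrative assumptions, not necessarily the methods covered in the talk.

```python
# Probe a black-box model by shuffling one input feature at a time on
# held-out data and measuring how much performance degrades: a large
# drop means the model leans heavily on that feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a component's input/output data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": any fitted model with a predict() method works here.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts accuracy.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Because the technique needs only the model’s predictions, it applies to any black-box component, which is what makes this family of methods attractive for test and evaluation.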

Presenter: Dr. Ali Raz (CSEP) is an Assistant Professor of Systems Engineering and an Assistant Director of Intelligent Systems and Integration at the George Mason University C4I and Cyber Center. Dr. Raz’s research interests are in integrating autonomous systems; his work brings together a system-of-systems perspective with artificial intelligence and information fusion. Dr. Raz is the current co-chair of INCOSE’s Artificial Intelligence Working Group and a co-chair of the Complex Systems Working Group. He holds a BSc and an MSc in Electrical Engineering from Iowa State University and a doctorate in Aeronautics and Astronautics from Purdue University.

Everything I Know about Artificial Intelligence I Learned from the Movies

Abstract: Much of what most people “know” about artificial intelligence traces its source to science fiction mythology. Science fiction representations of robots, created minds, and intelligent machines span a wide range of perspectives. Speculative books may describe possible futures of AI, but movies and TV series can show what a world would be like if these scenarios became reality. Star Wars and Star Trek show nearly polar opposites in how AI turns out in a highly advanced human (and non-human) civilization.

Presenter: Dr. Barclay R. Brown, ESEP, is Associate Director for Research in AI at Collins Aerospace, a division of Raytheon Technologies. Before joining Collins, he was an Engineering Fellow at Raytheon Missiles and Defense, focusing on MBSE, and prior to that he was the Global Solution Executive for the Aerospace and Defense Industry at IBM. Dr. Brown holds a bachelor’s degree in Electrical Engineering, master’s degrees in Psychology and Business, and a PhD in Industrial and Systems Engineering. He has taught systems engineering and systems thinking at several universities, and he is a certified Expert Systems Engineering Professional (ESEP), a certified Systems Engineering Quality Manager, and CIO of INCOSE for 2021-2023. He chairs the INCOSE AI Systems Working Group.