
Artificial Intelligence Systems


NEXT AI Explorer Event:

Co-design of Trustworthy AI and Systems
Responsible Artificial Intelligence

Wednesday, July 27, 2022

10:30 AM
US Eastern Time

Register Now

AI Explorer events feature two brief (TED-style) talks on key Artificial Intelligence
topics and allow time for questions and discussion by all participants.



INCOSE Members Only:
Replays are available on the

AI Systems Working Group Intranet Page


Co-design of Trustworthy AI and Systems

Dr. Zoe Szajnfarber

Professor and Chair of the Department of Engineering Management
and Systems Engineering, George Washington University

Abstract: The Designing Trustworthy AI Systems (DTAIS) program is built around a core tension: ubiquitous AI offers the opportunity to transform work for social good, yet emergent risks around bias, security, and privacy arise as AI tools are increasingly embedded in core routines and value-generating institutional functions. This presentation will provide an overview of DTAIS research on trust and system architecture, focusing on two in-progress studies. First, in the context of image classification, we explore how a typical focus on accuracy scores ("this classifier is 89% accurate") can hide critical differences in the distribution of classification errors (when it is wrong, how wrong is it?). We relate differences in error distribution to the formation of trust among end users. Second, in the context of human-in-the-loop and human-on-the-loop AI systems, we examine differences between perceived and revealed human control over the systems, and discuss the implications of these differences for policy.
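The abstract's point about accuracy hiding error distributions can be sketched with a toy example (invented here for illustration; it is not the DTAIS study's data or method). Two classifiers over ordinal severity classes share the same accuracy, yet differ sharply in how wrong their mistakes are:

```python
# Toy sketch: identical accuracy, very different error distributions.
# Labels are ordinal severity classes 0-4, so "how wrong" an error is
# can be measured by the distance between true and predicted class.
true_labels = [0, 1, 2, 3, 4] * 20  # 100 samples

# Classifier A: wrong on every 10th sample, but only off by one class.
preds_a = [t if i % 10 else t + 1 for i, t in enumerate(true_labels)]
# Classifier B: wrong on the same samples, but off by the maximum distance.
preds_b = [t if i % 10 else 4 - t for i, t in enumerate(true_labels)]

def accuracy(y, p):
    """Fraction of exactly correct predictions."""
    return sum(t == q for t, q in zip(y, p)) / len(y)

def mean_error_magnitude(y, p):
    """Average |true - predicted| over the misclassified samples only."""
    errs = [abs(t - q) for t, q in zip(y, p) if t != q]
    return sum(errs) / len(errs)

print(accuracy(true_labels, preds_a), accuracy(true_labels, preds_b))  # 0.9 0.9
print(mean_error_magnitude(true_labels, preds_a))  # 1.0: off by one class
print(mean_error_magnitude(true_labels, preds_b))  # 4.0: off by four classes
```

Both classifiers report "90% accurate," but an end user would plausibly trust A far more than B, which is the kind of distinction an accuracy score alone cannot surface.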

Presenter: Dr. Zoe Szajnfarber is Professor and Chair of the Department of Engineering Management and Systems Engineering at the George Washington University. She is also the PI and co-Director of the NSF-funded Designing Trustworthy AI Systems (DTAIS) traineeship program. Dr. Szajnfarber’s research focuses on the design and development of complex systems and the organizations that create them. She holds a bachelor’s degree in Engineering Science from the University of Toronto, dual master’s degrees in Aeronautics & Astronautics and Technology & Policy from MIT, and a PhD in Engineering Systems, also from MIT.

Responsible Artificial Intelligence

Dr. Anjana Susarla
Omura-Saxena Endowed Professor of Responsible AI
Eli Broad College of Business
Michigan State University

Abstract: The talk will discuss algorithmic bias, fairness metrics, and bias-mitigation practices, and will end with a discussion of algorithmic harms as a lens for evaluating fairness metrics. Digital health literacy will be examined as a specific example. Studies suggest that one in three US adults uses the Internet to diagnose or learn about a health concern. However, such access to health information online could exacerbate disparities in health information availability and use. Health information seeking behavior (HISB) refers to the ways in which individuals seek information about their health, risks, illnesses, and health-protective behaviors. For patients searching for health information on digital media platforms, health literacy divides can be exacerbated both by their own lack of knowledge and by algorithmic recommendations, with results that disproportionately impact disadvantaged populations, minorities, and low-health-literacy users. I will report on an exploratory investigation of these challenges, examining whether responsible and representative recommendations can be generated by applying advanced analytic methods to a large corpus of videos and their metadata on a chronic condition (diabetes) from the YouTube social media platform.
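As a concrete illustration of the fairness metrics the abstract mentions, one widely used measure is the statistical (demographic) parity difference: the gap between the rates at which two groups receive a positive outcome. The sketch below uses invented data and a hypothetical recommendation scenario; it shows only one of the several metrics the talk surveys:

```python
# Hypothetical illustration of one fairness metric: statistical parity
# difference. A positive outcome here might be "shown a high-quality
# health video recommendation"; the outcome lists are invented.
def positive_rate(outcomes):
    """Fraction of positive outcomes (1s) in a group's outcome list."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 1, 0, 1, 0, 1, 1]  # e.g., high-health-literacy users: 6/8
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # e.g., low-health-literacy users:  2/8

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(parity_gap)  # 0.5 -> a large gap; 0.0 would indicate demographic parity
```

A gap of zero indicates demographic parity on this metric; evaluating which metric is appropriate, and what harms a given gap implies, is precisely the judgment question the talk addresses.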

Presenter: Anjana Susarla is the Omura-Saxena Endowed Professor of Responsible AI at the Eli Broad College of Business, Michigan State University. She earned an undergraduate degree in Mechanical Engineering from the Indian Institute of Technology, a graduate degree in Business Administration from the Indian Institute of Management and a Ph.D. in Information Systems from the University of Texas at Austin. Her work has appeared in several academic journals and peer-reviewed conferences. She has published op-eds and her work has been quoted in several media outlets.

Mission & Objectives

The general goals of the AI Systems WG are to identify needs of the international AI community (industry, academia, government) that are well suited for contributions by INCOSE, and to provide expertise across SE functions and lifecycle management that industry can use to promote the development of AI systems. The specific goals are to: 1) identify and communicate emerging AI technologies that can be applied to the engineering of systems (AI for SE), including AI that pertains to Industries of the Future, and 2) develop and communicate advances in SE methods needed to effectively engineer systems with embedded AI (SE for AI). To meet these goals, the following research objectives have been identified:

  • Explore Human-AI collaboration

  • Evaluate Safety and Security of AI systems

  • Measure and evaluate AI technologies through standards and benchmarks

  • Understand and promote workforce development and STEM initiatives

  • Contribute to public-private partnerships and affiliations to accelerate advances  

  • Establish best practices for using AI techniques in Systems and Systems Engineering


Working Group Chairs

Barclay R. Brown, Ph.D., ESEP, Associate Director of AI Research, Collins Aerospace

Ali Raz, George Mason University

Tom McDermott, Stevens Institute of Technology

Interested? Please contact the chairs to learn how to get involved!

Working Group Products

SE & AI Primer

Planned Working Sessions at the Next Events

Planned Presentations at the Next Events