
Large Language Models and Book Index Generation - Nov 9, 2022

Ad-hoc Data Exploration using Conversational AI and Human-AI collaborative knowledge discovery in tradespace exploration

Sep 14, 2022, 10:30 am Eastern

 

Ad-hoc Data Exploration using Conversational AI

Anand Ranganathan, Chief AI Officer at Unscrambl


Conversational AI is increasingly used for customer-support and employee-support use cases. In this session, I'm going to talk about how it can be extended to data analysis and data science use cases, i.e., how users can interact with a bot to ask ad-hoc analytical questions about data in relational databases. This lets people explore complex datasets by asking questions in natural language, by text or voice, and get back results as a combination of natural language and visualizations. For example, they can ask questions like "How many cases of Covid were there in the last 2 months by state and gender?" or "Why did the number of deaths from Covid increase in May 2022?" and get back results as a combination of text and charts.
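To make the pattern concrete, here is a minimal, hypothetical sketch of the question-to-answer loop the talk describes: a natural-language question is translated into SQL and run against a relational database. The `llm_to_sql` function and the schema are invented stand-ins (the translator is hard-coded so the sketch runs without a model); this is not Unscrambl's implementation.

```python
# Minimal sketch of conversational data exploration: NL question -> SQL -> results.
# `llm_to_sql` is a hypothetical placeholder for an NL-to-SQL model.
import sqlite3

SCHEMA = """
CREATE TABLE covid_cases (
    case_date TEXT,   -- ISO date of the reported case
    state     TEXT,   -- reporting state
    gender    TEXT    -- reported gender
);
"""

def llm_to_sql(question: str, schema: str) -> str:
    """Stand-in for a model that translates a question into SQL over `schema`.
    Hard-coded here so the sketch runs without calling a real model."""
    return """
        SELECT state, gender, COUNT(*) AS cases
        FROM covid_cases
        WHERE case_date >= date('now', '-2 months')
        GROUP BY state, gender;
    """

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute("INSERT INTO covid_cases VALUES (date('now', '-10 days'), 'NY', 'F')")

question = "How many cases of Covid were there in the last 2 months by state and gender?"
for row in conn.execute(llm_to_sql(question, SCHEMA)):
    print(row)  # ('NY', 'F', 1)
```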

Presenter Bio

Anand Ranganathan is a co-founder and the Chief AI Officer at Unscrambl, where he leads product development in natural language processing, automated insights, and data storytelling. Before joining Unscrambl, he was a Master Inventor and Research Scientist at IBM. He received his PhD in Computer Science from UIUC and his BTech from IIT Madras. He has over 70 academic journal and conference publications and 30 patent filings to his name.

Human-AI collaborative knowledge discovery in tradespace exploration

Dani Selva, Assistant Professor of Aerospace Engineering at Texas A&M University

Tradespace exploration is a method used in the early design of complex systems to explore a wide range of design alternatives across conflicting figures of merit in a rigorous and systematic way. AI tools have been used in tradespace exploration for decades, primarily to improve efficiency in the evaluation and search of the design space. However, what engineers need from AI is more than better design optimization and search capabilities to find the best possible designs according to the models. Engineers need to learn what is driving those results, why they are getting them, and under what circumstances they remain valid, so they can justify their design decisions to stakeholders. In summary, the goal is insight, a rather subjective and ill-defined construct that has been studied in the visual and data analytics community for decades. In this talk, I advocate for the use of AI agents to help engineers explore the large and complex datasets resulting from tradespace exploration in the context of their own knowledge of and expertise in the problem. I will summarize a few studies we have done in my lab to better understand what humans learn in design space exploration, how AI assistants can maximize that learning, and the subtle relationship between how much engineers learn about a problem and more traditional outcomes of tradespace exploration such as the quantity, quality, and diversity of designs explored.
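For readers unfamiliar with the mechanics, the computational core of tradespace exploration is a non-dominated (Pareto) filter over designs scored on conflicting figures of merit. The sketch below is a generic illustration with invented numbers, not the SEAK Lab's tooling:

```python
# Filter a set of evaluated designs down to the Pareto front:
# minimize cost, maximize performance (both figures of merit are invented).

def dominates(a, b):
    """a dominates b: no worse on both objectives, strictly better on one."""
    cost_a, perf_a = a
    cost_b, perf_b = b
    return cost_a <= cost_b and perf_a >= perf_b and (cost_a < cost_b or perf_a > perf_b)

def pareto_front(designs):
    """Keep only designs that no other design dominates."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

designs = [(100, 0.70), (120, 0.90), (150, 0.85), (90, 0.60)]
print(pareto_front(designs))  # [(100, 0.7), (120, 0.9), (90, 0.6)]
```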

Presenter Bio

Daniel "Dani" Selva is an Assistant Professor of Aerospace Engineering at Texas A&M University, where he directs the Systems Engineering, Architecture, and Knowledge (SEAK) Lab. His research interests focus on the application of knowledge engineering, global optimization and machine learning techniques to systems engineering and design, with a strong focus on space systems. Dani has a dual background in electrical engineering and aerospace engineering, with degrees from MIT, Universitat Politecnica de Catalunya in Barcelona, Spain, and Supaero in Toulouse, France. Before his PhD, Dani worked for four years in Kourou (French Guiana) as an avionics specialist within the Ariane 5 Launch team, where he successfully launched 21 Ariane 5 rockets to space. He is a Senior Member of AIAA and IEEE, and the Secretary of the AIAA Intelligent Systems Technical Committee.

Co-design of Trustworthy AI and Systems and Responsible Artificial Intelligence


Co-design of Trustworthy AI and Systems

Dr. Zoe Szajnfarber

Professor and Chair of the Department of Engineering Management and Systems Engineering, George Washington University

Abstract: The Designing Trustworthy AI Systems (DTAIS) program is built around a core tension between the opportunity for ubiquitous AI to transform work for social good and the emergent risks around bias, security, and privacy that arise as AI tools are increasingly embedded in core routines and value-generating institutional functions. This presentation will provide an overview of DTAIS research on trust and system architecture, focusing on two in-progress studies. First, in the context of image classification, we explore how a typical focus on accuracy scores ("this classifier is 89% accurate") can hide critical differences in the distribution of classification errors (when it's wrong, how wrong is it?). We relate differences in error distribution to the formation of trust among end users. Second, in the context of human-in/on-the-loop AI systems, we examine perceived and revealed differences in human control over the systems, and discuss the implications of these differences for policy.
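The accuracy-versus-error-distribution point is easy to demonstrate with synthetic numbers. In the toy sketch below (invented data, not from the DTAIS studies), two classifiers on an ordinal label are equally accurate, but one misses by one severity class while the other misses by four:

```python
# Two classifiers with identical accuracy but very different error distributions.
# Labels are ordinal (severity 0-4), so "how wrong" is the distance from truth.
import numpy as np

truth = np.zeros(100, dtype=int)          # synthetic ground truth, all class 0
pred_a = truth.copy(); pred_a[:11] = 1    # 11 errors, each off by one class
pred_b = truth.copy(); pred_b[:11] = 4    # 11 errors, each off by four classes

for name, pred in [("A", pred_a), ("B", pred_b)]:
    accuracy = (pred == truth).mean()
    miss = np.abs(pred - truth)[pred != truth]
    print(f"classifier {name}: accuracy={accuracy:.2f}, mean miss distance={miss.mean():.1f}")
# Both report accuracy=0.89; only the error distribution separates them.
```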

Presenter: Dr. Zoe Szajnfarber is Professor and Chair of the Department of Engineering Management and Systems Engineering at the George Washington University. She is also the PI and co-Director of the NSF-funded Designing Trustworthy AI Systems (DTAIS) traineeship program. Dr. Szajnfarber's research focuses on the design and development of complex systems, and the organizations that create them. She holds a bachelor's degree in Engineering Science from the University of Toronto, dual master's degrees in Aeronautics & Astronautics and Technology & Policy from MIT, and a PhD in Engineering Systems, also from MIT.

 

Responsible Artificial Intelligence

Dr. Anjana Susarla
Omura-Saxena Endowed Professor of Responsible AI
Eli Broad College of Business
Michigan State University

Abstract: The talk will discuss algorithmic bias, fairness metrics, and bias-mitigation practices, ending with a discussion of how algorithmic harms can inform the evaluation of fairness metrics. The specific example of digital health literacy will be examined. Studies suggest that one in three US adults use the Internet to diagnose or learn about a health concern. However, such access to health information online could exacerbate disparities in health information availability and use. Health information seeking behavior (HISB) refers to the ways in which individuals seek information about their health, risks, illnesses, and health-protective behaviors. For patients searching for health information on digital media platforms, health literacy divides can be exacerbated both by their own lack of knowledge and by algorithmic recommendations, with results that disproportionately impact disadvantaged populations, minorities, and low-health-literacy users. I will report on an exploratory investigation of these challenges, examining whether responsible and representative recommendations can be generated by applying advanced analytic methods to a large corpus of videos and their metadata on a chronic condition (diabetes) from the YouTube social media platform.
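As one concrete example of the fairness metrics the talk covers, demographic parity compares positive-outcome rates across groups. The sketch below is a generic illustration on invented data, not a result from the study described above:

```python
# Demographic parity difference: the gap in positive-outcome rates between
# two groups defined by a protected attribute. Data is synthetic.
import numpy as np

group = np.array([0] * 50 + [1] * 50)                              # protected attribute
recommended = np.array([1] * 30 + [0] * 20 + [1] * 15 + [0] * 35)  # model decisions

rate_g0 = recommended[group == 0].mean()   # 0.60
rate_g1 = recommended[group == 1].mean()   # 0.30
print("demographic parity difference:", rate_g0 - rate_g1)  # 0.30
```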

Presenter: Anjana Susarla is the Omura-Saxena Endowed Professor of Responsible AI at the Eli Broad College of Business, Michigan State University. She earned an undergraduate degree in Mechanical Engineering from the Indian Institute of Technology, a graduate degree in Business Administration from the Indian Institute of Management and a Ph.D. in Information Systems from the University of Texas at Austin. Her work has appeared in several academic journals and peer-reviewed conferences. She has published op-eds and her work has been quoted in several media outlets.

Introduction to Trustworthiness in Computing Systems

Foundations for AI 


Abstract: General concepts of trust in AI and autonomous systems can be derived from engineering concepts of dependable and secure computing systems. In computing systems, trust is formally defined as the dependence of one system on another, together with the acceptance that the other system is dependable [Avižienis et al., 2004]. This dependence can be either human/machine or machine/machine. Resilience is related to trustworthiness as the ability of a system to withstand instability and unexpected conditions and to return gracefully to predictable, though possibly degraded, performance. This is a system-of-systems concern: trust must be considered both as a characteristic of an individual system or subsystem and as a relationship between the system and other systems, including humans. In systems engineering, trust can be categorized into a set of dependability and security attributes covering a system's ability to avoid service failures: the interrelated foundational attributes of availability, reliability, safety, integrity, confidentiality, and maintainability. These attributes work together to ensure the system's successful application, and demonstration of those attributes over time engenders trust. Thus, trust is a systems engineering concept. In this talk we will explore modeling trust as a set of system attributes.
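One way to picture "trust as a set of system attributes" is as a scored record of the Avižienis et al. attributes listed above. The sketch below is an illustrative assumption; the scoring scheme and the weakest-link rule are not from the talk:

```python
# Illustrative model of trust as a set of dependability/security attributes.
# Scores and the weakest-link rule are assumptions, not from the talk.
from dataclasses import dataclass, fields

@dataclass
class DependabilityProfile:
    availability: float      # readiness for correct service
    reliability: float       # continuity of correct service
    safety: float            # absence of catastrophic consequences
    integrity: float         # absence of improper system alterations
    confidentiality: float   # absence of unauthorized disclosure
    maintainability: float   # ability to undergo repairs and changes

    def weakest_attribute(self) -> str:
        """Assume overall trust is bounded by the weakest attribute."""
        return min(fields(self), key=lambda f: getattr(self, f.name)).name

profile = DependabilityProfile(0.999, 0.98, 0.995, 0.97, 0.99, 0.85)
print(profile.weakest_attribute())  # maintainability
```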

Presenter: Tom McDermott is the Deputy Director and Chief Technology Officer of the Systems Engineering Research Center (SERC) at Stevens Institute of Technology in Hoboken, NJ. With the SERC he develops new research strategies and is leading research on digital transformation, education, security, and artificial intelligence applications. Mr. McDermott also teaches system architecture concepts, systems thinking and decision making, and engineering leadership. He is a lecturer for Stevens as well as Georgia Tech and Agnes Scott College, both in Atlanta, GA. He provides executive level consulting as a futurist and organizational strategy expert, applying systems approaches to enterprise planning. He currently serves on the INCOSE Board of Directors as Director of Strategic Integration. 

 

The Confiance.AI Programme

Abstract: Confiance.AI is a French national programme launched within the framework of the French National Grand Challenge “Security, dependability and certification of AI-based systems” managed by the French Innovation Council.  

 

Confiance.AI is the largest technological research programme of the #IAforHumanity plan. The programme brings together a unique community of actors across industry, start-ups, education and research to design and industrialize trustworthy AI-based critical systems, with the final goal of ensuring the competitiveness and sovereignty of French industry.

 

The main objective of the programme is to design and deliver a "trustworthy environment" for engineering industrial AI-based products and systems. The environment relies on methodological and technological building blocks that complement the existing development environments of the programme's industrial partners. It supports the design, verification, validation, qualification, and deployment of AI-based systems in an industrial context and at large scale. Industrial sectors participating in the programme include aerospace, automotive, maritime, energy, IT, defence, security, and manufacturing.

In this webinar, we will present the scope, organisation and expected outcomes of the Confiance.AI programme, with a focus on the systems approach that has been defined within the programme. 

 

Presenter: Guillermo Chalé Góngora is Product Line & Systems Engineering Director at Thales Corporate Engineering. He has over 20 years of experience in Systems Engineering and Product Line Engineering in the energy, infrastructure, automotive, railway, aerospace, and defence sectors, where he has worked on the tailoring and application of SE, MBSE, and PLE to the development of complex systems and products. Over the years, he has been particularly interested in systems thinking and critical thinking, safety-critical systems, formal methods, architecture description languages, modelling and simulation, and autonomous systems. He holds a PhD in Energy & Thermal Systems, master's degrees in Energy Conversion and Internal Combustion Engines, and an engineering degree in Mechanical-Electrical Engineering. Guillermo is the founder and former Chair of the PLE International WG and the Automotive WG of INCOSE, and a member of the INCOSE Transportation WG and the MBSE Initiative.

 

AI Explainability and AI in Sci-Fi

May 4, 2022. A tour of the subject of explainable AI, a very hot topic in AI these days. After that, in honor of Star Wars Day, we'll take a look at how AI appears in Star Wars and Star Trek, and see how AI is depicted in the science-fiction future.


A System Engineer’s Guide to Explainable AI

Abstract: Systems Engineers (SEs) are faced with incorporating Artificial Intelligence and Machine Learning (AI/ML) based solutions into modern systems to meet complex technological and societal needs. It is imperative for SEs to characterize the behavior of AI/ML-based components, which often appear as black boxes even to the component designers. This talk will introduce key concepts of Explainable AI (XAI) that create a window into the black-box nature of AI/ML-based components. We will debunk some common myths about explainability and discuss how SEs can utilize XAI for the test and evaluation of systems with AI/ML components.
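As a taste of what model-agnostic explainability looks like in practice, the sketch below uses permutation importance from scikit-learn: it treats the trained component as a black box and measures how much shuffling each input degrades performance. It is a generic illustration on synthetic data, not necessarily a technique covered in the talk:

```python
# Black-box explanation via permutation importance (scikit-learn, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the resulting drop in score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
# Features whose shuffling barely moves the score are ones the black box ignores.
```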

Presenter: Dr. Ali Raz (CSEP) is an Assistant Professor of Systems Engineering and an Assistant Director of Intelligent Systems and Integration at the George Mason University C4I and Cyber Center. Dr. Raz's research interests are in integrating autonomous systems, bringing together a system-of-systems perspective with artificial intelligence and information fusion. He is the current co-chair of INCOSE's Artificial Intelligence Working Group and a co-chair of the Complex Systems Working Group. He holds a BSc and an MSc in Electrical Engineering from Iowa State University and a doctorate in Aeronautics and Astronautics from Purdue University.

Everything I Know about Artificial Intelligence I Learned from the Movies

Abstract: Much of what most people "know" about artificial intelligence traces its source to science fiction mythology. Science fiction representations of robots, created minds, and intelligent machines span a wide range of perspectives. Speculative books may describe possible futures of AI, but movies and TV series can show what a world would be like if these scenarios became reality. Star Wars and Star Trek show nearly polar opposites in how AI turns out in a highly advanced human (and non-human) civilization.

Presenter: Dr. Barclay R. Brown, ESEP, is Associate Director for Research in AI at Collins Aerospace, a division of Raytheon Technologies. Before joining Collins, he was an Engineering Fellow at Raytheon Missiles and Defense, focusing on MBSE, and prior to that he was the Global Solution Executive for the Aerospace and Defense Industry at IBM. Dr. Brown holds a bachelor's degree in Electrical Engineering, master's degrees in Psychology and Business, and a PhD in Industrial and Systems Engineering. He has taught systems engineering and systems thinking at several universities, and is a certified Expert Systems Engineering Professional (ESEP), a certified Systems Engineering Quality Manager, and CIO of INCOSE for 2021-2023. He chairs the INCOSE AI Systems Working Group.

 

 

 

 
