Compared to a collection of the same individuals working independently, the members of an interdependent team are significantly more productive. Yet interdependence is insufficiently studied to provide an efficient operational architecture for human-machine or machine-machine teams. Interdependence in a team creates bi-stable effects among humans, characterized by tradeoffs that affect the design, performance, networks, and other aspects of operating autonomous human-machine teams.
To solve these next-generation problems, the AI and systems engineering (SE) of human-machine teams require multidisciplinary approaches. Specifically, a science of interdependence for autonomous human-machine teams requires contributions not only from AI, including machine learning (ML), and from SE, including the verification and validation of systems that use AI or ML, but also from other disciplines, to establish an approach that allows a human and a machine to operate as teammates. This approach includes simulation and training environments where humans and machines can co-adapt, with stable operational outcomes assured by evidence-based frameworks. As a general rule, users interfacing with machine-learning algorithms require the information fusion (IF) of data to achieve limited autonomous operations; as autonomy increases, a wider spectrum of capabilities, such as transfer learning, becomes necessary.
Fundamentally, for human-machine teams to become autonomous, the science of how humans and machines operate interdependently in a team requires contributions from, among others: the social sciences, to study how context is constructed interdependently among teammates, how trust is affected when humans and machines depend on each other, how human-machine teams are to train with each other, and how such teams can share a bidirectional language of explanation; the law, to determine legal responsibility for misbehavior and accidents; ethics, to know the limits of morality; and sociology, to guide appropriate team behaviors across societies and cultures, the workplace, healthcare, and combat. We also need to know the psychological impact on humans of teaming with machines that can think faster than they can, whether in relatively mundane situations such as self-driving cars; in more complex but still traditional decision situations with humans in the loop, as in combat (for example, the Navy's Ghost Fleet, the Army's self-driving combat convoy teams, or the Marine Corps' ordnance disposal teams); or in the more daunting scenarios with humans on the loop as observers of decisions (for example, the Air Force's aggressive, dispensable, attritable drones flying wing for an F-35).
Topics will include: AI and machine learning; autonomy; systems engineering; human-machine teams (HMT); machine explanations of decisions; and context.
The format of the symposium will include invited talks (60 minutes) and regular speakers (30 minutes).
By January 15, 2020, contributors should submit to the organizers an abstract, outline, or paper of 2-8 pages, using APA-style references. A call for book chapters will follow after the symposium.
W. F. Lawless (Paine College, email@example.com), Ranjeev Mittu, Don Sofge, Thomas Shortell, Tom McDermott, and Brian Jalaian (USARMY RDECOM ARL)
For More Information
For more information, please see the supplementary symposium site.