Abstract: General concepts of trust in AI and autonomous systems can be derived from engineering concepts of dependable and secure computing. In computing systems, trust is formally defined as accepted dependence: one system depends on another and accepts that the other system is dependable [Avižienis et al., 2004]. This dependence can be either human/machine or machine/machine. Resilience is related to trustworthiness: it is the ability of a system to withstand instability and unexpected conditions and to return gracefully to predictable, though possibly degraded, performance. This is a system-of-systems concern, so trust must be considered both as a characteristic of an individual system or subsystem and as a set of relationships between that system and other systems, including humans. In systems engineering, trust can be characterized by a set of dependability and security attributes: the ability of a system to avoid service failures, covering the interrelated foundational attributes of availability, reliability, safety, integrity, confidentiality, and maintainability. These attributes work together to ensure the system's successful operation, and demonstration of these attributes over time engenders trust. Trust is thus a systems engineering concept. In this talk we will explore modeling trust as a set of system attributes.
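As a minimal sketch of the idea of modeling trust as a set of system attributes, the six dependability and security attributes named above can be represented as a simple data structure with an aggregate measure. The class name, the 0.0-1.0 scoring scale, and the weakest-link aggregation rule are illustrative assumptions, not part of the abstract or the cited taxonomy:

```python
from dataclasses import dataclass, fields


@dataclass
class DependabilityProfile:
    """Illustrative 0.0-1.0 scores for the six attributes named in the abstract."""
    availability: float
    reliability: float
    safety: float
    integrity: float
    confidentiality: float
    maintainability: float

    def trust_score(self) -> float:
        # Assumed weakest-link model: overall trust is bounded by the
        # weakest attribute; other aggregations (weighted means, etc.)
        # are equally plausible.
        return min(getattr(self, f.name) for f in fields(self))


profile = DependabilityProfile(
    availability=0.99, reliability=0.97, safety=0.95,
    integrity=0.98, confidentiality=0.96, maintainability=0.90,
)
print(profile.trust_score())  # the weakest attribute dominates
```

A structure like this makes the system-of-systems view concrete: each subsystem carries its own profile, and a composed system's trust can be derived from the profiles of its parts.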