Trust and STS-AD Theory

The STS-AD is based on the trust model proposed by Hoff & Bashir [2], who conducted an extensive literature review to identify trust-influencing factors. They distinguish three major components of trust: dispositional, situational, and learned trust.

Dispositional trust "represents an individual's overall tendency to trust automation, independent of context or a specific system", and is influenced by an operator's culture, age, gender, personality traits, etc. 

Situational trust represents factors that describe the variability of the situation, including both external variability (type and complexity of a system, task difficulty, workload, perceived risks and benefits, the organizational setting, and the framing of a task) and the internal variability of the operator (self-confidence, subject matter expertise, mood, or attentional capacity).

Learned trust incorporates factors such as system expectations and brand/system reputation, but also trust factors emerging from interaction (reliability, validity, predictability, etc.) and from design features (appearance, ease of use, etc.).


Existing scales either measure trust on a more general level (such as the "Automation Trust Scale" by Jian, Bisantz, and Drury [3]) or contain a relatively large number of scale items (such as [4, 5]), which makes it hard to administer them multiple times during an experiment.


The STS-AD adds to the construct by providing a measurement for situational trust that addresses its most relevant factors while being short enough to be used multiple times during a condition/experiment. Through statistical analysis, we have shown that situational trust is indeed a separate construct that differs from, for example, the more general trust/distrust dimensions of the "Automation Trust Scale". We suggest combining the STS-AD with the "Automation Trust Scale" such that the STS-AD is administered at least once (after an experimental condition), or better multiple times (during conditions, for maximum benefit), while general trust is measured before and after the experiment. The STS-AD [1] is a first approach to empirically validate Hoff & Bashir's model [2], and we aim to develop further concrete measurements for dispositional and learned trust soon.

References

[1] Situational Trust Scale for Automated Driving (STS-AD): Development and Initial Validation. Brittany E. Holthausen, Philipp Wintersberger, Bruce N. Walker, and Andreas Riener. 2020. To be presented at the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Washington, DC, USA.

[2] Trust in automation: Integrating empirical evidence on factors that influence trust. Kevin Anthony Hoff and Masooda Bashir. 2015. Human Factors 57, 3 (2015), 407–434.

[3] Foundations for an empirically determined scale of trust in automated systems. Jiun-Yin Jian, Ann M. Bisantz, and Colin G. Drury. 2000. International Journal of Cognitive Ergonomics 4, 1 (2000), 53–71.

[4] Theoretical considerations and development of a questionnaire to measure trust in automation. Moritz Körber. 2018. In Congress of the International Ergonomics Association. Springer, 13–30.

[5] Towards the development of an inter-cultural scale to measure trust in automation. Shih-Yi Chien, Zhaleh Semnani-Azad, Michael Lewis, and Katia Sycara. 2014. In International Conference on Cross-Cultural Design. Springer, 35–46.

First Workshop on Trust in the Age of Automated Driving. Brittany E. Noah, Philipp Wintersberger, Alexander G. Mirnig, Shailie Thakkar, Fei Yan, Thomas M. Gable, Johannes Kraus, and Roderick McCall. 2017. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (AutomotiveUI '17). ACM, New York, NY, USA, 15–21. DOI: http://dx.doi.org/10.1145/3131726.3131733

Second Workshop on Trust in the Age of Automated Driving. Wintersberger, P., Noah, B. E., Kraus, J., McCall, R., Mirnig, A. G., Kunze, A., ... & Walker, B. N. 2018. In Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, 56–64.

Third Workshop on Trust in Automation: How Does Trust Influence Interaction. Holthausen, B. E., Wintersberger, P., Becerra, Z., Mirnig, A. G., Kunze, A., & Walker, B. N. 2019. In Adjunct Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 13–18.

Trust Calibration Through Reliability Displays in Automated Vehicles. Brittany E. Noah and Bruce N. Walker. 2017. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). ACM, New York, NY, USA, 361–362. DOI: http://dx.doi.org/10.1145/3029798.3034802

Fostering User Acceptance and Trust in Fully Automated Vehicles: Evaluating the Potential of Augmented Reality. Wintersberger, P., Frison, A. K., Riener, A., & Sawitzky, T. V. 2019. PRESENCE: Virtual and Augmented Reality 27, 1, 46–62.