How do humans interact with autonomous systems? Although mixed human-autonomy teams (“HATs”) outperform autonomy-only and human-only teams, particularly in uncertain or incompletely described tasks, we do not fully understand how humans react to and interact with autonomous teammates. Further, while autonomous teammate models can be produced a priori from first-principles physics and dynamical system modeling, human behavior is far more complex and far less understood. Thus, we seek to examine and understand how human behavior changes when humans work on a team with autonomous teammates, in the specific context of autonomous vehicles.
Fig 1: Video of the CARLA simulation platform, currently under development for use in human-in-the-loop testing.
Aspects of this work have been presented in the following papers: [1], [2], [3], [4], [5]
@inproceedings{mai_ACC_2025,address={Denver, CO, USA},title={Modeling human-autonomy team steering behavior in shared-autonomy driving scenarios},booktitle={2025 {American} {Control} {Conference} ({ACC}) (Under review)},publisher={IEEE},author={Mai, Rene and Daveron, Kara and Julius, Agung and Mishra, Sandipan},year={2025}}
Analysis of human steering behavior differences in human-in-control and autonomy-in-control driving
Rene Mai, Agung Julius, and Sandipan Mishra
In 2024 IFAC Workshop on Cyber-Physical Human Systems (CPHS) (Accepted), Oct 2024
We derive and validate a generalization of the two-point visual control model, an accepted cognitive science model for human steering behavior. The generalized model is needed as current steering models are either insufficiently accurate or too complex for online state estimation. We demonstrate that the generalized model replicates specific human steering behavior with high precision (85% reduction in modeling error) and integrate this model into a human-as-advisor framework where human steering inputs are used for state estimation. As a benchmark study, we use this framework to decipher ambiguous lane markings represented by biased lateral position measurements. We demonstrate that, with the generalized model, the state estimator can accurately estimate the true vehicle state, providing lateral state estimates with under 0.15 m error across participants. However, without the generalized model, the estimator cannot accurately estimate the vehicle’s lateral state.
@article{mai_generalized,bibtex_show={true},author={Mai, Rene E. and Sears, Katherine and Roessling, Grace and Julius, Agung and Mishra, Sandipan},title={{Generalized Two-Point Visual Control Model of Human Steering for Accurate State Estimation}},journal={ASME Letters in Dynamic Systems and Control},volume={5},number={1},pages={011004},year={2024},month=oct,issn={2689-6117},doi={10.1115/1.4066630},url={https://doi.org/10.1115/1.4066630},eprint={https://asmedigitalcollection.asme.org/lettersdynsys/article-pdf/5/1/011004/7387153/aldsc\_5\_1\_011004.pdf}}
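The classic two-point visual control model that this paper generalizes can be sketched as a steering law driven by the visual angles to a near point (compensatory lane keeping) and a far point (anticipation). A minimal illustration follows; the gains and signal values are invented for the example, and the paper's generalized terms are not reproduced here:

```python
def two_point_steering_rate(theta_near, theta_far,
                            dtheta_near, dtheta_far,
                            k_f=15.0, k_n=10.0, k_i=1.0):
    """Steering-wheel angular rate from the two-point visual control
    model (Salvucci & Gray form): the far-point angular rate provides
    anticipatory feedback, the near-point angle and rate provide
    compensatory lane-keeping feedback. Gains are illustrative only."""
    return k_f * dtheta_far + k_n * dtheta_near + k_i * theta_near

# Example: the car has drifted slightly, so the near point sits at a
# small positive visual angle while both points are momentarily still;
# the model commands a small corrective steering rate.
rate = two_point_steering_rate(theta_near=0.02, theta_far=0.0,
                               dtheta_near=0.0, dtheta_far=0.0)
```

Because the law is linear in the visual angles and their rates, it can be written in output-feedback form, which is what makes it tractable for the online state estimation discussed above.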
This paper presents a human-as-advisor architecture for shared human-machine autonomy in dynamic systems. In the human-as-advisor architecture, the human provides suggested control actions to the autonomous system; the system uses a model of the human controller to ascertain the system’s state as perceived by the human. The system combines this information with additional sensor measurements, yielding an improved state estimate. We apply this architecture to the problem of lane-centering an autonomous vehicle in the presence of conflicting lane markings that render the true lane center uncertain. We model conflicting lane markings with a multi-component Gaussian mixture model. The human-suggested course of action is interpreted as an additional sensor measurement, which a Kalman filter is designed to combine with a speedometer and camera for improving the state estimate. With human input from our human-as-advisor architecture, the vehicle centers itself in the lane; without human input, the vehicle does not center itself. We also demonstrate that the human-as-advisor architecture is robust to additive output matrix uncertainty and non-linear perturbations in the human model used to interpret the human-suggested control actions.
@inproceedings{mai_human-as-advisor_2023,address={San Diego, CA, USA},title={Human-as-advisor in the loop for autonomous lane-keeping},isbn={9798350328066},url={https://ieeexplore.ieee.org/document/10156374/},doi={10.23919/ACC55779.2023.10156374},language={en},urldate={2023-09-05},booktitle={2023 {American} {Control} {Conference} ({ACC})},publisher={IEEE},author={Mai, Rene and Mishra, Sandipan and Julius, Agung},month=may,year={2023},pages={3895--3900}}
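The fusion step described above can be illustrated with a minimal linear Kalman measurement update, where the human's suggested action, inverted through a human steering model, acts as one more sensor channel alongside a biased camera. All matrices, noise levels, and the bias value are invented for illustration; this is a sketch of the idea, not the paper's filter design:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for state x, covariance P,
    measurement z with output matrix H and measurement noise R."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P

# State: [lateral offset (m), heading error (rad)] -- illustrative.
x = np.zeros(2)
P = np.eye(2)

# Two channels both measuring lateral offset: a camera biased by the
# ambiguous lane markings, and the human-derived pseudo-measurement,
# which is unbiased but noisier.
H = np.array([[1.0, 0.0],    # camera channel (biased)
              [1.0, 0.0]])   # human-inferred channel
R = np.diag([0.05, 0.2])     # illustrative noise variances

z = np.array([0.5 + 0.3,     # camera reading with a 0.3 m bias
              0.5])          # human-inferred lateral offset
x, P = kalman_update(x, P, z, H, R)
```

The corrected estimate lands between the two readings, weighted by the assumed noise levels; in the paper, the bias itself is resolved because the human channel reflects the true lane center.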