Rethinking Control: The Evolving Human Role in a World of Smart Machines

In this video (an excerpt from our next Frontiers of Software Engineering interview), Genaina Rodrigues discusses the critical and evolving role of humans in autonomous and robotic systems, emphasizing the distinction between "Human in the loop" (where humans are directly involved in decision-making and can intervene) and "Human on the loop" (where humans supervise or monitor the system). She argues that humans are an inevitable part of these processes, whether as active or passive stakeholders.
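As a rough illustration of that distinction (a minimal sketch in Python; the `Action` type and the function and parameter names are hypothetical, not drawn from the interview), the difference can be framed as who gates the action: in the loop, the system cannot act without the human; on the loop, the system acts unless the human intervenes.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    description: str

def human_in_the_loop(proposed: Action,
                      human_approves: Callable[[Action], bool]) -> Optional[Action]:
    """HITL: the system may not act until a human explicitly decides."""
    return proposed if human_approves(proposed) else None

def human_on_the_loop(proposed: Action,
                      human_vetoes: Callable[[Action], bool]) -> Optional[Action]:
    """HOTL: the system acts autonomously; the human supervises
    and can veto, but is not consulted on every decision."""
    return None if human_vetoes(proposed) else proposed

# HITL blocks the action by default; HOTL permits it by default.
brake = Action("apply emergency brake")
print(human_in_the_loop(brake, human_approves=lambda a: True))   # acts only on approval
print(human_on_the_loop(brake, human_vetoes=lambda a: False))    # acts unless vetoed
```

The asymmetry of the defaults is the crux: one design fails safe toward human judgment, the other toward machine autonomy.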
She uses several examples to illustrate her points, including challenges in autonomous driving and autonomous avionics. A significant portion of her discussion focuses on the Boeing 737 MAX crashes. She explains that the crashes weren't just a system failure but involved a cascade of issues, including a crucial decision by the airline not to implement an optional feature that would have made it easier for pilots to override the MCAS (Maneuvering Characteristics Augmentation System). Erroneous sensor readings led the automated system to make incorrect, and ultimately fatal, decisions, while the pilots lacked effective means to regain control.
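To make the point about overrides concrete, here is a deliberately simplified sketch (not Boeing's actual MCAS logic; the threshold, command values, and function name are illustrative assumptions) of a controller in which the human override is a built-in, unconditional path rather than an optional add-on:

```python
from typing import Optional

def select_pitch_command(sensor_angle_of_attack: float,
                         pilot_override: Optional[float]) -> float:
    """Return a pitch command. The pilot override, when present,
    always wins; it is part of the core design, not an option."""
    if pilot_override is not None:
        return pilot_override          # human input takes precedence unconditionally
    if sensor_angle_of_attack > 15.0:  # illustrative stall-risk threshold
        return -5.0                    # automated nose-down correction
    return 0.0                         # no correction needed

# A faulty reading (say 40.0 degrees when the real angle is normal)
# would otherwise trigger repeated nose-down commands; the override
# path guarantees the human can regain control.
assert select_pitch_command(40.0, pilot_override=2.0) == 2.0
```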
Genaina stresses that for safety-critical systems, "Human in the loop" is essential: humans must have the capability to take control, and that capability should never be an optional add-on. She also highlights the importance of "explainability" (the system's ability to explain its decisions) and "traceability" (the ability to audit and understand why failures occurred). This matters because systems operate on programmed interpretations of what is relevant, which may fail to account for unforeseen environmental variables, stakeholder goals, or the nuances of human reasoning.
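A minimal sketch of what traceability could look like in practice (the logging schema and function name are assumptions for illustration, not a standard or the interview's proposal): each automated decision is recorded together with its inputs and a stated rationale, so auditors can later reconstruct why the system acted as it did.

```python
import json
import time

decision_log: list = []

def log_decision(inputs: dict, decision: str, rationale: str) -> None:
    """Append an auditable record: what the system saw,
    what it decided, and why it decided that."""
    decision_log.append({
        "timestamp": time.time(),
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,
    })

log_decision(
    inputs={"angle_of_attack": 40.0, "source": "left AoA sensor"},
    decision="nose-down trim",
    rationale="AoA exceeded 15.0 degree threshold",
)
print(json.dumps(decision_log, indent=2))  # the audit trail for traceability
```

Recording the rationale alongside the raw inputs is what turns a plain log into an explanation: after a failure, investigators can see not just what the system did but which programmed interpretation of relevance drove it.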
She concludes that we are still in the early stages of understanding these complex human-system interactions. She believes that what are currently considered "extra" normative requirements for human involvement, explainability, and auditing will likely become standard in system development, especially for safety-critical applications. She encourages software engineers and the broader community to actively contribute their knowledge to design safer and more reliable autonomous systems by deeply considering these human factors.