CIRCE Scenario
Introduction
To demonstrate the practical application and utility of our framework, we present an illustrative case study that also serves as a step-by-step implementation guide.
To ensure neutrality and demonstrate the generalizability of our system, we deliberately chose a scenario from a published research study unrelated to our own group or work. We conducted a keyword search on Google Scholar to identify research efforts focused on explanation systems in smart home environments. We reviewed their implementations and evaluations to assess whether our framework could help the authors evaluate and understand their systems’ behavior more effectively, specifically by letting them simulate their system so that real end-users can interact with it.
From the pool of relevant studies, we selected the following study as our example. It is used in this documentation both to demonstrate the feasibility of our framework and to provide a practical walkthrough of how to configure and run a scenario with our system:
“CIRCE: a Scalable Methodology for Causal Explanations in Cyber-Physical Systems”, published at the IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), 2024. DOI: 10.1109/ACSOS61780.2024.00026
Abstract Summary: CIRCE proposes a scalable method to generate contextual, interpretable, and reactive causal explanations in cyber-physical systems such as smart homes. It determines the cause of a user-questioned fact at runtime using a local abduction technique and LIME-based approximation. CIRCE was validated via simulated smart home scenarios, but no user study was performed to evaluate the effectiveness or usability of the generated explanations.
The original study evaluated its methodology by simulating a smart home to generate causal explanations, focusing on system-centric metrics such as performance and the accuracy of the generated explanations against expected outcomes. However, the evaluation did not involve end-users, so human-centric aspects of the explanations, such as their effectiveness, clarity, and how well they were understood, were not assessed.
This presents an ideal use case for our framework, demonstrating its capability to bridge the gap between technical validation and essential user-centered evaluation.
Adapting CIRCE’s Example Scenarios in V-SHiNE
In the CIRCE paper, four distinct explanatory situations are described. Each involves a user asking for the cause of a system behavior (e.g., why a device is on or why a temperature is low), and CIRCE provides causally sound answers.
Situation 1 — Recreated in V-SHiNE
- User Task: Turn off the Heater
- System Behavior: The heater cannot be turned off due to environmental constraints.
- Triggering Explanations: The user initiates on-demand questions to understand the situation.
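Before walking through the interaction, it may help to see how such a situation could be encoded as a scenario definition. The sketch below is a minimal, hypothetical illustration in Python: the field names (`task`, `environment`, `constraints`, `explanation_trigger`) are our assumptions for this walkthrough, not V-SHiNE’s actual configuration schema.

```python
# Hypothetical scenario definition for Situation 1.
# All keys and values are illustrative assumptions, not V-SHiNE's real schema.
situation_1 = {
    "task": {
        "description": "Turn off the heater",
        "device": "heater",
        "target_state": "off",
    },
    "environment": {
        "indoor_temperature": 12.0,   # °C, below the 15 °C threshold
        "outside_temperature": 8.0,   # °C
        "window_open": True,
        "heater_on": True,
    },
    # The rule that blocks the user's action: the heater must stay on
    # while the indoor temperature is below 15 °C.
    "constraints": [
        {
            "device": "heater",
            "blocked_action": "turn_off",
            "condition": "indoor_temperature < 15",
        }
    ],
    # Explanations are requested on demand via the explanation button.
    "explanation_trigger": "on_demand",
}
```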
Interaction Flow:
- The user tries to turn off the heater but cannot.
- The user clicks the explanation button and asks:
  → “Why can’t I turn off the heater?”
- The system returns:
  → “The indoor temperature is lower than 15°C”
- The user then asks:
  → “Why is the temperature below 15°C?”
- The system returns:
  → “The window is open and the outside temperature is lower than 15°C”
- The user may choose to abort the task.
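The question-and-answer exchange above can be prototyped in a few lines of Python. The snippet below is a deliberately simplified stand-in: it scripts the two causal steps of Situation 1 as a lookup table, whereas CIRCE derives such answers at runtime via local abduction and LIME-based approximation, which is not reproduced here.

```python
# Pre-scripted causal chain for Situation 1 (illustration only).
# CIRCE computes these answers at runtime; here they are hard-coded.
CAUSAL_CHAIN = {
    "Why can't I turn off the heater?":
        "The indoor temperature is lower than 15°C",
    "Why is the temperature below 15°C?":
        "The window is open and the outside temperature is lower than 15°C",
}

def explain(question: str) -> str:
    """Return the scripted causal explanation for a user question."""
    return CAUSAL_CHAIN.get(question, "No explanation available.")

if __name__ == "__main__":
    # Replay the two-turn dialogue from the interaction flow above.
    for question in CAUSAL_CHAIN:
        print(f"User:   {question}")
        print(f"System: {explain(question)}")
```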