
Explanation Engine

Introduction

Understanding how intelligent systems make decisions is critical to building trustworthy, usable, and transparent smart environments. In recent years, this has led to growing interest in explainable systems: systems that not only act autonomously but also provide human-understandable reasons for their behavior.

In smart home environments, users often interact with complex automation rules, adaptive device behaviors, and sensor-triggered events. Without explanations, such interactions can lead to confusion, misunderstanding, or loss of control.

To address this, explanations serve as a bridge between system logic and human mental models. They help users answer questions like:

  • Why did the lamp turn off automatically?
  • Why can’t I use the oven right now?
  • What caused the heating to activate?

Within this scope, the Explanation Engine is the component that generates contextual explanations and feedback based on participants' interactions with devices and automation rules within the smart home environment. These explanations are designed to enhance system transparency, improve usability, and support user understanding during task execution.

Our framework enables seamless integration and simulation of the Explanation Engine within a virtual smart environment, making it possible to study a wide range of aspects related to explanation and explainability in intelligent systems.

By using this setup, researchers can conduct both quantitative and qualitative analyses to assess:

  • The effectiveness of explanations in supporting user performance and comprehension,
  • The differences between various explanation types, such as:
    • Causal explanations (why something happened),
    • Counterfactual explanations (what would have happened if…),
    • Contrastive explanations (why this instead of that),
  • The impact of different explanation provision strategies, such as:
    • Proactive/automated explanations (delivered by the system automatically),
    • On-demand/user-triggered explanations (delivered upon request).

This flexibility supports fine-grained experimental design and controlled studies on Explainable Smart Environments and Computer-Human Interaction.

Integration Options

You can integrate the Explanation Engine in one of two ways:

  1. Integrated Engine (Default)
    Use the built-in explanation logic that is tightly coupled with the simulation environment. This engine automatically monitors interactions and provides pre-defined explanations based on the configured rules and device states.

  2. Custom API Endpoint
    Connect the system to an external or custom explanation API. This approach is ideal if you want to use your own backend logic, machine learning model, or dynamic explanation strategy.

    • The system sends relevant data (e.g., device states, rule matches, user actions) to your API.
    • The API returns an explanation string or object to be rendered in the UI.

JSON Schema

The Explanation Engine is configured through a separate file, explanation.json, which provides modular control over how and when explanations are issued and which explanation system is active.


Explanation Engine Schema

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Explanation Engine Configuration",
  "description": "Configuration schema for the explanation engine system",
  "type": "object",
  "properties": {
    "explanation_trigger": {
      "type": "string",
      "enum": ["pull", "push", "interactive"],
      "description": "Defines when explanations are triggered. 'pull' caches explanations until user requests via button click. 'push' shows explanations immediately when conditions are met. 'interactive' enables immediate explanations plus user message input for external explanation engine."
    },
    "explanation_engine": {
      "type": "string",
      "enum": ["integrated", "external"],
      "description": "Type of explanation engine to use. 'integrated' for simple rule-based explanations, 'external' for complex custom logic."
    },
    "external_explanation_engine": {
      "type": "object",
      "description": "Configuration for external explanation engine. Contains engine type and API endpoint.",
      "properties": {
        "external_engine_type": {
          "type": "string",
          "enum": ["rest", "ws"],
          "description": "Communication protocol for external explanation engine. 'rest' for REST API, 'ws' for WebSocket."
        },
        "external_explanation_engine_api": {
          "type": "string",
          "format": "uri",
          "description": "URL of the external explanation engine API endpoint (without trailing slash)."
        }
      },
      "required": ["external_engine_type", "external_explanation_engine_api"],
      "additionalProperties": false
    },
    "integrated_explanation_engine": {
      "type": "object",
      "description": "Collection of explanation rules for the integrated engine. Key is the explanation ID, value is the explanation text (can include HTML).",
      "patternProperties": {
        "^[a-zA-Z0-9_]+$": {
          "type": "string",
          "description": "Explanation text that can include HTML tags for formatting"
        }
      },
      "additionalProperties": false
    },
    "explanation_rating": {
      "type": "string",
      "enum": ["like"],
      "description": "Enable rating system for explanations. Currently only 'like' (thumbs up/down) is supported."
    }
  },
  "required": ["explanation_trigger", "explanation_engine"],
  "allOf": [
    {
      "if": {
        "properties": { "explanation_engine": { "const": "external" } }
      },
      "then": { "required": ["external_explanation_engine"] }
    },
    {
      "if": {
        "properties": { "explanation_engine": { "const": "integrated" } }
      },
      "then": { "required": ["integrated_explanation_engine"] }
    }
  ],
  "additionalProperties": false
}

Property | Type | Description
explanation_trigger | string | Determines when explanations are shown. Options: "pull", "push", or "interactive".
explanation_engine | string | Selects the explanation system. Options: "integrated" or "external".
explanation_rating | string | Specifies how users rate explanations. Options: "like".
integrated_explanation_engine | object | Required if explanation_engine is "integrated"; maps explanation IDs to static explanation strings.
external_explanation_engine | object | Required if explanation_engine is "external"; holds the communication protocol and API endpoint.

Complete Configuration Example

{
  "explanation_trigger": "push",
  "explanation_engine": "integrated",
  "explanation_rating": "like",
  "integrated_explanation_engine": {
    "deep_fryer_01": "The deep fryer automatically turns off when the cooker hood is not running. This is a safety feature to prevent smoke buildup.",
    "mixer_01": "The mixer is automatically disabled after 22:00 to reduce noise for neighbors.",
    "lamp_01": "The lamp automatically turns on when you open a book to provide adequate reading light."
  }
}
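
Before running a study, it can help to validate explanation.json against the schema above so configuration mistakes surface early. Below is a minimal sketch using Python's jsonschema package; the schema file name is an assumption for illustration.

import json
from jsonschema import validate, ValidationError

# File names are assumptions: adjust to where your schema and config live.
with open("explanation.schema.json") as f:
    schema = json.load(f)
with open("explanation.json") as f:
    config = json.load(f)

try:
    validate(instance=config, schema=schema)
    print("explanation.json is valid")
except ValidationError as err:
    print(f"Invalid configuration: {err.message}")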

Property: Explanation Trigger

Controls when and how explanations are delivered to participants:

  • pull: Explanations are cached when rule conditions trigger, but only shown when the user clicks the "Explain Me!" button. This lets participants discover contradictions on their own before seeking explanations.
  • push: Explanations are shown immediately when rule conditions are met or when the external explanation engine emits an explanation.
  • interactive: Explanations are shown immediately like push mode, plus enables a chat input for users to send custom messages to the external explanation engine for interactive explanations.
Choosing Trigger Type
  • Use pull when you want participants to discover issues independently before seeking help.
  • Use push for educational scenarios where immediate feedback is beneficial.
  • Use interactive when you want immediate explanations plus the ability for users to ask follow-up questions via chat.

Property: Explanation Engine

Specifies which explanation system is used to generate and deliver explanations:

  • integrated: Uses the built-in explanation engine defined directly within the simulation. This mode relies on predefined mappings between explanation IDs and explanation strings. It’s simple, fast, and fully contained within the simulation environment, ideal for controlled experiments where consistency and low complexity are desired.

  • external: Sends explanation queries to an external API endpoint. This allows for integration of custom explanation logic, dynamic generation (e.g., with LLMs), or connection to logging systems. The API should return a structured explanation object.

When using external, you must also specify the communication protocol within the external_explanation_engine object:

{
  "explanation_engine": "external",
  "external_explanation_engine": {
    "external_engine_type": "rest",
    "external_explanation_engine_api": "https://your-api.com/explanation"
  }
}
Choosing an Engine Type

Use integrated for straightforward, repeatable studies or offline deployments. Use external when your study requires flexible, adaptive, or user-personalized explanations powered by external models or logic.

Property: Explanation Rating

Controls whether participants can rate the usefulness of the explanations they receive.

{
  "explanation_rating": "like"
}

Supported Values:

  • like: Enables a simple thumbs up/down feedback mechanism after each explanation is shown. This helps you:
    • Understand how users perceive the relevance and helpfulness of the explanations.
    • Identify which explanations are working and which need refinement.
Note

Additional rating modes (e.g., 5-star scale, open comments) may be supported in future versions. For now, "like" is the only valid value; omit the property to disable rating.

Integrated Explanation Engine

The integrated explanation engine provides a straightforward approach for rule-based explanations without requiring external infrastructure.

Creating Explanations

Define explanations as key-value pairs where:

  • Key: Explanation ID referenced in rule actions
  • Value: Explanation text (supports HTML formatting)
{
  "integrated_explanation_engine": {
    "coffee_machine_01": "The coffee machine turns off automatically after making 5 cups to prevent overheating.",
    "security_system_01": "The security system <strong>requires all doors to be locked</strong> before activation.",
    "thermostat_01": "Energy saving mode automatically reduces temperature during <em>unoccupied hours</em>."
  }
}

Integration with Rules

Explanation generation can be explicitly bound to rule execution. That is, generating an explanation becomes a deliberate action in the system’s rule set, just like turning off a device or adjusting a temperature.

This design allows researchers to precisely specify:

  • When and under which conditions an explanation is generated
  • Which explanation text is associated with the event

To trigger an explanation when a rule is executed, include an Explanation action within the rule’s action block. This action references an explanation ID defined in explanation.json.

This approach allows fine-grained control over when explanations are generated based on system logic and environmental context.

This mechanism controls when the explanation is generated and queued, not when it is shown to the participant.

Showing the explanation is governed separately by the explanation_trigger setting (e.g., pull, push, or interactive).

{
  "name": "Coffee Machine Auto-Off",
  "precondition": [
    {
      "type": "Device",
      "device": "coffee_machine",
      "condition": {
        "name": "Cups Made",
        "operator": ">=",
        "value": 5
      }
    }
  ],
  "action": [
    {
      "type": "Device_Interaction",
      "device": "coffee_machine",
      "interaction": {
        "name": "Power",
        "value": false
      }
    },
    {
      "type": "Explanation",
      "explanation": "coffee_machine_01"
    }
  ]
}

This rule disables the coffee machine after 5 cups and generates the explanation registered under the "coffee_machine_01" ID (see the example above) to explain the action. For a full description of rule structure, preconditions, and supported action types, see Rules.mdx.

External Explanation Engine

An external explanation engine allows researchers to implement customized, intelligent, or adaptive explanation strategies that go beyond static rule-based logic. This can include:

  • Domain-specific reasoning algorithms
  • Machine learning or LLM-based explanation generation
  • Integration with external systems, datasets, or user profiles

In this setup, the explanation generation logic is external to our framework. The internal mechanics of how explanations are created (e.g., inference models, prompt engineering, heuristics) are not managed or constrained by the framework.

Instead, our framework functions as a middleware that:

  1. Collects runtime context during the simulation or task execution, including:

    • User identity or role (if available)
    • User interactions (e.g., device toggles, movement, button clicks)
    • Game state and environmental context (e.g., current time, temperature, weather)
  2. Sends this context as a structured request to a configured external API endpoint

  3. Receives the generated explanation from the external service

  4. Displays the explanation in the GUI according to the explanation_trigger setting

The external_explanation_engine object is required when the explanation_engine is set to "external". It contains the configuration for the external explanation engine, including the communication protocol and API endpoint.

Supported Types

  • rest

    Communicates with the external explanation engine via HTTP REST API. Typically uses endpoints such as:

    • POST /logger — to send context logs
    • POST /explanation — to request an explanation based on current context
{
  "external_explanation_engine": {
    "external_engine_type": "rest",
    "external_explanation_engine_api": "https://example.com/engine"
  }
}
  • ws

    Connects via WebSocket for real-time, bidirectional communication. Suitable for use cases requiring ongoing dialogue or rapid system-user interaction.

{
  "external_explanation_engine": {
    "external_engine_type": "ws",
    "external_explanation_engine_api": "ws://example.com:8080"
  }
}

REST API Implementation

Setup Configuration Example

{
  "explanation_trigger": "push",
  "explanation_engine": "external",
  "external_explanation_engine": {
    "external_engine_type": "rest",
    "external_explanation_engine_api": "https://your-domain.com/engine"
  },
  "explanation_rating": "like"
}

💡 You can change the explanation_trigger between "pull", "push", or "interactive" based on your study needs.

Required API Endpoints

When using a REST-based external explanation engine ("external_engine_type": "rest"), your API must implement the following two endpoints:

Endpoint | Method | Purpose | Required?
/logger | POST | Send context and logging data | Yes
/explanation | POST | Request explanation (with or without message) | Yes

POST /logger

This endpoint is called whenever a participant interacts with the environment or when relevant system events occur. It provides the external engine with rich contextual information about the participant's current state, environment, and activity history.

Environment Data Sources

The environment array contains data from two sources:

  • Task Environment Variables: Context defined in the current task's environment field (e.g., Weather, Temperature)
  • User Custom Data: Session variables passed via URL context (e.g., user group, study condition, user type)

For more details on session context and user variables, see Session Context & User Variables.

Request Payload Example:

{
  "user_id": "64bdb062-cb25-487f-8373-c56ac18fba5a",
  "current_task": "make_coffee",
  "ingame_time": "08:32",
  "environment": [
    { "name": "Weather", "value": "Sunny" },
    { "name": "Temperature", "value": 20 },
    { "name": "group", "value": "control" },
    { "name": "user_type", "value": "novice" }
  ],
  "devices": [
    {
      "device": "coffee_machine",
      "interactions": [
        { "name": "Power", "value": true },
        { "name": "Cups Made", "value": 3 }
      ]
    }
  ],
  "logs": [
    {
      "type": "DEVICE_INTERACTION",
      "device_id": "coffee_machine",
      "interaction": { "name": "Power", "value": true },
      "timestamp": 1739280520
    }
  ]
}

This endpoint is passive and does not return an explanation — it exists to keep the external engine context-aware and updated.

POST /explanation

This endpoint is called when an explanation is requested, either on pull (e.g., user clicks "Explain Me!") or on push/interactive (based on rule triggers and system configuration).

Standard Request:

{
  "user_id": "64bdb062-cb25-487f-8373-c56ac18fba5a"
}

Request with User Message (if explanation_trigger is "interactive"):

{
  "user_id": "64bdb062-cb25-487f-8373-c56ac18fba5a",
  "user_message": "Why did the coffee machine suddenly turn off?"
}
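
Before connecting the framework, you may want to smoke-test your endpoint by posting the documented payloads yourself. The sketch below uses Python's requests package; the base URL and user ID are placeholders, not values issued by the framework.

import requests

BASE = "https://your-domain.com/engine"  # placeholder endpoint
USER_ID = "64bdb062-cb25-487f-8373-c56ac18fba5a"  # placeholder participant ID

# Standard request (pull/push triggers)
resp = requests.post(f"{BASE}/explanation", json={"user_id": USER_ID})
print(resp.json())

# Interactive request carries the participant's free-text question
resp = requests.post(
    f"{BASE}/explanation",
    json={"user_id": USER_ID, "user_message": "Why did the coffee machine suddenly turn off?"},
)
print(resp.json())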

API Response Format

The API must respond with a structured JSON object:

Show Explanation:

{
  "success": true,
  "show_explanation": true,
  "explanation": "The coffee machine automatically turns off after making 5 cups to prevent overheating and ensure optimal coffee quality."
}

No Explanation:

{
  "success": true,
  "show_explanation": false
}
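
Putting both endpoints together, a complete REST engine can be quite small. The following is a minimal sketch using Flask and an in-memory context store; both are illustrative choices rather than framework requirements, and the stub decision logic stands in for your own reasoning, model calls, or logging.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Latest context per participant, so /explanation can reason over it.
context_store = {}

@app.post("/logger")
def logger():
    payload = request.get_json()
    context_store[payload["user_id"]] = payload
    # Passive endpoint: acknowledge receipt, never return an explanation.
    return jsonify({"success": True})

@app.post("/explanation")
def explanation():
    payload = request.get_json()
    context = context_store.get(payload["user_id"], {})
    user_message = payload.get("user_message")  # only present in "interactive" mode

    # Stub logic: replace with your own reasoning or model call.
    if user_message or context.get("current_task") == "make_coffee":
        return jsonify({
            "success": True,
            "show_explanation": True,
            "explanation": "The coffee machine automatically turns off after making 5 cups."
        })
    return jsonify({"success": True, "show_explanation": False})

if __name__ == "__main__":
    app.run(port=8000)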

WebSocket Implementation

If you set "external_engine_type": "ws", the framework will open a WebSocket connection to the configured server and communicate using event-based messages. This allows for real-time explanation exchange, useful for interactive or dialog-based explainable systems.

Setup Configuration Example

{
  "explanation_trigger": "pull",
  "explanation_engine": "external",
  "external_explanation_engine": {
    "external_engine_type": "ws",
    "external_explanation_engine_api": "ws://your-domain.com:8080"
  }
}

WebSocket Events

All communication is in JSON format, sent over the open WebSocket channel.

WebSocket Event Summary

Event Name | Direction | Trigger | Purpose
user_log | Outgoing | On participant action or system event | Sends runtime context (user, device, environment) to the engine
explanation_request | Outgoing | When user clicks "Explain Me!" or a rule triggers an explanation | Requests an explanation from the external engine
explanation_receival | Incoming | On response from external engine | Receives and displays explanation text in the simulation GUI

Outgoing: user_log

Sent whenever participant actions generate logs, such as device interactions.

{
  "user_id": "64bdb062-cb25-487f-8373-c56ac18fba5a",
  "current_task": "make_coffee",
  "ingame_time": "08:32",
  "environment": [
    { "name": "Weather", "value": "Sunny" },
    { "name": "group", "value": "control" }
  ],
  "devices": [
    {
      "device": "coffee_machine",
      "interactions": [
        { "name": "Power", "value": true }
      ]
    }
  ],
  "logs": {
    "type": "DEVICE_INTERACTION",
    "device_id": "coffee_machine",
    "interaction": { "name": "Power", "value": true },
    "timestamp": 1739280520
  }
}

Outgoing: explanation_request

Sent when an explanation is requested by the participant, either due to a trigger or user action.

{
  "user_id": "64bdb062-cb25-487f-8373-c56ac18fba5a",
  "timestamp": 1739792380,
  "user_message": "Why can't I turn on the deep fryer?"
}

Incoming: explanation_receival

The explanation engine responds with an explanation_receival message. This is then shown in the user interface.

{
  "user_id": "64bdb062-cb25-487f-8373-c56ac18fba5a",
  "explanation": "The deep fryer requires the cooker hood to be active for safety ventilation."
}
  • explanation: The explanation text to display.
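
A minimal server for this protocol might look like the sketch below, built on Python's websockets package (version 11 or later). The envelope used to attach event names to payloads ({"event": ..., "data": ...}) is an assumption for illustration; adapt it to however your deployment actually frames these events.

import asyncio
import json
import websockets

async def handler(ws):
    async for raw in ws:
        msg = json.loads(raw)
        # Assumed envelope: {"event": "...", "data": {...}}
        event, data = msg.get("event"), msg.get("data", {})
        if event == "user_log":
            pass  # update per-participant context here
        elif event == "explanation_request":
            # Stub reply: swap in your own explanation logic.
            await ws.send(json.dumps({
                "event": "explanation_receival",
                "data": {
                    "user_id": data.get("user_id"),
                    "explanation": "The deep fryer requires the cooker hood to be active for safety ventilation."
                }
            }))

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8080):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())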

Log Types Reference

The Explanation Engine receives detailed logs about participant interactions throughout the smart home simulation. These logs help external or integrated engines reason about user behavior, device states, and rule activations to generate relevant explanations.

Understanding these log types is essential when designing:

  • Context-aware explanation systems
  • User modeling algorithms
  • Task performance analysis

Overview of Log Types

Log Type | Trigger Event | Purpose
DEVICE_INTERACTION | User interacts with a device | Tracks changes made by the participant
RULE_TRIGGER | Automation rule is triggered | Captures rule activations and their resulting actions
TASK_BEGIN | New task starts | Marks the start of a task session
TASK_COMPLETED | Task successfully completed | Marks the end of a successful task
TASK_TIMEOUT | Task ends due to time expiration | Captures task failure from timeout
ROOM_SWITCH | User moves to another room | Captures spatial navigation across rooms
WALL_SWITCH | User looks at another wall in the same room | Captures intra-room navigation
ENTER_DEVICE_CLOSEUP | User enters device close-up view | Tracks focus and engagement with a device
EXIT_DEVICE_CLOSEUP | User exits close-up view | Returns to wall view from close-up
ABORT_TASK | User explicitly gives up on a task | Captures task abandonment with optional reasoning
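
As an example of how an engine might consume these logs, the hypothetical tracker below keeps a running picture of device states, rule firings, and the participant's location. The class and handler choices are illustrative; the metadata fields match the per-type payloads shown in the subsections below.

from collections import defaultdict

class SimulationContext:
    """Hypothetical per-participant context built from incoming logs."""

    def __init__(self):
        self.device_states = defaultdict(dict)  # device_id -> {interaction name: value}
        self.current_room = None
        self.triggered_rules = []

    def consume(self, log: dict):
        meta = log.get("metadata", {})
        if log["type"] == "DEVICE_INTERACTION":
            interaction = meta["interaction"]
            self.device_states[meta["device_id"]][interaction["name"]] = interaction["value"]
        elif log["type"] == "RULE_TRIGGER":
            self.triggered_rules.append(meta["rule_id"])
        elif log["type"] == "ROOM_SWITCH":
            self.current_room = meta["destination_room"]
        # TASK_*, WALL_SWITCH, *_DEVICE_CLOSEUP, and ABORT_TASK can be
        # handled the same way, depending on what your engine needs.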

DEVICE_INTERACTION

Generated when participants change device settings:

{
  "type": "DEVICE_INTERACTION",
  "metadata": {
    "device_id": "deep_fryer",
    "interaction": {
      "name": "Power",
      "value": true
    }
  },
  "timestamp": 1748860205
}

RULE_TRIGGER

Generated when smart home rules activate:

{
  "type": "RULE_TRIGGER",
  "metadata": {
    "rule_id": "deep_fryer_rule",
    "rule_action": [
      {
        "device": "deep_fryer",
        "property": {
          "name": "Power",
          "value": false
        }
      }
    ]
  },
  "timestamp": 1748860205
}

TASK_BEGIN

Generated when a new task starts:

{
  "type": "TASK_BEGIN",
  "metadata": {
    "task_id": "make_coffee"
  },
  "timestamp": 1748860190
}

TASK_COMPLETED

Generated when participants successfully complete all task goals:

{
  "type": "TASK_COMPLETED",
  "metadata": {
    "task_id": "make_coffee"
  },
  "timestamp": 1748862020
}

TASK_TIMEOUT

Generated when a task's time limit expires before completion:

{
  "type": "TASK_TIMEOUT",
  "metadata": {
    "task_id": "make_coffee"
  },
  "timestamp": 1748862033
}

ROOM_SWITCH

Generated when participants move between rooms using doors:

{
  "type": "ROOM_SWITCH",
  "metadata": {
    "destination_room": "kitchen",
    "destination_wall": "wall1"
  },
  "timestamp": 1748860201
}

WALL_SWITCH

Generated when participants navigate between walls within the same room:

{
  "type": "WALL_SWITCH",
  "metadata": {
    "room": "Shared Room",
    "wall": "0"
  },
  "timestamp": 1748860200
}

ENTER_DEVICE_CLOSEUP

Generated when participants click on a device to enter its detailed interaction view:

{
  "type": "ENTER_DEVICE_CLOSEUP",
  "metadata": {
    "device": "coffee_machine"
  },
  "timestamp": 1748860202
}

EXIT_DEVICE_CLOSEUP

Generated when participants exit the device close-up view back to the wall view:

{
  "type": "EXIT_DEVICE_CLOSEUP",
  "metadata": {
    "device": "coffee_machine"
  },
  "timestamp": 1748860203
}

ABORT_TASK

Generated when participants explicitly abandon a task:

{
  "type": "ABORT_TASK",
  "metadata": {
    "task_id": "make_coffee",
    "abort_reason": "I believe this task is impossible."
  },
  "timestamp": 1748862023
}
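
For task performance analysis, the task lifecycle logs above combine directly. The hypothetical helper below pairs each TASK_BEGIN with the matching TASK_COMPLETED, TASK_TIMEOUT, or ABORT_TASK event to compute durations in seconds (timestamps are Unix seconds).

def task_durations(logs):
    """Map each finished task_id to its duration in seconds."""
    started, durations = {}, {}
    for log in sorted(logs, key=lambda entry: entry["timestamp"]):
        task = log["metadata"].get("task_id")
        if log["type"] == "TASK_BEGIN":
            started[task] = log["timestamp"]
        elif log["type"] in ("TASK_COMPLETED", "TASK_TIMEOUT", "ABORT_TASK") and task in started:
            durations[task] = log["timestamp"] - started.pop(task)
    return durations

# Using the example payloads above:
logs = [
    {"type": "TASK_BEGIN", "metadata": {"task_id": "make_coffee"}, "timestamp": 1748860190},
    {"type": "TASK_COMPLETED", "metadata": {"task_id": "make_coffee"}, "timestamp": 1748862020},
]
print(task_durations(logs))  # {'make_coffee': 1830}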

Implementation Examples

Study: Impossible Task Detection

{
  "explanation_trigger": "pull",
  "explanation_engine": "integrated",
  "explanation_rating": "like",
  "integrated_explanation_engine": {
    "contradiction_01": "This task cannot be completed because the security system prevents the coffee machine from operating during night hours.",
    "safety_override_01": "The smoke detector has triggered an automatic shutdown of all kitchen appliances.",
    "energy_limit_01": "The home's energy management system has reached its daily limit and disabled non-essential devices."
  }
}

Study: AI-Powered Explanations

{
  "explanation_trigger": "push",
  "explanation_engine": "external",
  "external_explanation_engine": {
    "external_engine_type": "rest",
    "external_explanation_engine_api": "https://ai-explainer.your-lab.edu/api"
  },
  "explanation_rating": "like"
}