Mirror of https://github.com/CCOSTAN/Home-AssistantConfig.git (synced 2026-04-23 16:47:11 +00:00)
Enhance Home Assistant dashboard designer documentation and validation scripts #1566

- Added workflow notes for direct updates and post-edit validation steps in README.md.
- Updated SKILL.md to clarify the workflow for editing dashboards and added the validation order.
- Revised dashboard_rules.md to emphasize the direct-update workflow and validation requirements.
- Enhanced validate_lovelace_view.py to enforce new validation rules for card styles and section constraints.
- Improved error handling and validation logic for dashboard sections in validate_lovelace_view.py.
@@ -67,6 +67,13 @@ Invoke in chat:

Then describe what you want in natural language (what to change + where + any constraints). The skill will infer the structured intent internally and enforce the button-card-first / layout constraints defined in `SKILL.md`.

Workflow notes:

- This skill uses direct updates for `config/dashboards/**` (no staged rollout workflow in-skill).
- It requires post-edit validation in this order:
  - `pwsh -NoProfile -File tools/validate_dashboards.ps1`
  - `pwsh -NoProfile -File tools/ha_ui_smoke.ps1`
  - `python codex_skills/homeassistant-dashboard-designer/scripts/validate_lovelace_view.py <changed-view.yaml>`

Examples:

- "Refactor `config/dashboards/infrastructure/partials/mariadb_sections.yaml` to match the existing Infrastructure design language. Preserve existing templates and keep diffs small."
- "Add a new Infrastructure view for Docker containers using the same layout rules as the other views (4 columns desktop / 2 columns mobile)."

@@ -175,21 +175,28 @@ Fallback behavior when Stitch is unavailable:

## Workflow (Do This In Order)

-1. Read the target dashboard/view/partials/templates to understand existing patterns and avoid drift.
-2. Determine intent from the user's request and existing dashboard context: `infra` (NOC), `home`, `energy`, or `environment`. Keep one intent per view.
-3. Validate entities and services before editing:
+1. Use direct-update mode for this repo:
+   - Edit production YAML directly (no staged dashboard copies in this skill workflow).
+   - Keep rollback safety by minimizing diffs and validating before HA restart/reload.
+2. Read the target dashboard/view/partials/templates and include targets before editing:
+   - Confirm the exact source view/partial/template files.
+   - Confirm referenced `!include` files/directories exist and are in-scope.
+3. Determine intent from the user's request and existing dashboard context: `infra` (NOC), `home`, `energy`, or `environment`. Keep one intent per view.
+4. Validate entities and services before editing:
   - Prefer the Home Assistant MCP for live entity/service validation (required when available).
   - Record the MCP validation step in the work notes before writing YAML.
   - If MCP is not available, ask the user to confirm entity IDs and services (do not guess).
-4. Draft layout with constraints: a top-level `grid` and optional `vertical-stack` groups.
+5. Draft layout with constraints: a top-level `grid` and optional `vertical-stack` groups.
+   - If using Stitch, first summarize `stitch_intent` and treat it as advisory input to this step.
+   - After removals, reflow cards/sections upward to collapse gaps and reduce empty rows.
-5. Implement using Tier 1 cards first; reuse existing templates; avoid one-off styles.
-6. If fallback cards are necessary, add an inline comment explaining why Tier 1 cannot satisfy the requirement.
-7. Validate:
+6. Implement using Tier 1 cards first; reuse existing templates; avoid one-off styles.
+7. If fallback cards are necessary, add an inline comment explaining why Tier 1 cannot satisfy the requirement.
+8. Validate in this order:
   - Run `pwsh -NoProfile -File tools/validate_dashboards.ps1`.
   - Run `pwsh -NoProfile -File tools/ha_ui_smoke.ps1`.
   - Run Home Assistant config validation (`ha core check` or `homeassistant --script check_config`) when available.
-   - Optionally run `scripts/validate_lovelace_view.py` from this skill against the changed view file to catch violations early.
-8. Failure behavior:
+   - Run `scripts/validate_lovelace_view.py` from this skill against each changed view file.
+9. Failure behavior:
   - If requirements can't be met: state the violated rule and propose a compliant alternative.
   - If validation fails: stop, surface the error output, and propose corrected YAML. Do not leave invalid config applied.

@@ -149,6 +149,7 @@ Anti-drift checklist:

If working in this repo's `config/dashboards/` tree:
- Do not edit `config/.storage` (runtime state).
+- Use direct-update workflow for dashboard YAML in this repo (no staged dashboard promotion flow in this skill).
- Includes must use absolute container paths starting with `/config/`.
- Views are one file per view, and the dashboard file uses `!include_dir_list`.
- Files under `config/dashboards/**/*.yaml` must include the standard `@CCOSTAN` header block.

@@ -160,3 +161,9 @@ When available, use the Home Assistant MCP to validate:
- Service names and payload fields used by actions (for example, `button.press`, `script.*`, etc.).

If MCP is not available, do not guess entity IDs. Ask the user to confirm them.
+
+## Validation Chain (Required Before Restart/Reload)
+
+- Run `pwsh -NoProfile -File tools/validate_dashboards.ps1`.
+- Run `pwsh -NoProfile -File tools/ha_ui_smoke.ps1`.
+- Run `scripts/validate_lovelace_view.py` for each changed view file.

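For illustration, a required-order chain like this can be driven by a small fail-fast runner. This is a sketch, not a tool that exists in the repo; the command lists merely mirror the documented steps:

```python
import subprocess
import sys

# Illustrative command chain; the real invocations come from the docs above.
VALIDATION_CHAIN = [
    ["pwsh", "-NoProfile", "-File", "tools/validate_dashboards.ps1"],
    ["pwsh", "-NoProfile", "-File", "tools/ha_ui_smoke.ps1"],
]


def run_chain(commands):
    """Run each command in order; stop at the first non-zero exit code."""
    for cmd in commands:
        rc = subprocess.run(cmd).returncode
        if rc != 0:
            return rc  # surface the failing step's exit code
    return 0


if __name__ == "__main__":
    sys.exit(run_chain(VALIDATION_CHAIN))
```

The point of the ordering is that cheap structural checks fail before the slower UI smoke test runs.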
@@ -58,6 +58,8 @@ ALLOWED_CARD_TYPES = {
    "grid",
    "vertical-stack",
    "custom:button-card",
    "custom:layout-card",
    "custom:vertical-stack-in-card",
    "custom:flex-horseshoe-card",
    "custom:mini-graph-card",
    # Tier-2 fallbacks (validator does not enforce justification comments).
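The allowlist gate itself is a plain membership check. A minimal standalone sketch (trimmed set; `check_card_type` is a hypothetical helper, and the real validator reports through `Finding` objects rather than strings):

```python
# Trimmed allowlist for illustration; the real set lives in validate_lovelace_view.py.
ALLOWED_CARD_TYPES = {"grid", "vertical-stack", "custom:button-card"}


def check_card_type(card: dict, path: str) -> list[str]:
    """Return error strings for cards whose type falls outside the allowlist."""
    ctype = card.get("type", "")
    if ctype and ctype not in ALLOWED_CARD_TYPES:
        return [f"{path}: Disallowed card type: {ctype}"]
    return []
```

The message shape matches what the tests later assert on (`Disallowed card type: horizontal-stack`).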
@@ -157,6 +159,22 @@ def _walk_cards(node: Any, node_path: str, findings: list[Finding]) -> None:
                    message="Per-card styles: are not allowed; move styling into centralized templates.",
                )
            )
+        if "style" in node:
+            findings.append(
+                Finding(
+                    level="ERROR",
+                    path=node_path,
+                    message="Per-card style: is not allowed; move styling into centralized templates.",
+                )
+            )
+        if "extra_styles" in node:
+            findings.append(
+                Finding(
+                    level="ERROR",
+                    path=node_path,
+                    message="Per-card extra_styles: is not allowed; move styling into centralized templates.",
+                )
+            )
        if "card_mod" in node:
            findings.append(
                Finding(
@@ -165,6 +183,17 @@ def _walk_cards(node: Any, node_path: str, findings: list[Finding]) -> None:
                    message="Per-card card_mod: is not allowed on button-card instances; use shared snippets or templates.",
                )
            )
+        state_entries = node.get("state")
+        if _is_sequence(state_entries):
+            for sidx, state_entry in enumerate(state_entries):
+                if _is_mapping(state_entry) and "styles" in state_entry:
+                    findings.append(
+                        Finding(
+                            level="ERROR",
+                            path=f"{node_path}.state[{sidx}]",
+                            message="Per-card state styles are not allowed; move styling into centralized templates.",
+                        )
+                    )

    cards = node.get("cards")
    if _is_sequence(cards):
@@ -181,6 +210,82 @@ def _load_yaml(path: Path) -> Any:
    return yaml.load(txt, Loader=_Loader)


+def _validate_sections_wrapper(sections: list[Any], findings: list[Finding]) -> None:
+    """Validate first-section wrapper constraints from the dashboard rules."""
+    if not sections:
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections",
+                message="sections list cannot be empty for a sections view.",
+            )
+        )
+        return
+
+    first_section = sections[0]
+    if not _is_mapping(first_section):
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections[0]",
+                message="First section must be a mapping.",
+            )
+        )
+        return
+
+    column_span = first_section.get("column_span")
+    if column_span != 4:
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections[0].column_span",
+                message="First section must set column_span: 4.",
+            )
+        )
+
+    cards = first_section.get("cards")
+    if not _is_sequence(cards):
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections[0].cards",
+                message="First section must define cards as a list.",
+            )
+        )
+        return
+
+    if len(cards) != 1:
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections[0].cards",
+                message="First section must contain exactly one wrapper card.",
+            )
+        )
+        return
+
+    wrapper_card = cards[0]
+    if not _is_mapping(wrapper_card):
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections[0].cards[0]",
+                message="Wrapper card must be a mapping.",
+            )
+        )
+        return
+
+    grid_options = wrapper_card.get("grid_options")
+    if not _is_mapping(grid_options) or grid_options.get("columns") != "full":
+        findings.append(
+            Finding(
+                level="ERROR",
+                path="$.sections[0].cards[0].grid_options.columns",
+                message='Wrapper card must set grid_options.columns: "full".',
+            )
+        )

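For reference, a sections view that satisfies these wrapper checks looks like the fragment below. It is illustrative only; `status_chip` is a placeholder template name, mirroring the passing fixture used in the tests:

```yaml
type: sections
title: Sections
path: sections
sections:
  - type: grid
    column_span: 4          # first section must span all 4 columns
    cards:
      - type: custom:vertical-stack-in-card
        grid_options:
          columns: "full"   # wrapper card must be full width
        cards:
          - type: custom:button-card
            template: status_chip
```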
def main(argv: list[str]) -> int:
    ap = argparse.ArgumentParser()
    ap.add_argument("view_yaml", type=Path, help="Path to a Lovelace view YAML file")
@@ -225,6 +330,7 @@ def main(argv: list[str]) -> int:
                )
            )
    elif _is_sequence(sections):
+        _validate_sections_wrapper(sections, findings)
        for sidx, section in enumerate(sections):
            spath = f"$.sections[{sidx}]"
            if not _is_mapping(section):

@@ -0,0 +1,127 @@
import contextlib
import importlib.util
import io
import sys
import tempfile
import unittest
from pathlib import Path


MODULE_PATH = (
    Path(__file__).resolve().parents[1] / "scripts" / "validate_lovelace_view.py"
)
MODULE_NAME = "validate_lovelace_view"
SPEC = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH)
if SPEC is None or SPEC.loader is None:
    raise RuntimeError(f"Unable to load module spec from {MODULE_PATH}")
MODULE = importlib.util.module_from_spec(SPEC)
sys.modules[MODULE_NAME] = MODULE
SPEC.loader.exec_module(MODULE)


class ValidateLovelaceViewTests(unittest.TestCase):
    def _run_validator(self, yaml_text: str, *extra_args: str) -> tuple[int, str, str]:
        with tempfile.TemporaryDirectory() as tmpdir:
            view_path = Path(tmpdir) / "view.yaml"
            view_path.write_text(yaml_text, encoding="utf-8")

            stdout = io.StringIO()
            stderr = io.StringIO()
            with contextlib.redirect_stdout(stdout), contextlib.redirect_stderr(stderr):
                rc = MODULE.main([str(view_path), *extra_args])
            return rc, stdout.getvalue(), stderr.getvalue()

    def test_valid_classic_view_passes(self) -> None:
        yaml_text = """
title: Main
path: main
cards:
  - type: custom:button-card
    template:
      - status_chip
"""
        rc, out, err = self._run_validator(yaml_text)
        self.assertEqual(rc, 0, out + err)
        self.assertEqual(out.strip(), "")
        self.assertEqual(err.strip(), "")

    def test_horizontal_stack_fails(self) -> None:
        yaml_text = """
title: Main
path: main
cards:
  - type: horizontal-stack
    cards:
      - type: custom:button-card
        template: status_chip
"""
        rc, out, _ = self._run_validator(yaml_text)
        self.assertEqual(rc, 1)
        self.assertIn("Disallowed card type: horizontal-stack", out)

    def test_button_card_missing_template_fails(self) -> None:
        yaml_text = """
title: Main
path: main
cards:
  - type: custom:button-card
    entity: light.kitchen
"""
        rc, out, _ = self._run_validator(yaml_text)
        self.assertEqual(rc, 1)
        self.assertIn("custom:button-card must set template", out)

    def test_sections_wrapper_rule_fails_when_full_width_missing(self) -> None:
        yaml_text = """
title: Sections
path: sections
type: sections
sections:
  - type: grid
    column_span: 4
    cards:
      - type: custom:vertical-stack-in-card
        cards:
          - type: custom:button-card
            template: status_chip
"""
        rc, out, _ = self._run_validator(yaml_text)
        self.assertEqual(rc, 1)
        self.assertIn('Wrapper card must set grid_options.columns: "full".', out)

    def test_sections_wrapper_rule_passes_when_full_width_wrapper_present(self) -> None:
        yaml_text = """
title: Sections
path: sections
type: sections
sections:
  - type: grid
    column_span: 4
    cards:
      - type: custom:vertical-stack-in-card
        grid_options:
          columns: "full"
        cards:
          - type: custom:button-card
            template: status_chip
"""
        rc, out, err = self._run_validator(yaml_text)
        self.assertEqual(rc, 0, out + err)
        self.assertEqual(out.strip(), "")
        self.assertEqual(err.strip(), "")

    def test_unknown_include_tag_is_parse_safe(self) -> None:
        yaml_text = """
title: Included Sections
path: included_sections
type: sections
sections: !include ../sections/core.yaml
"""
        rc, out, err = self._run_validator(yaml_text)
        self.assertEqual(rc, 0, out + err)
        self.assertIn("validator cannot inspect included sections content", out)
        self.assertEqual(err.strip(), "")


if __name__ == "__main__":
    unittest.main()

codex_skills/homeassistant-yaml-dry-verifier/README.md (new file, 88 lines)
@@ -0,0 +1,88 @@
<h1 align="center">
  <a name="logo" href="https://www.vCloudInfo.com/tag/iot"><img src="https://raw.githubusercontent.com/CCOSTAN/Home-AssistantConfig/master/x_profile.png" alt="Bear Stone Smart Home" width="200"></a>
  <br>
  Bear Stone Smart Home Documentation
</h1>
<h4 align="center">Be sure to :star: my configuration repo so you can keep up to date on any daily progress!</h4>

<div align="center">

[](https://x.com/ccostan)
[](https://www.youtube.com/vCloudInfo?sub_confirmation=1)
[](https://github.com/CCOSTAN) <br>
[](https://github.com/CCOSTAN/Home-AssistantConfig/blob/master/config/.HA_VERSION)
[](https://github.com/CCOSTAN/Home-AssistantConfig/commits/master)
[](https://github.com/CCOSTAN/Home-AssistantConfig/commits/master)

</div>

# HA YAML DRY Verifier (Codex Skill)

This directory contains the `homeassistant-yaml-dry-verifier` skill and the CLI used to detect repeated YAML structures in Home Assistant automations/scripts/packages.

### Quick navigation
- You are here: `codex_skills/homeassistant-yaml-dry-verifier/`
- [Repo overview](../../README.md) | [Codex skills](../README.md) | [Packages](../../config/packages/README.md) | [Scripts](../../config/script/README.md)

## What This Skill Does

- Detects repeated `trigger`, `condition`, `action`, and `sequence` blocks.
- Detects repeated entries inside those blocks.
- Detects duplicate entries within a single block (`INTRA`).
- Detects package-defined scripts called from multiple files (`CENTRAL_SCRIPT`).
- Collapses noisy ENTRY reports when they are already fully explained by an identical `FULL_BLOCK` finding.

## CLI Usage

Run on one file:

```bash
python codex_skills/homeassistant-yaml-dry-verifier/scripts/verify_ha_yaml_dry.py config/packages/bearclaw.yaml
```

Run on a broader scope:

```bash
python codex_skills/homeassistant-yaml-dry-verifier/scripts/verify_ha_yaml_dry.py config/packages config/script
```

Strict mode (non-zero exit if findings exist):

```bash
python codex_skills/homeassistant-yaml-dry-verifier/scripts/verify_ha_yaml_dry.py config/packages config/script --strict
```

## Output Model

The CLI prints:
- Scan summary counts
- `FULL_BLOCK` findings
- `ENTRY` findings
- `INTRA` findings
- `CENTRAL_SCRIPT` findings

Exit codes:
- `0`: success (or findings in non-strict mode)
- `1`: findings present in strict mode
- `2`: parse/path errors

## Notes

- This verifier intentionally keeps text output and a small CLI surface.
- It does not implement suppression files, severity scoring, JSON output, or diff-only mode.
- Use it as a fast pre-refactor signal and pair with Home Assistant config validation before restart/reload.

**All of my configuration files are tested against the most stable version of home-assistant.**

<a name="bottom" href="https://github.com/CCOSTAN/Home-AssistantConfig#logo"><img align="right" border="0" src="https://raw.githubusercontent.com/CCOSTAN/Home-AssistantConfig/master/config/www/custom_ui/floorplan/images/branding/up_arrow.png" width="25" ></a>

**Still have questions on my Config?** <br>
**Message me on X :** [](https://www.x.com/ccostan)

<p align="center">
  <a target="_blank" href="https://www.buymeacoffee.com/vCloudInfo"><img src="https://www.buymeacoffee.com/assets/img/BMC-btn-logo.svg" alt="Buy me a coffee"><span style="margin-left:5px">You can buy me a coffee</span></a>
  <br>
  <a href="https://eepurl.com/dmXFYz"><img align="center" border="0" src="https://raw.githubusercontent.com/CCOSTAN/Home-AssistantConfig/master/config/www/custom_ui/floorplan/images/branding/email_link.png" height="50" ></a><br>
  <a href="https://www.vcloudinfo.com/p/affiliate-disclosure.html">
    Affiliate Disclosure
  </a></p>

@@ -43,7 +43,7 @@ python codex_skills/homeassistant-yaml-dry-verifier/scripts/verify_ha_yaml_dry.p

3. Prioritize findings in this order:
   - `FULL_BLOCK`: repeated full trigger/condition/action/sequence blocks.
-   - `ENTRY`: repeated individual entries inside those blocks.
+   - `ENTRY`: repeated individual entries inside those blocks (excluding entries already fully covered by a `FULL_BLOCK` duplicate).
   - `INTRA`: duplicate entries inside a single block.
   - `CENTRAL_SCRIPT`: script is defined in `config/packages` but called from 2+ YAML files.

@@ -74,6 +74,7 @@ Always report:
- Parse errors (if any).
- Duplicate groups by kind (`trigger`, `condition`, `action`, `sequence`).
- Central script placement findings (`CENTRAL_SCRIPT`) with definition + caller files.
+- Script caller detection should include direct `service: script.<id>` and `script.turn_on`-style entity targeting when present.
- Concrete refactor recommendation per group.
- Resolution status for each finding (`resolved`, `deferred-with-blocker`).

@@ -261,10 +261,19 @@ def _block_keys_for_candidate(candidate: Candidate) -> dict[str, tuple[str, ...]

def _recommendation(block_label: str) -> str:
    if block_label in {"action", "sequence"}:
-        return "Move repeated logic to a shared script and call it with variables."
+        return (
+            "Move repeated logic to config/script/<script_id>.yaml and call it "
+            "via service: script.<script_id> with variables."
+        )
    if block_label == "condition":
-        return "Extract shared condition logic into helpers/template sensors or merge condition blocks."
-    return "Consolidate repeated trigger patterns where behavior is equivalent."
+        return (
+            "Extract shared condition logic into helper/template entities or "
+            "merge condition blocks when behavior is equivalent."
+        )
+    return (
+        "Consolidate equivalent trigger patterns and keep shared actions in a "
+        "single reusable script when possible."
+    )


def _render_occurrences(occurrences: list[Occurrence], max_rows: int = 6) -> str:
@@ -282,6 +291,22 @@ def _normalize_path(path: str) -> str:
    return path.replace("\\", "/").lower()


+def _entry_parent_block_path(block_path: str) -> str:
+    """Return the parent block path for entry occurrences (strip a trailing [idx])."""
+    return re.sub(r"\[\d+\]$", "", block_path)
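For illustration, the trailing-index strip behaves like this (standalone re-implementation, not an import of the module):

```python
import re


def entry_parent_block_path(block_path: str) -> str:
    # Drop a single trailing "[<n>]" segment, if present.
    return re.sub(r"\[\d+\]$", "", block_path)


print(entry_parent_block_path("automation[0].action[2]"))  # automation[0].action
print(entry_parent_block_path("automation[0].action"))     # automation[0].action (unchanged)
```

This is what lets an entry occurrence be compared against the block that contains it.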
+
+
+def _occurrence_key(
+    occurrence: Occurrence, *, treat_as_entry: bool = False
+) -> tuple[str, str, str]:
+    block_path = (
+        _entry_parent_block_path(occurrence.block_path)
+        if treat_as_entry
+        else occurrence.block_path
+    )
+    return (occurrence.file_path, occurrence.candidate_path, block_path)
+
+
def _infer_script_id(candidate: Candidate) -> str | None:
    if candidate.kind != "script":
        return None
@@ -298,14 +323,41 @@ def _infer_script_id(candidate: Candidate) -> str | None:


def _collect_script_service_calls(node: Any, script_ids: set[str]) -> None:
    """Collect called script IDs from common HA service invocation patterns."""
+    script_domain_meta_services = {"turn_on", "toggle", "reload", "stop"}
+
+    def _add_script_entity_ids(value: Any) -> None:
+        if isinstance(value, str):
+            if value.startswith("script."):
+                entity_script_id = value.split(".", 1)[1].strip()
+                if entity_script_id:
+                    script_ids.add(entity_script_id)
+            return
+        if isinstance(value, list):
+            for item in value:
+                _add_script_entity_ids(item)
+
    if isinstance(node, dict):
-        for key, value in node.items():
-            if key in {"service", "action"} and isinstance(value, str):
-                service_name = value.strip()
-                if service_name.startswith("script."):
-                    script_id = service_name.split(".", 1)[1].strip()
-                    if script_id:
-                        script_ids.add(script_id)
+        service_name_raw = node.get("service")
+        action_name_raw = node.get("action")
+        service_name = None
+        if isinstance(service_name_raw, str):
+            service_name = service_name_raw.strip()
+        elif isinstance(action_name_raw, str):
+            service_name = action_name_raw.strip()
+
+        if service_name and service_name.startswith("script."):
+            tail = service_name.split(".", 1)[1].strip()
+            if tail and tail not in script_domain_meta_services:
+                script_ids.add(tail)
+            else:
+                _add_script_entity_ids(node.get("entity_id"))
+                for key in ("target", "data", "service_data"):
+                    container = node.get(key)
+                    if isinstance(container, dict):
+                        _add_script_entity_ids(container.get("entity_id"))

        for value in node.values():
            _collect_script_service_calls(value, script_ids)
        return
    if isinstance(node, list):
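A simplified standalone sketch of the two detection paths (direct `service: script.<id>` calls and `script.turn_on`-style entity targeting); the real implementation also scans `data`/`service_data` containers and top-level `entity_id` values:

```python
def collect_script_ids(node, script_ids):
    """Walk nested dicts/lists and collect called script IDs (simplified)."""
    meta = {"turn_on", "toggle", "reload", "stop"}
    if isinstance(node, dict):
        name = node.get("service") or node.get("action")
        if isinstance(name, str) and name.startswith("script."):
            tail = name.split(".", 1)[1]
            if tail not in meta:
                script_ids.add(tail)  # direct call: service: script.<id>
            else:
                # meta service like script.turn_on: look at the targeted entity
                target = node.get("target")
                entity = target.get("entity_id") if isinstance(target, dict) else None
                if isinstance(entity, str) and entity.startswith("script."):
                    script_ids.add(entity.split(".", 1)[1])
        for value in node.values():
            collect_script_ids(value, script_ids)
    elif isinstance(node, list):
        for item in node:
            collect_script_ids(item, script_ids)


found = set()
collect_script_ids(
    [
        {"service": "script.my_shared_script"},
        {"service": "script.turn_on", "target": {"entity_id": "script.other_script"}},
    ],
    found,
)
print(sorted(found))  # ['my_shared_script', 'other_script']
```

Both invocation styles resolve to the same script ID, which is what the `CENTRAL_SCRIPT` caller count relies on.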
@@ -401,6 +453,26 @@ def main(argv: list[str]) -> int:

    full_groups = _filter_groups(full_index)
    entry_groups = _filter_groups(entry_index)

+    # Drop ENTRY groups that are fully subsumed by an identical FULL_BLOCK group.
+    full_group_member_sets: dict[tuple[str, str], list[set[tuple[str, str, str]]]] = defaultdict(list)
+    for (kind, block_label, _), occurrences in full_groups:
+        full_group_member_sets[(kind, block_label)].append(
+            {_occurrence_key(occ) for occ in occurrences}
+        )
+
+    filtered_entry_groups: list[tuple[tuple[str, str, str], list[Occurrence]]] = []
+    for entry_group_key, entry_occurrences in entry_groups:
+        kind, block_label, _ = entry_group_key
+        entry_member_set = {
+            _occurrence_key(occ, treat_as_entry=True) for occ in entry_occurrences
+        }
+        full_sets = full_group_member_sets.get((kind, block_label), [])
+        is_subsumed = any(entry_member_set.issubset(full_set) for full_set in full_sets)
+        if not is_subsumed:
+            filtered_entry_groups.append((entry_group_key, entry_occurrences))
+
+    entry_groups = filtered_entry_groups
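The subsumption check is plain set algebra over `(file, candidate, block-path)` keys; a toy illustration with made-up values:

```python
# Occurrence keys for a FULL_BLOCK duplicate group (illustrative values).
full_set = {
    ("automations.yaml", "automation[0]", "action"),
    ("automations.yaml", "automation[1]", "action"),
}

# An ENTRY group whose occurrences all fall inside that same pair of blocks.
entry_set = {
    ("automations.yaml", "automation[0]", "action"),
    ("automations.yaml", "automation[1]", "action"),
}

# Subsumed: every entry occurrence is already explained by the full-block finding.
print(entry_set.issubset(full_set))  # True

# An occurrence in a third block keeps the ENTRY group alive.
entry_set.add(("automations.yaml", "automation[2]", "action"))
print(entry_set.issubset(full_set))  # False
```

This is why a pair of fully identical action blocks produces one `FULL_BLOCK` finding and zero `ENTRY` findings.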
    intra_duplicate_notes = sorted(set(intra_duplicate_notes))
    script_definitions_by_id: dict[str, set[str]] = defaultdict(set)

@@ -0,0 +1,302 @@
import contextlib
import importlib.util
import io
import re
import sys
import tempfile
import unittest
from pathlib import Path


MODULE_PATH = (
    Path(__file__).resolve().parents[1] / "scripts" / "verify_ha_yaml_dry.py"
)
MODULE_NAME = "verify_ha_yaml_dry"
SPEC = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH)
if SPEC is None or SPEC.loader is None:
    raise RuntimeError(f"Unable to load module spec from {MODULE_PATH}")
MODULE = importlib.util.module_from_spec(SPEC)
sys.modules[MODULE_NAME] = MODULE
SPEC.loader.exec_module(MODULE)


class VerifyHaYamlDryTests(unittest.TestCase):
    def _run_verifier(
        self,
        files: dict[str, str],
        *extra_args: str,
        scan_subpath: str = ".",
    ) -> tuple[int, str, str]:
        with tempfile.TemporaryDirectory() as tmpdir:
            root = Path(tmpdir)
            for rel_path, content in files.items():
                file_path = root / rel_path
                file_path.parent.mkdir(parents=True, exist_ok=True)
                file_path.write_text(content, encoding="utf-8")

            scan_path = root / scan_subpath
            stdout = io.StringIO()
            stderr = io.StringIO()
            with contextlib.redirect_stdout(stdout), contextlib.redirect_stderr(stderr):
                rc = MODULE.main([str(scan_path), *extra_args])
            return rc, stdout.getvalue(), stderr.getvalue()

    @staticmethod
    def _full_block_group_order(output: str) -> list[str]:
        marker = "\nFULL_BLOCK findings:\n"
        if marker not in output:
            return []
        section = output.split(marker, 1)[1]
        for stop in ("\nENTRY findings:\n", "\nINTRA findings:\n", "\nCENTRAL_SCRIPT findings:\n"):
            if stop in section:
                section = section.split(stop, 1)[0]
                break

        groups: list[str] = []
        for line in section.splitlines():
            text = line.strip()
            match = re.match(r"^\d+\.\s+([a-z]+\.[a-z_]+)\s+repeated\s+\d+\s+times$", text)
            if match:
                groups.append(match.group(1))
        return groups

    def test_full_block_detection_with_fixture(self) -> None:
        files = {
            "automations.yaml": """
- alias: A1
  trigger:
    - platform: state
      entity_id: binary_sensor.one
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
- alias: A2
  trigger:
    - platform: state
      entity_id: binary_sensor.two
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
"""
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 0, out + err)
        self.assertIn("Duplicate full-block groups: 1", out)
        self.assertIn("FULL_BLOCK findings:", out)
        self.assertIn("automation.action repeated 2 times", out)

    def test_entry_detection_with_fixture(self) -> None:
        files = {
            "automations.yaml": """
- alias: A1
  trigger:
    - platform: state
      entity_id: binary_sensor.one
      to: "on"
  action:
    - service: script.shared_handler
- alias: A2
  trigger:
    - platform: state
      entity_id: binary_sensor.two
      to: "on"
  action:
    - service: script.shared_handler
    - delay: "00:00:01"
"""
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 0, out + err)
        self.assertIn("Duplicate entry groups: 1", out)
        self.assertIn("ENTRY findings:", out)
        self.assertIn("automation.action entry repeated 2 times", out)

    def test_intra_detection_with_fixture(self) -> None:
        files = {
            "automations.yaml": """
- alias: Repeat Inside Block
  trigger:
    - platform: state
      entity_id: binary_sensor.one
      to: "on"
  action:
    - service: light.turn_off
      target:
        entity_id: light.den
    - service: light.turn_off
      target:
        entity_id: light.den
"""
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 0, out + err)
        self.assertIn("INTRA findings:", out)
        self.assertIn("INTRA automation.action: Repeat Inside Block has 2 duplicated entries", out)

    def test_central_script_detection_with_package_definition_and_multi_caller(self) -> None:
        files = {
            "config/packages/shared_scripts.yaml": """
script:
  my_shared_script:
    alias: Shared Script
    sequence:
      - service: logbook.log
        data:
          name: Shared
          message: Hello
""",
            "config/automations/caller_one.yaml": """
automation:
  - alias: Caller One
    trigger:
      - platform: state
        entity_id: binary_sensor.one
        to: "on"
    action:
      - service: script.my_shared_script
""",
            "config/automations/caller_two.yaml": """
automation:
  - alias: Caller Two
    trigger:
      - platform: state
        entity_id: binary_sensor.two
        to: "on"
    action:
      - service: script.turn_on
        target:
          entity_id: script.my_shared_script
""",
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 0, out + err)
        self.assertIn("Central-script findings: 1", out)
        self.assertIn("script.my_shared_script is package-defined and called from 2 files", out)
        self.assertIn("suggestion: Move definition to config/script/my_shared_script.yaml", out)

    def test_subsumed_entry_groups_are_collapsed(self) -> None:
        files = {
            "automations.yaml": """
- alias: A1
  trigger:
    - platform: state
      entity_id: binary_sensor.one
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
- alias: A2
  trigger:
    - platform: state
      entity_id: binary_sensor.two
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
"""
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 0, out + err)
        self.assertIn("Duplicate full-block groups: 1", out)
        self.assertIn("Duplicate entry groups: 0", out)

    def test_full_block_findings_order_is_deterministic(self) -> None:
        files = {
            "automations.yaml": """
- alias: A1
  trigger:
    - platform: state
      entity_id: binary_sensor.same
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
- alias: A2
  trigger:
    - platform: state
      entity_id: binary_sensor.same
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
""",
            "scripts.yaml": """
script:
  script_one:
    sequence:
      - service: light.turn_off
        target:
          entity_id: light.kitchen
  script_two:
    sequence:
      - service: light.turn_off
        target:
          entity_id: light.kitchen
""",
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 0, out + err)
        order = self._full_block_group_order(out)
        self.assertGreaterEqual(len(order), 3, out)
        self.assertEqual(order[:3], ["automation.action", "automation.trigger", "script.sequence"])

    def test_exit_codes_for_strict_and_non_strict_modes(self) -> None:
        files = {
            "automations.yaml": """
- alias: A1
  trigger:
    - platform: state
      entity_id: binary_sensor.one
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
- alias: A2
  trigger:
    - platform: state
      entity_id: binary_sensor.two
      to: "on"
  action:
    - service: light.turn_on
      target:
        entity_id: light.kitchen
"""
        }
        rc_non_strict, _, _ = self._run_verifier(files)
        rc_strict, _, _ = self._run_verifier(files, "--strict")
        self.assertEqual(rc_non_strict, 0)
        self.assertEqual(rc_strict, 1)

    def test_parse_error_path_returns_exit_code_two(self) -> None:
        files = {
            "good.yaml": """
automation:
  - alias: Good
    trigger:
      - platform: state
        entity_id: binary_sensor.one
        to: "on"
    action:
      - service: light.turn_on
        target:
          entity_id: light.kitchen
""",
            "bad.yaml": "automation: [\n",
        }
        rc, out, err = self._run_verifier(files)
        self.assertEqual(rc, 2, out + err)
        self.assertIn("Parse errors:", out)
        self.assertIn("bad.yaml", out)


if __name__ == "__main__":
    unittest.main()