mirror of
https://github.com/CCOSTAN/Home-AssistantConfig.git
synced 2026-04-27 18:52:11 +00:00
Enhance Home Assistant configuration with new sensors, state classes, and infrastructure monitoring
- Updated proxmox.yaml to include state_class for disk usage sensors and improved availability checks.
- Modified space.yaml to add state_class for the Earth distance sensor.
- Enhanced stats.yaml with state_class for various command line sensors and template sensors to support long-term trend rollups.
- Updated recorder.yaml to refine notes and exclude additional MariaDB snapshot sensors from recording.
- Revised README.md in scripts to correct package paths and add a new monthly log hygiene review automation.
- Introduced infrastructure.yaml for comprehensive observability and monitoring of WAN, DNS, and website states, including automated repairs for uptime breaches.
- Added mariadb_snapshot.py script to collect telemetry snapshots for MariaDB, supporting Home Assistant command line sensors.
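The commit message says mariadb_snapshot.py feeds Home Assistant command line sensors, which read one JSON object from stdout per run. The script itself is not shown in this diff; the sketch below is a hypothetical rendering of that stdout contract for the `live` mode, with the payload keys taken from the sensor definitions later in the diff and the `SHOW GLOBAL STATUS` input shape assumed.

```python
import json

# Hypothetical sketch of the stdout contract mariadb_snapshot.py must honor:
# the command_line sensor takes its state from one top-level key and maps the
# remaining keys via json_attributes.
def build_live_payload(global_status):
    """Shape a 'live' snapshot from SHOW GLOBAL STATUS-style key/value pairs."""
    uptime = int(global_status.get("Uptime", 0))
    queries = int(global_status.get("Queries", 0))
    return {
        "status": "running" if uptime > 0 else "stopped",
        "performance": round(queries / uptime) if uptime else 0,  # queries/sec
        "connections": int(global_status.get("Threads_connected", 0)),
        "questions": int(global_status.get("Questions", 0)),
        "uptime_seconds": uptime,
    }

if __name__ == "__main__":
    # Stand-in values; the real script queries MariaDB instead.
    sample = {"Uptime": "3600", "Queries": "720000",
              "Threads_connected": "4", "Questions": "700000"}
    print(json.dumps(build_live_payload(sample)))
```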
@@ -50,10 +50,10 @@ Live collection of plug-and-play Home Assistant packages. Each YAML file in this
 | [github_watched_repo_scout.yaml](github_watched_repo_scout.yaml) | Nightly Joanna dispatch that reviews unread notifications from watched GitHub repos, recommends HA-config ideas, refreshes strong-candidate issues, and marks processed watched-repo notifications read. | `automation.github_watched_repo_scout_nightly`, `script.joanna_dispatch`, `script.send_to_logbook` |
 | [proxmox.yaml](proxmox.yaml) | Proxmox runtime and disk pressure monitoring with Repairs + Joanna dispatch for sustained node degradations, plus nightly Frigate reboot. | `binary_sensor.proxmox*_runtime_healthy`, `sensor.proxmox*_disk_used_percentage`, `repairs.create`, `script.joanna_dispatch`, `button.qemu_docker2_101_reboot` |
 | [synology_dsm.yaml](synology_dsm.yaml) | Synology DSM integration health normalization for Carlo-NAS01 and Carlo-NVR, with Repairs + Joanna dispatch on sustained integration, security, or storage problems. | `binary_sensor.carlo_*_synology_problem`, `sensor.carlo_*_synology_problem_summary`, `repairs.create`, `script.joanna_dispatch` |
-| [infrastructure_observability.yaml](infrastructure_observability.yaml) | Normalized WAN/DNS/backup/domain/cert health + website uptime/latency SLO signals for Infrastructure dashboards. | `binary_sensor.infra_website_uptime_slo_breach`, `binary_sensor.infra_website_latency_degraded`, `binary_sensor.infra_*` |
+| [infrastructure.yaml](infrastructure.yaml) | Normalized WAN/DNS/backup/domain/cert health + website uptime/latency SLO signals for Infrastructure dashboards, plus nightly backup verification and monthly Joanna HA log hygiene review with GitHub issue follow-up. | `binary_sensor.infra_website_uptime_slo_breach`, `binary_sensor.infra_website_latency_degraded`, `automation.infra_backup_nightly_verification`, `automation.infra_monthly_log_hygiene_review`, `script.joanna_dispatch` |
 | [onenote_indexer.yaml](onenote_indexer.yaml) | OneNote indexer health/status monitoring for Joanna, failure-repair automation, and a daily duplicate-delete maintenance request. | `sensor.onenote_indexer_last_job_status`, `binary_sensor.onenote_indexer_last_job_successful` |
 | [mqtt_status.yaml](mqtt_status.yaml) | Command-line MQTT broker reachability probe with Spook Repairs escalation and Joanna troubleshooting dispatch on outage. | `binary_sensor.mqtt_status_raw`, `binary_sensor.mqtt_broker_problem`, `repairs.create`, `rest_command.bearclaw_command` |
-| [mariadb.yaml](mariadb.yaml) | MariaDB recorder health and capacity SQL sensors. | `sensor.mariadb_status`, `sensor.database_size` |
+| [mariadb.yaml](mariadb.yaml) | MariaDB recorder health and capacity snapshots with hourly live metrics, weekly admin/recorder polling, and stats-ready numeric sensors. | `sensor.mariadb_status`, `sensor.database_size` |
 | [processmonitor.yaml](processmonitor.yaml) | Root filesystem disk-pressure monitoring with immediate digest/logbook notes at 80%, Joanna review after 10 minutes above 80%, and delayed phone alerts only if the issue stays unresolved after dispatch. | `sensor.disk_use_percent`, `repairs.create`, `script.joanna_dispatch`, `tts.clear_cache` |
 | [tugtainer_updates.yaml](tugtainer_updates.yaml) | Tugtainer container update notifications via webhook + persistent alerts, plus event-based Joanna dispatch when reports include `### Available:` (24h cooldown via `mode: single` + delay, no new helpers). | `persistent_notification.create`, `event: tugtainer_available_detected`, `script.joanna_dispatch`, `input_datetime.tugtainer_last_update` |
 | [bearclaw.yaml](bearclaw.yaml) | Joanna/BearClaw bridge automations that forward Telegram commands to codex_appliance, include LLM-first routing context for freeform text, relay replies back, ingest `/api/bearclaw/status` telemetry, and expose dispatch plus QMD/memory-index sensors for Infrastructure dashboards. | `rest_command.bearclaw_*`, `sensor.bearclaw_status_telemetry`, `sensor.joanna_*`, `binary_sensor.joanna_*`, `automation.bearclaw_*`, `script.send_to_logbook` |
@@ -55,6 +55,7 @@ mqtt:
       unique_id: garadget_large_garage_door_brightness
       state_topic: "garadget/GLarge/status"
       unit_of_measurement: '%'
+      state_class: measurement
       value_template: '{{ value_json.bright }}'

   - name: "Small Garage Door Since"
@@ -66,6 +67,7 @@ mqtt:
       unique_id: garadget_small_garage_door_brightness
       state_topic: "garadget/GSmall/status"
       unit_of_measurement: '%'
+      state_class: measurement
       value_template: '{{ value_json.bright }}'

 input_text:
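The Garadget status topic publishes one JSON document per update and the sensor's `value_template` picks out the `bright` field. A minimal Python equivalent of that extraction, with the sample payload shape assumed from the template:

```python
import json

# Equivalent of value_template: '{{ value_json.bright }}' -- parse the MQTT
# status payload and return its brightness field.
def brightness_from_status(payload: str) -> int:
    return int(json.loads(payload)["bright"])

print(brightness_from_status('{"status": "closed", "bright": 83}'))
```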
@@ -3,13 +3,15 @@
 # For more info visit https://www.vcloudinfo.com/click-here
 # Original Repo : https://github.com/CCOSTAN/Home-AssistantConfig
 # -------------------------------------------------------------------
-# Infrastructure Observability - Normalized infra monitoring signals
-# WAN/DNS/website/domain/cert state normalized for dashboards.
+# Infrastructure - Observability and Joanna review workflows
+# WAN/DNS/website/domain/cert state normalized for dashboards, plus scheduled infrastructure reviews.
 # -------------------------------------------------------------------
 # Related Issue: 1584
 # Notes: Home dashboard consumes `infra_*` entities for exceptions-only alerts.
 # Notes: Domain warning threshold is <30 days; critical threshold is <14 days.
 # Notes: Nightly Duplicati verification is performed by codex_appliance against the Duplicati API because HA backup entities are not available.
+# Notes: Monthly HA log hygiene review requests Telegram + GitHub issue follow-up only; Joanna must wait for approval before any changes.
+# Notes: Numeric WAN telemetry exposes state_class so recorder can keep long-term statistics.
 ######################################################################

 command_line:
@@ -22,6 +24,7 @@ command_line:
          END {if (!found) print "unknown"}'
       scan_interval: 300
       unit_of_measurement: "%"
+      state_class: measurement
       value_template: "{{ (value | regex_replace('[^0-9.]', '')) or 'unknown' }}"

   - sensor:
@@ -33,6 +36,7 @@ command_line:
          END {if (!found) print "unknown"}'
       scan_interval: 300
       unit_of_measurement: "ms"
+      state_class: measurement
       value_template: "{{ (value | regex_replace('[^0-9.]', '')) or 'unknown' }}"

   - sensor:
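Both WAN telemetry sensors scrub their raw command output with the same template: drop every character that is not a digit or dot, and fall back to `unknown` when nothing numeric survives. The same logic in Python, as a quick sanity check of the template's behavior:

```python
import re

# Equivalent of: "{{ (value | regex_replace('[^0-9.]', '')) or 'unknown' }}"
# Strip non-numeric characters; an empty result becomes 'unknown'.
def scrub_numeric(raw: str) -> str:
    cleaned = re.sub(r"[^0-9.]", "", raw)
    return cleaned or "unknown"

print(scrub_numeric("latency: 42.5 ms"))  # digits and dot survive
```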
@@ -411,3 +415,42 @@ automation:
             action_taken,
             verification,
             next_action_required=true/false.

+  - alias: "Infrastructure - Monthly HA Log Hygiene Review"
+    id: infra_monthly_log_hygiene_review
+    description: "Ask Joanna monthly to review Home Assistant logs, create a GitHub issue with noisy entries, and send Telegram recommendations only."
+    mode: single
+    trigger:
+      - platform: time
+        at: "03:20:00"
+    condition:
+      - condition: template
+        value_template: "{{ now().day == 1 }}"
+    variables:
+      trigger_context: "HA automation infra_monthly_log_hygiene_review (Infrastructure - Monthly HA Log Hygiene Review)"
+    action:
+      - service: script.joanna_dispatch
+        data:
+          trigger_context: "{{ trigger_context }}"
+          source: "home_assistant_automation.infra_monthly_log_hygiene_review"
+          summary: "Monthly Home Assistant log hygiene review with GitHub issue and Telegram follow-up"
+          diagnostics: >-
+            schedule=day_1@03:20:00,
+            review_scope=available_home_assistant_logs,
+            desired_outputs=telegram_follow_up+github_issue,
+            github_repo=CCOSTAN/Home-AssistantConfig,
+            approval_required_before_changes=true
+          request: >-
+            Review the available Home Assistant log files from the last month and identify noisy,
+            low-value entries that could be safely suppressed, filtered, slowed, deduplicated, or
+            retired. Focus on practical Home Assistant-side changes such as recorder exclusions,
+            logger filtering, scan-interval reductions, entity retirement, or automation de-noising.
+            Create or refresh a GitHub issue in CCOSTAN/Home-AssistantConfig that captures the noisy
+            entries, estimated frequency, why each candidate is low-value, and the exact repo files
+            or integrations likely to change. Then send Carlo a concise Telegram summary with the top
+            recommendations and the GitHub issue number or link. Do not make any changes from this
+            review. Wait for explicit follow-up approval first.
+      - service: script.send_to_logbook
+        data:
+          topic: "HOME ASSISTANT"
+          message: "Joanna monthly Home Assistant log hygiene review dispatched; Telegram summary and GitHub issue requested."
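Home Assistant has no native monthly trigger, so the automation above approximates one: a daily time trigger at 03:20 gated by a template condition that only passes on the first of the month. The same gating expressed in Python:

```python
from datetime import datetime

# Mirror of: trigger at "03:20:00" + condition "{{ now().day == 1 }}".
# Fires once per month, on day 1 at 03:20.
def should_run(now: datetime) -> bool:
    return (now.hour, now.minute) == (3, 20) and now.day == 1
```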
@@ -9,6 +9,7 @@
 # Notes: Webhook id is bearclaw_maintenance_log_v1 (Joanna -> HA contract).
 # Notes: Duplicate event_id values are ignored to prevent double-count totals.
 # Notes: Recent event history string format is "when|amount|note||...".
+# Notes: Numeric refill interval sensors expose state_class for long-term trend rollups.
 ######################################################################

 input_number:
@@ -64,17 +65,21 @@ template:
       - name: "Water Softener Salt Days Since Last Add"
         unique_id: water_softener_salt_days_since_last_add
         unit_of_measurement: d
-        state: >-
-          {% set raw = states('input_datetime.water_softener_salt_last_occurred_at') %}
-          {% if raw in ['unknown', 'unavailable', 'none', ''] %}
-            unknown
-          {% else %}
-            {% set event_ts = as_timestamp(as_local(as_datetime(raw)), default=none) %}
-            {% if event_ts is none %}
-              unknown
-            {% else %}
-              {{ [((as_timestamp(now()) - event_ts) / 86400), 0] | max | round(1) }}
-            {% endif %}
-          {% endif %}
+        state_class: measurement
+        availability: >-
+          {% set raw = states('input_datetime.water_softener_salt_last_occurred_at') %}
+          {% if raw in ['unknown', 'unavailable', 'none', ''] %}
+            false
+          {% else %}
+            {{ as_timestamp(as_local(as_datetime(raw)), default=none) is not none }}
+          {% endif %}
+        state: >-
+          {% set raw = states('input_datetime.water_softener_salt_last_occurred_at') %}
+          {% set event_ts = as_timestamp(as_local(as_datetime(raw)), default=none) %}
+          {% if event_ts is not none %}
+            {{ [((as_timestamp(now()) - event_ts) / 86400), 0] | max | round(1) }}
+          {% else %}
+            0
+          {% endif %}

       - name: "Water Softener Salt Last Summary"
@@ -98,6 +103,7 @@ template:
       - name: "Water Softener Salt Average Days Between Refills"
         unique_id: water_softener_salt_average_days_between_refills
         unit_of_measurement: d
+        state_class: measurement
         state: >-
           {% set raw = states('input_text.water_softener_salt_recent_events') %}
           {% if raw in ['unknown', 'unavailable', 'none', ''] %}
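The days-since-last-add template boils down to one calculation: seconds since the stored refill timestamp, converted to days, clamped at zero, rounded to one decimal. The same math in Python:

```python
from datetime import datetime

# Mirror of: {{ [((as_timestamp(now()) - event_ts) / 86400), 0] | max | round(1) }}
# Clamping at zero guards against a refill timestamp set slightly in the future.
def days_since(event: datetime, now: datetime) -> float:
    delta_days = (now - event).total_seconds() / 86400
    return round(max(delta_days, 0), 1)
```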
@@ -3,168 +3,224 @@
 # For more info visit https://www.vcloudinfo.com/click-here
 # Original Repo : https://github.com/CCOSTAN/Home-AssistantConfig
 # -------------------------------------------------------------------
-# MariaDB Monitoring - SQL sensor bundle for DB health
-# Recorder-backed metrics for MariaDB performance and capacity checks.
+# MariaDB Monitoring - Snapshot-driven DB health sensors
+# Recorder-backed metrics for MariaDB health, capacity, and tuning.
 # -------------------------------------------------------------------
-# Notes: Uses SQL integration against recorder_db_url.
-# Notes: COUNT(*) queries run every 6h; increase scan_interval or disable if slow.
+# Notes: Uses command_line snapshot helpers so expensive MariaDB queries are not forced to run every 30 seconds by the SQL integration.
+# Notes: Live metrics poll hourly; recorder/admin snapshots poll weekly.
+# Notes: Numeric template sensors expose state_class where useful so HA can keep long-term statistics efficiently.
 ######################################################################

-sql:
-  - name: "MariaDB Status"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT 'running' as status
-      FROM information_schema.GLOBAL_STATUS
-      WHERE VARIABLE_NAME = 'Uptime'
-      AND CAST(VARIABLE_VALUE AS UNSIGNED) > 0;
-    column: "status"
-    value_template: "{{ value if value else 'stopped' }}"
+command_line:
+  - sensor:
+      name: MariaDB Live Snapshot
+      unique_id: mariadb_live_snapshot
+      command: "python3 /config/shell_scripts/mariadb_snapshot.py live"
+      scan_interval: 3600
+      command_timeout: 30
+      json_attributes:
+        - performance
+        - connections
+        - questions
+        - uptime_seconds
+      value_template: "{{ value_json.status | default('unknown') }}"

-  - name: "MariaDB Version"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT @@version as version;
-    column: "version"
+  - sensor:
+      name: MariaDB Recorder Snapshot
+      unique_id: mariadb_recorder_snapshot
+      command: "python3 /config/shell_scripts/mariadb_snapshot.py recorder"
+      scan_interval: 604800
+      command_timeout: 180
+      json_attributes:
+        - database_tables_count
+        - database_oldest_record
+        - database_total_records
+        - database_records_per_day
+      value_template: "{{ value_json.database_size_mib | default('unknown') }}"

-  - name: "MariaDB Performance"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT ROUND(
-        (SELECT VARIABLE_VALUE
-         FROM information_schema.GLOBAL_STATUS
-         WHERE VARIABLE_NAME = 'Queries') /
-        (SELECT VARIABLE_VALUE
-         FROM information_schema.GLOBAL_STATUS
-         WHERE VARIABLE_NAME = 'Uptime')
-      ) as performance;
-    column: "performance"
-    unit_of_measurement: "q/s"
+  - sensor:
+      name: MariaDB Admin Snapshot
+      unique_id: mariadb_admin_snapshot
+      command: "python3 /config/shell_scripts/mariadb_snapshot.py admin"
+      scan_interval: 604800
+      command_timeout: 30
+      json_attributes:
+        - version
+        - max_connections
+        - log_file_size_mib
+        - tmp_table_size_mib
+        - io_capacity
+        - io_threads_read
+        - io_threads_write
+        - table_cache
+        - sort_buffer_mib
+        - read_buffer_mib
+        - join_buffer_mib
+      value_template: "{{ value_json.buffer_pool_gib | default('unknown') }}"

-  - name: "Database Size"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) as size
-      FROM information_schema.tables
-      WHERE table_schema = 'homeassistant';
-    column: "size"
-    unit_of_measurement: "MB"
-    value_template: "{{ value | float(0) }}"
+template:
+  - sensor:
+      - name: "MariaDB Status"
+        unique_id: mariadb_status
+        state: >-
+          {% set value = states('sensor.mariadb_live_snapshot') %}
+          {{ value if value not in ['unknown', 'unavailable', 'none', ''] else 'unknown' }}

-  - name: "Database Tables Count"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT COUNT(*) as count
-      FROM information_schema.tables
-      WHERE table_schema = 'homeassistant';
-    column: "count"
-    unit_of_measurement: "tables"
+      - name: "MariaDB Version"
+        unique_id: mariadb_version
+        state: >-
+          {% set value = state_attr('sensor.mariadb_admin_snapshot', 'version') %}
+          {{ value if value is not none else 'unknown' }}

-  - name: "Database Oldest Record"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT DATE_FORMAT(FROM_UNIXTIME(MIN(last_updated_ts)), '%Y-%m-%d') as oldest
-      FROM states;
-    column: "oldest"
+      - name: "MariaDB Performance"
+        unique_id: mariadb_performance
+        unit_of_measurement: "q/s"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'performance') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'performance') | float(0) }}

-  - name: "Database Total Records"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT COUNT(*) as count
-      FROM states;
-    column: "count"
-    unit_of_measurement: "records"
+      - name: "Database Size"
+        unique_id: database_size
+        unit_of_measurement: "MiB"
+        state_class: measurement
+        availability: >-
+          {{ states('sensor.mariadb_recorder_snapshot') not in ['unknown', 'unavailable', 'none', ''] }}
+        state: >-
+          {{ states('sensor.mariadb_recorder_snapshot') | float(0) }}

-  - name: "Database Records Per Day"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT ROUND(
-        COUNT(*) /
-        GREATEST(DATEDIFF(NOW(), FROM_UNIXTIME(MIN(last_updated_ts))), 1),
-        0
-      ) as avg
-      FROM states;
-    column: "avg"
-    unit_of_measurement: "records/day"
+      - name: "Database Tables Count"
+        unique_id: database_tables_count
+        unit_of_measurement: "tables"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_recorder_snapshot', 'database_tables_count') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_recorder_snapshot', 'database_tables_count') | int(0) }}

-  - name: "MariaDB Uptime"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT VARIABLE_VALUE as value
-      FROM information_schema.GLOBAL_STATUS
-      WHERE VARIABLE_NAME = 'Uptime';
-    column: "value"
-    unit_of_measurement: "seconds"
+      - name: "Database Oldest Record"
+        unique_id: database_oldest_record
+        state: >-
+          {% set value = state_attr('sensor.mariadb_recorder_snapshot', 'database_oldest_record') %}
+          {{ value if value is not none else 'unknown' }}

-  - name: "MariaDB Connections"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT VARIABLE_VALUE as value
-      FROM information_schema.GLOBAL_STATUS
-      WHERE VARIABLE_NAME = 'Threads_connected';
-    column: "value"
-    unit_of_measurement: "connections"
+      - name: "Database Total Records"
+        unique_id: database_total_records
+        unit_of_measurement: "records"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_recorder_snapshot', 'database_total_records') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_recorder_snapshot', 'database_total_records') | int(0) }}

-  - name: "MariaDB Questions"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT VARIABLE_VALUE as value
-      FROM information_schema.GLOBAL_STATUS
-      WHERE VARIABLE_NAME = 'Questions';
-    column: "value"
-    unit_of_measurement: "queries"
+      - name: "Database Records Per Day"
+        unique_id: database_records_per_day
+        unit_of_measurement: "records/day"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_recorder_snapshot', 'database_records_per_day') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_recorder_snapshot', 'database_records_per_day') | float(0) }}

-  - name: "MariaDB Buffer Pool Size"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT CONCAT(ROUND(@@innodb_buffer_pool_size / 1024 / 1024 / 1024, 1), ' GB') as value;
-    column: "value"
+      - name: "MariaDB Uptime"
+        unique_id: mariadb_uptime
+        unit_of_measurement: "s"
+        availability: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'uptime_seconds') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'uptime_seconds') | int(0) }}

-  - name: "MariaDB Max Connections"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT @@max_connections as value;
-    column: "value"
+      - name: "MariaDB Connections"
+        unique_id: mariadb_connections
+        unit_of_measurement: "connections"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'connections') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'connections') | int(0) }}

-  - name: "MariaDB Log File Size"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT CONCAT(ROUND(@@innodb_log_file_size / 1024 / 1024, 0), ' MB') as value;
-    column: "value"
+      - name: "MariaDB Questions"
+        unique_id: mariadb_questions
+        unit_of_measurement: "queries"
+        state_class: total_increasing
+        availability: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'questions') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_live_snapshot', 'questions') | int(0) }}

-  - name: "MariaDB Tmp Table Size"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT CONCAT(ROUND(@@tmp_table_size / 1024 / 1024, 0), ' MB') as value;
-    column: "value"
+      - name: "MariaDB Buffer Pool Size"
+        unique_id: mariadb_buffer_pool_size
+        unit_of_measurement: "GiB"
+        state_class: measurement
+        availability: >-
+          {{ states('sensor.mariadb_admin_snapshot') not in ['unknown', 'unavailable', 'none', ''] }}
+        state: >-
+          {{ states('sensor.mariadb_admin_snapshot') | float(0) }}

-  - name: "MariaDB IO Capacity"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT @@innodb_io_capacity as value;
-    column: "value"
+      - name: "MariaDB Max Connections"
+        unique_id: mariadb_max_connections
+        unit_of_measurement: "connections"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'max_connections') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'max_connections') | int(0) }}

-  - name: "MariaDB IO Threads"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT CONCAT(
-        'Read: ', @@innodb_read_io_threads,
-        ', Write: ', @@innodb_write_io_threads
-      ) as value;
-    column: "value"
+      - name: "MariaDB Log File Size"
+        unique_id: mariadb_log_file_size
+        unit_of_measurement: "MiB"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'log_file_size_mib') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'log_file_size_mib') | float(0) }}

-  - name: "MariaDB Table Cache"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT @@table_open_cache as value;
-    column: "value"
+      - name: "MariaDB Tmp Table Size"
+        unique_id: mariadb_tmp_table_size
+        unit_of_measurement: "MiB"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'tmp_table_size_mib') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'tmp_table_size_mib') | float(0) }}

-  - name: "MariaDB Buffer Sizes"
-    db_url: !secret recorder_db_url
-    query: >-
-      SELECT CONCAT(
-        'Sort: ', ROUND(@@sort_buffer_size / 1024 / 1024, 0), 'M, ',
-        'Read: ', ROUND(@@read_buffer_size / 1024 / 1024, 0), 'M, ',
-        'Join: ', ROUND(@@join_buffer_size / 1024 / 1024, 0), 'M'
-      ) as value;
-    column: "value"
+      - name: "MariaDB IO Capacity"
+        unique_id: mariadb_io_capacity
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'io_capacity') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'io_capacity') | int(0) }}

+      - name: "MariaDB IO Threads"
+        unique_id: mariadb_io_threads
+        state: >-
+          {% set read = state_attr('sensor.mariadb_admin_snapshot', 'io_threads_read') %}
+          {% set write = state_attr('sensor.mariadb_admin_snapshot', 'io_threads_write') %}
+          {% if read is not none and write is not none %}
+            Read: {{ read }}, Write: {{ write }}
+          {% else %}
+            unknown
+          {% endif %}

+      - name: "MariaDB Table Cache"
+        unique_id: mariadb_table_cache
+        unit_of_measurement: "tables"
+        state_class: measurement
+        availability: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'table_cache') is not none }}
+        state: >-
+          {{ state_attr('sensor.mariadb_admin_snapshot', 'table_cache') | int(0) }}

+      - name: "MariaDB Buffer Sizes"
+        unique_id: mariadb_buffer_sizes
+        state: >-
+          {% set sort = state_attr('sensor.mariadb_admin_snapshot', 'sort_buffer_mib') %}
+          {% set read = state_attr('sensor.mariadb_admin_snapshot', 'read_buffer_mib') %}
+          {% set join = state_attr('sensor.mariadb_admin_snapshot', 'join_buffer_mib') %}
+          {% if sort is not none and read is not none and join is not none %}
+            Sort: {{ sort }}M, Read: {{ read }}M, Join: {{ join }}M
+          {% else %}
+            unknown
+          {% endif %}

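Every numeric template sensor in the MariaDB package follows one guard pattern: stay unavailable until the snapshot attribute exists, then coerce it to a number. A Python rendering of that guard, with the attribute names taken from the snapshot sensors in this file:

```python
# Mirror of the availability + state pair:
#   availability: {{ state_attr(..., key) is not none }}
#   state:        {{ state_attr(..., key) | int(0) }}  (or float)
# Returning None here corresponds to HA marking the sensor unavailable.
def render_numeric(attrs: dict, key: str, cast=float):
    value = attrs.get(key)
    if value is None:
        return None
    return cast(value)
```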
@@ -10,13 +10,23 @@
 # Notes: Creates HA repair issues when proxmox nodes report updates.
 # Notes: Adds normalized runtime + disk health signals for dashboard/alerts.
 # Notes: Joanna dispatch is reserved for sustained runtime and disk-pressure degradations.
+# Notes: Normalized disk usage sensors expose state_class for long-term trend rollups.
 ######################################################################
 template:
   - sensor:
       - name: "Proxmox1 Disk Used Percentage"
         unique_id: proxmox1_disk_used_percentage
         unit_of_measurement: "%"
+        state_class: measurement
         icon: mdi:harddisk
+        availability: >-
+          {% set preferred = states('sensor.node_proxmox1_disk_used_percentage') %}
+          {% set used = states('sensor.node_proxmox1_disk') %}
+          {% set total = states('sensor.node_proxmox1_max_disk') %}
+          {{ preferred not in ['unknown', 'unavailable', 'none', ''] or
+             (used not in ['unknown', 'unavailable', 'none', ''] and
+              total not in ['unknown', 'unavailable', 'none', ''] and
+              (total | float(0)) > 0) }}
         state: >-
           {% set preferred = states('sensor.node_proxmox1_disk_used_percentage') %}
           {% if preferred not in ['unknown', 'unavailable', 'none', ''] %}
@@ -27,14 +37,23 @@ template:
           {% if total > 0 %}
             {{ ((used / total) * 100) | round(1) }}
           {% else %}
-            {{ none }}
+            0
           {% endif %}
         {% endif %}

       - name: "Proxmox02 Disk Used Percentage"
         unique_id: proxmox02_disk_used_percentage
         unit_of_measurement: "%"
+        state_class: measurement
         icon: mdi:harddisk
+        availability: >-
+          {% set preferred = states('sensor.node_proxmox02_disk_used_percentage') %}
+          {% set used = states('sensor.node_proxmox02_disk') %}
+          {% set total = states('sensor.node_proxmox02_max_disk') %}
+          {{ preferred not in ['unknown', 'unavailable', 'none', ''] or
+             (used not in ['unknown', 'unavailable', 'none', ''] and
+              total not in ['unknown', 'unavailable', 'none', ''] and
+              (total | float(0)) > 0) }}
         state: >-
           {% set preferred = states('sensor.node_proxmox02_disk_used_percentage') %}
           {% if preferred not in ['unknown', 'unavailable', 'none', ''] %}
@@ -45,7 +64,7 @@ template:
           {% if total > 0 %}
             {{ ((used / total) * 100) | round(1) }}
           {% else %}
-            {{ none }}
+            0
          {% endif %}
        {% endif %}

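Both Proxmox disk sensors share one fallback chain: prefer the integration's own percentage sensor, otherwise derive the percentage from used/total, and return 0 when the total is unusable. The same logic in Python, with the string states modeled as HA reports them:

```python
# Mirror of the Proxmox disk templates: preferred percentage first,
# then used/total fallback, then 0 (the commit's replacement for {{ none }}).
def disk_used_percent(preferred: str, used: str, total: str) -> float:
    bad = {"unknown", "unavailable", "none", ""}
    if preferred not in bad:
        return round(float(preferred), 1)
    if used not in bad and total not in bad and float(total) > 0:
        return round(float(used) / float(total) * 100, 1)
    return 0.0
```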
@@ -17,6 +17,7 @@ sensor:
       - earth_distance_mi
     value_template: '{{ value_json["speed_mph"] }}'
     unit_of_measurement: "mph"
+    state_class: measurement
     resource: 'https://api.spacexdata.com/v2/info/roadster'

   - platform: rest
@@ -7,6 +7,7 @@
 # Build historical stats for AI/alerting.
 # -------------------------------------------------------------------
 # Contact: @CCOSTAN
+# Notes: Numeric repo and home counters expose state_class for long-term trend rollups.
 ######################################################################

 ### Building out some Historical stats for AI. #####################
@@ -19,6 +20,7 @@ command_line:
     scan_interval: 20000
     value_template: "{{ value | int }}"
     unit_of_measurement: "count"
+    state_class: measurement
   - sensor:
       name: 'GitHub Open Issues'
       unique_id: github_open_issues
@@ -26,6 +28,7 @@ command_line:
     scan_interval: 20000
     value_template: '{{ value_json.open_issues }}'
     unit_of_measurement: 'count'
+    state_class: measurement

   - sensor:
       name: 'GitHub Stargazers'
@@ -34,6 +37,7 @@ command_line:
     scan_interval: 20000
     value_template: '{{ value_json.stargazers_count }}'
     unit_of_measurement: 'count'
+    state_class: measurement

 sensor:
   - platform: history_stats
@@ -79,6 +83,7 @@ template:
|
||||
- name: "Number of Sensors"
|
||||
unique_id: stats_number_of_sensors
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:counter
|
||||
state: >-
|
||||
{{ states.sensor | list | count }}
|
||||
@@ -86,6 +91,7 @@ template:
|
||||
- name: "Number of Automations"
|
||||
unique_id: stats_number_of_automations
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:robot
|
||||
state: >-
|
||||
{{ states.automation | list | count }}
|
||||
@@ -93,6 +99,7 @@ template:
|
||||
- name: "Number of Scripts"
|
||||
unique_id: stats_number_of_scripts
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:script-text
|
||||
state: >-
|
||||
{{ states.script | list | count }}
|
||||
@@ -100,6 +107,7 @@ template:
|
||||
- name: "Number of Binary Sensors"
|
||||
unique_id: stats_number_of_binary_sensors
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:binary-sensor
|
||||
state: >-
|
||||
{{ states.binary_sensor | list | count }}
|
||||
@@ -107,6 +115,7 @@ template:
|
||||
- name: "Number of Devices"
|
||||
unique_id: stats_number_of_devices
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:account-group
|
||||
state: >-
|
||||
{{ states.device_tracker | list | count }}
|
||||
@@ -114,6 +123,7 @@ template:
|
||||
- name: "Number of Lights"
|
||||
unique_id: stats_number_of_lights
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:lightbulb
|
||||
state: >
|
||||
{{ states.light | list | count }}
|
||||
@@ -121,12 +131,14 @@ template:
|
||||
- name: "Number of lights on"
|
||||
unique_id: stats_number_of_lights_on
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:binary-sensor
|
||||
state: >-
|
||||
{{ states.light | selectattr('state', 'eq', 'on') | list | count }}
|
||||
|
||||
- name: "Number of Smoke Detectors"
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:smoke-detector
|
||||
state: >
|
||||
{% if states('group.protects') == 'on' %}
|
||||
@@ -142,6 +154,7 @@ template:
|
||||
- name: "Number of online Cameras"
|
||||
unique_id: stats_number_of_online_cameras
|
||||
unit_of_measurement: "count"
|
||||
state_class: measurement
|
||||
icon: mdi:camera
|
||||
state: >
|
||||
{{ states.camera | list | count }}
|
||||
@@ -149,6 +162,7 @@ template:
|
||||
- name: "Total WiFi Clients"
|
||||
unique_id: total_wifi_clients
|
||||
unit_of_measurement: "clients"
|
||||
state_class: measurement
|
||||
icon: mdi:wifi
|
||||
state: >
|
||||
{% set g = states('sensor.unifi_ap_garage_clients') | int(0) %}
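
The `int(0)` filter in the Total WiFi Clients template is what keeps an offline AP from poisoning the sum: an `unavailable` state coerces to the default `0` instead of raising. A minimal Python analogue of that guard (the AP names and readings here are hypothetical, not from the config):

```python
def to_int(state, default=0):
    """Mimic Jinja's int(default) filter: fall back when a state isn't numeric."""
    try:
        return int(state)
    except (TypeError, ValueError):
        return default

# Hypothetical per-AP client counts; one AP is offline and reports 'unavailable'.
readings = {"garage": "12", "living_room": "unavailable", "office": "5"}
total = sum(to_int(v) for v in readings.values())
print(total)  # 17
```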
@@ -6,7 +6,7 @@
 # Recorder Configuration - database retention and exclusions
 # Stores HA history while purging noise and controlling DB size.
 # -------------------------------------------------------------------
-# Notes: Keeps 180 days (1/2 year); excludes vcloudinfo pings, noisy connectivity telemetry, countdown-style alarm helpers, and other high-churn entities; MariaDB via recorder_db_url.
+# Notes: Keeps 180 days (1/2 year); excludes vcloudinfo pings, noisy connectivity telemetry, countdown-style alarm helpers, MariaDB snapshot helpers, and other high-churn entities; MariaDB via recorder_db_url.
 ######################################################################
 db_url: !secret recorder_db_url
 purge_keep_days: 180
@@ -100,6 +100,9 @@ exclude:
     - sensor.last_alexa
     - sensor.lights_on_count
     - sensor.low_battery
+    - sensor.mariadb_admin_snapshot
+    - sensor.mariadb_live_snapshot
+    - sensor.mariadb_recorder_snapshot
     - sensor.network
     - sensor.network_detail
     - sensor.pi_hole_ads_blocked_today
@@ -57,7 +57,8 @@ Current automations that kick off automated resolutions (via `script.joanna_dispatch`):
 | `mqtt_open_repair_on_failure` | MQTT - Open Repair On Failure | [../packages/mqtt_status.yaml](../packages/mqtt_status.yaml) |
 | `onenote_indexer_daily_delete_maintenance` | OneNote Indexer - Daily Delete Maintenance Request | [../packages/onenote_indexer.yaml](../packages/onenote_indexer.yaml) |
 | `onenote_indexer_failure_open_repair` | OneNote Indexer - Open Repair On Failure | [../packages/onenote_indexer.yaml](../packages/onenote_indexer.yaml) |
-| `infra_backup_nightly_verification` | Infrastructure - Backup Nightly Verification | [../packages/infrastructure_observability.yaml](../packages/infrastructure_observability.yaml) |
+| `infra_backup_nightly_verification` | Infrastructure - Backup Nightly Verification | [../packages/infrastructure.yaml](../packages/infrastructure.yaml) |
+| `infra_monthly_log_hygiene_review` | Infrastructure - Monthly HA Log Hygiene Review | [../packages/infrastructure.yaml](../packages/infrastructure.yaml) |
 | `docker_state_sync_repairs_dynamic` | Docker State Sync - Repairs (Dynamic) | [../packages/docker_infrastructure.yaml](../packages/docker_infrastructure.yaml) |
 | `docker_group_reconcile_weekly_joanna_review` | Docker Group Reconcile - Weekly Joanna Review | [../packages/docker_infrastructure.yaml](../packages/docker_infrastructure.yaml) |
 | `tugtainer_dispatch_joanna_for_available_updates` | Tugtainer - Dispatch Joanna For Available Updates | [../packages/tugtainer_updates.yaml](../packages/tugtainer_updates.yaml) |
config/shell_scripts/mariadb_snapshot.py (new file, 156 lines)
@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""Collect MariaDB telemetry snapshots for Home Assistant command_line sensors."""

from __future__ import annotations

import json
import re
import sys
from decimal import Decimal
from pathlib import Path
from typing import Any

from sqlalchemy import create_engine, text

SECRETS_PATH = Path("/config/secrets.yaml")
RECORDER_DB_URL_KEY = "recorder_db_url"

QUERIES = {
    "live": """
        SELECT
            'running' AS status,
            ROUND(
                MAX(
                    CASE
                        WHEN VARIABLE_NAME = 'Queries' THEN CAST(VARIABLE_VALUE AS DECIMAL(20, 0))
                    END
                ) /
                NULLIF(
                    MAX(
                        CASE
                            WHEN VARIABLE_NAME = 'Uptime' THEN CAST(VARIABLE_VALUE AS DECIMAL(20, 0))
                        END
                    ),
                    0
                ),
                0
            ) AS performance,
            MAX(
                CASE
                    WHEN VARIABLE_NAME = 'Threads_connected' THEN CAST(VARIABLE_VALUE AS UNSIGNED)
                END
            ) AS connections,
            MAX(
                CASE
                    WHEN VARIABLE_NAME = 'Questions' THEN CAST(VARIABLE_VALUE AS UNSIGNED)
                END
            ) AS questions,
            MAX(
                CASE
                    WHEN VARIABLE_NAME = 'Uptime' THEN CAST(VARIABLE_VALUE AS UNSIGNED)
                END
            ) AS uptime_seconds
        FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME IN ('Queries', 'Questions', 'Threads_connected', 'Uptime');
    """,
    "recorder": """
        WITH state_stats AS (
            SELECT
                MIN(last_updated_ts) AS min_last_updated_ts,
                COUNT(*) AS total_records
            FROM states
        )
        SELECT
            ROUND(SUM(t.data_length + t.index_length) / 1024 / 1024, 2) AS database_size_mib,
            COUNT(*) AS database_tables_count,
            DATE_FORMAT(
                FROM_UNIXTIME(ss.min_last_updated_ts),
                '%Y-%m-%d'
            ) AS database_oldest_record,
            ss.total_records AS database_total_records,
            ROUND(
                ss.total_records /
                GREATEST(DATEDIFF(NOW(), FROM_UNIXTIME(ss.min_last_updated_ts)), 1),
                0
            ) AS database_records_per_day
        FROM information_schema.tables t
        CROSS JOIN state_stats ss
        WHERE t.table_schema = 'homeassistant';
    """,
    "admin": """
        SELECT
            @@version AS version,
            ROUND(@@innodb_buffer_pool_size / 1024 / 1024 / 1024, 1) AS buffer_pool_gib,
            @@max_connections AS max_connections,
            ROUND(@@innodb_log_file_size / 1024 / 1024, 0) AS log_file_size_mib,
            ROUND(@@tmp_table_size / 1024 / 1024, 0) AS tmp_table_size_mib,
            @@innodb_io_capacity AS io_capacity,
            @@innodb_read_io_threads AS io_threads_read,
            @@innodb_write_io_threads AS io_threads_write,
            @@table_open_cache AS table_cache,
            ROUND(@@sort_buffer_size / 1024 / 1024, 0) AS sort_buffer_mib,
            ROUND(@@read_buffer_size / 1024 / 1024, 0) AS read_buffer_mib,
            ROUND(@@join_buffer_size / 1024 / 1024, 0) AS join_buffer_mib;
    """,
}


def _load_db_url() -> str:
    """Read recorder_db_url from Home Assistant secrets.yaml."""
    secrets_text = SECRETS_PATH.read_text(encoding="utf-8")
    match = re.search(
        rf"^{re.escape(RECORDER_DB_URL_KEY)}:\s*[\"']?(.*?)[\"']?\s*$",
        secrets_text,
        re.MULTILINE,
    )
    if match is None:
        raise RuntimeError(f"Missing {RECORDER_DB_URL_KEY} in {SECRETS_PATH}")
    return match.group(1)


def _json_safe(value: Any) -> Any:
    """Convert SQLAlchemy result values into JSON-serializable values."""
    if isinstance(value, Decimal):
        return float(value)
    return value


def main() -> int:
    """Run the requested query mode and emit a compact JSON payload."""
    mode = sys.argv[1].strip().lower() if len(sys.argv) > 1 else ""

    if len(sys.argv) != 2 or mode not in QUERIES:
        print(
            json.dumps(
                {
                    "error": "usage",
                    "message": "expected one mode: admin, live, recorder",
                },
                separators=(",", ":"),
            ),
            file=sys.stderr,
        )
        return 2

    engine = create_engine(_load_db_url(), pool_pre_ping=True)

    try:
        with engine.connect() as connection:
            row = connection.execute(text(QUERIES[mode])).mappings().one()
    except Exception as err:  # pragma: no cover - runtime safety path
        print(
            json.dumps(
                {"error": "query_failed", "message": str(err)},
                separators=(",", ":"),
            ),
            file=sys.stderr,
        )
        return 1

    payload = {key: _json_safe(value) for key, value in row.items()}
    print(json.dumps(payload, separators=(",", ":")))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
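
One subtlety worth calling out in the script above: `json.dumps` raises `TypeError` on `Decimal`, and the `ROUND(...)` columns come back from MariaDB as `Decimal` via SQLAlchemy, which is why the `_json_safe` pass exists. A standalone sketch of that coercion (the result row here is made up to resemble the `live` query output, not real telemetry):

```python
import json
from decimal import Decimal


def json_safe(value):
    # Decimal is not JSON-serializable; coerce to float, as the script's _json_safe does.
    if isinstance(value, Decimal):
        return float(value)
    return value


# Hypothetical row resembling the 'live' query's columns.
row = {"status": "running", "performance": Decimal("57"), "connections": 12}
payload = {key: json_safe(value) for key, value in row.items()}
print(json.dumps(payload, separators=(",", ":")))
```

Without the coercion, the `print` line would fail on the `performance` column; with it, the payload serializes compactly, which is the shape the command_line sensors parse with `value_json`.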