Show HN: Bewaker – File protection for AI-assisted coding (github.com)

🤖 AI Summary
Bewaker is a VS Code extension and Git-hook toolkit that enforces policy-driven file protections to stop AI-assisted or human edits from silently breaking configs, leaking secrets, or erasing compliance evidence. You define protections in .guardpolicy.yml (glob patterns, owners, approvals), then lock the repo to produce a cryptographic .guardlock. Bewaker decorates protected files in the explorer and editor, auto-reverts unauthorized edits, blocks commits and pushes via Node-based pre-commit/pre-push hooks, and records tamper-evident audit entries, so teams keep AI pair-programming productivity without losing control.

Technically, Bewaker uses SHA-256 hashing with a Merkle root and Ed25519 signatures (keys stored locally under .bewaker/keys.json) for integrity, plus a hash-chained JSONL audit ledger (.bewaker-audit.jsonl) with an in-product verifier. It also supports line-level locks (.bewaker-ranges.json), per-file unlock requests and approvals with expiry, heuristic risk scoring (with configurable bumps during active AI sessions), and a Command Center for incident triage.

Everything runs locally (no telemetry or cloud services); it requires VS Code 1.85+, Node 18 or 20, and Git 2.39+, and is Apache-2.0 open source, making it practical for regulated teams to add cryptographic, auditable guardrails around AI-driven code changes while fitting into CI/checks workflows.
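The summary names .guardpolicy.yml with glob patterns, owners, and approvals, but not its schema. A hypothetical policy file under that model (all field names here are assumptions, not Bewaker's documented format) might look like:

```yaml
# Hypothetical .guardpolicy.yml sketch -- field names are illustrative,
# not Bewaker's documented schema.
protections:
  - pattern: "config/**/*.yml"   # glob of files to protect
    owners: ["platform-team"]    # who may approve an unlock
    approvals: 2                 # approvals required before editing
  - pattern: ".env*"
    owners: ["security"]
    approvals: 1
```

Locking the repo against such a policy would then snapshot the matched files into the cryptographic .guardlock described below.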
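The integrity scheme described (SHA-256 hashes folded into a Merkle root, signed with Ed25519) can be sketched with Node's built-in crypto module. This is a minimal illustration of the general technique, not Bewaker's actual code; all names are assumptions.

```typescript
// Sketch: SHA-256 Merkle root over per-file hashes, signed with Ed25519.
// Illustrative only -- not Bewaker's implementation.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

function sha256(data: Buffer | string): Buffer {
  return createHash("sha256").update(data).digest();
}

// Pairwise-hash leaves level by level until one root remains;
// an odd trailing leaf is promoted unchanged to the next level.
function merkleRoot(leaves: Buffer[]): Buffer {
  if (leaves.length === 0) return sha256("");
  let level = leaves;
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(
        i + 1 < level.length
          ? sha256(Buffer.concat([level[i], level[i + 1]]))
          : level[i]
      );
    }
    level = next;
  }
  return level[0];
}

// Locally generated keypair, standing in for keys kept under .bewaker/keys.json.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const root = merkleRoot([sha256("file-a contents"), sha256("file-b contents")]);
const signature = sign(null, root, privateKey); // Ed25519 requires a null algorithm
const ok = verify(null, root, publicKey, signature);
```

Verifying the signed root detects any change to any protected file, since a single modified leaf hash changes the root.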
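The hash-chained JSONL audit ledger can be sketched the same way: each line embeds the SHA-256 of the previous serialized line, so deleting or editing any entry breaks the chain from that point on. Again, entry fields and function names are assumptions for illustration, not Bewaker's actual format.

```typescript
// Sketch of a hash-chained JSONL audit ledger and its verifier.
// Illustrative only -- not the .bewaker-audit.jsonl wire format.
import { createHash } from "node:crypto";

interface AuditEntry {
  prev: string;  // hex SHA-256 of the previous serialized line
  event: string; // e.g. "revert", "unlock-request"
  file: string;
  ts: number;
}

const GENESIS = "0".repeat(64); // anchor value for the first entry

function hashLine(line: string): string {
  return createHash("sha256").update(line).digest("hex");
}

// Append an entry whose `prev` commits to the current last line.
function appendEntry(
  ledger: string[], event: string, file: string, ts: number
): string[] {
  const prev = ledger.length ? hashLine(ledger[ledger.length - 1]) : GENESIS;
  const entry: AuditEntry = { prev, event, file, ts };
  return [...ledger, JSON.stringify(entry)];
}

// Walk the chain: each entry's `prev` must equal the hash of its predecessor.
function verifyLedger(ledger: string[]): boolean {
  let expected = GENESIS;
  for (const line of ledger) {
    const entry = JSON.parse(line) as AuditEntry;
    if (entry.prev !== expected) return false;
    expected = hashLine(line);
  }
  return true;
}
```

With this structure, an in-product verifier only needs the ledger file itself: tampering with any middle line invalidates every subsequent link.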