🤖 AI Summary
Frigate NVR now documents and supports a wide range of hardware-accelerated object detectors — expanding beyond a CPU fallback to first‑class integrations for Coral EdgeTPU (USB/m.2), Hailo‑8/8L (m.2/HAT), Intel OpenVINO (CPU/GPU/VPU), AMD ROCm via ONNX, NVIDIA/TensorRT (discrete GPUs and Jetson), Rockchip RKNN NPUs, plus ONNX-based options. This matters because it enables much higher throughput and lower latency on diverse edge and server platforms, making Frigate viable for multi-camera, low-latency CV deployments without forcing a single vendor stack.
Key technical points: Frigate exposes built‑in detector types (cpu, edgetpu, hailo8l, onnx, openvino, rknn, tensorrt) and defaults to a CPU detector unless reconfigured. Each detector runs in its own process and pulls from a shared detection request queue; however, different detector backends cannot be mixed for object detection (e.g., OpenVINO + Coral simultaneously). Some conveniences are handled automatically: ONNX/ROCm or TensorRT support is auto‑detected when using the corresponding -rocm or -tensorrt Frigate images, Hailo auto‑selects and downloads compiled HEF models and caches them, EdgeTPU uses /edgetpu_model.tflite by default, and OpenVINO ships an FP16 SSDLite MobileNet V2 model at /openvino-model. Platform notes: RF‑DETR is recommended on discrete Arc GPUs, D‑FINE currently only runs on OpenVINO CPU, and ROCm Docker requires /dev/kfd and /dev/dri access.
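To make the detector configuration concrete, here is a minimal sketch of a Frigate config enabling a Coral EdgeTPU detector (the detector name `coral` is arbitrary; the commented OpenVINO stanza shows the alternative backend, which cannot run alongside the EdgeTPU one):

```yaml
# Sketch of Frigate's detectors config, based on the summary above.
# Only one backend type may be used for object detection at a time.
detectors:
  coral:
    type: edgetpu
    device: usb          # "usb" for the USB Coral; m.2/PCIe variants use a PCI device
# Alternative (mutually exclusive with the EdgeTPU detector above):
# detectors:
#   ov:
#     type: openvino
#     device: GPU        # OpenVINO supports CPU/GPU/VPU targets
```

With no `detectors` section at all, Frigate falls back to the default CPU detector described above.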