Foundation-SEC-8B Instruct Model (64k context) (huggingface.co)

🤖 AI Summary
Cisco Foundation AI released Llama-3.1-FoundationAI-SecurityLLM-1.1-8B-Instruct, an open-weight, 8-billion-parameter instruction-tuned LLM optimized for cybersecurity workflows. The model adds chat-style instruction following to a security-focused Llama 3.1 8B backbone and extends the context window to 65,536 tokens (64k), enabling local analysis of long incident reports, threat feeds, and playbooks. Targeted use cases include SOC acceleration (triage, summarization, case notes), proactive threat defense (attack simulation, vulnerability prioritization, MITRE ATT&CK mapping), and engineering enablement (config validation, compliance evidence extraction). The model is available for on-prem deployment as fdtn-ai/Foundation-Sec-1.1-8B-Instruct, released November 20, 2025 with a training-data cutoff of April 10, 2025.

Technically, Foundation-Sec-1.1-8B-Instruct was instruction-fine-tuned with RLHF using AdamW and benchmarked in zero-shot settings; it shows +3 to +13 point improvements over Llama-3.1-8B-Instruct on security-specific benchmarks. It narrows the gap with smaller frontier models (competitive with GPT-4o-mini on many tasks) and scores significantly higher on safety metrics (HarmBench: 94.7% vs. 72.4% for the baseline, rising to 98.5% when paired with LlamaGuard). Cisco emphasizes limitations and guardrails: no autonomous critical decisions, no malware or phishing generation, and, for production security deployments, human-in-the-loop review plus additional safeguards, retrieval augmentation, and up-to-date threat feeds are recommended.
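Since the model ships as open weights on Hugging Face, a typical on-prem workflow queries it through the standard `transformers` chat-template API. The sketch below is illustrative only: the system prompt and the `triage` helper are hypothetical, not from the model card, and the assistant's guardrails still assume human review of any output.

```python
"""Minimal sketch: SOC-triage prompting of Foundation-Sec-1.1-8B-Instruct.

Assumes the standard Hugging Face chat-template workflow; the prompt
wording and function names here are illustrative, not from the model card.
"""

MODEL_ID = "fdtn-ai/Foundation-Sec-1.1-8B-Instruct"


def build_triage_messages(alert_text: str) -> list[dict]:
    """Build chat messages for a SOC triage request (hypothetical prompt)."""
    return [
        {
            "role": "system",
            "content": (
                "You are a SOC analyst assistant. Summarize the alert, "
                "map it to MITRE ATT&CK techniques, and suggest next "
                "triage steps. Flag anything needing human review."
            ),
        },
        {"role": "user", "content": alert_text},
    ]


def triage(alert_text: str, max_new_tokens: int = 512) -> str:
    """Run one triage turn locally. Loads ~16 GB of weights (8B params,
    fp16), so this belongs on a GPU box or behind quantization."""
    # Imported lazily so the prompt-building helper stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # apply_chat_template formats the messages with the model's chat markup
    # and appends the assistant header so generation continues as a reply.
    inputs = tok.apply_chat_template(
        build_triage_messages(alert_text),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

With the 64k context window, long artifacts such as a full incident timeline or an exported playbook can be pasted directly into the user message rather than chunked; retrieval augmentation, as Cisco recommends, is still the better fit for live threat-feed data.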