🤖 AI Summary
llm-models is a lightweight CLI that lets you enumerate available LLMs across major providers (OpenAI, Google AI Studio, Vertex AI, Anthropic, xAI). It is installable via pipx, brew, or pip and invoked like llm-models --provider OpenAI (Vertex AI additionally requires a region flag, -r). The tool prints provider-specific model identifiers; examples shown include OpenAI (gpt-3.5-turbo, chatgpt-4o-latest, dall-e-3), Google (models/gemini-2.5-flash, embedding-gecko-001), Vertex AI publisher paths, Anthropic Claude variants (Haiku/Sonnet/Opus), and xAI grok variants (noting xAI aliases such as grok-4 mapping to grok-4-0709). It requires provider credentials in environment variables (OPENAI_API_KEY, GOOGLE_API_KEY, GOOGLE_CLOUD_PROJECT for Vertex AI, ANTHROPIC_API_KEY, XAI_API_KEY). Python 3.7+; tested on Ubuntu 24.04.
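The per-provider credential requirements above can be sketched as a simple preflight check before invoking the CLI. This is an illustrative helper, not part of llm-models itself; the provider-to-variable mapping follows the summary, and the function name is hypothetical:

```python
import os

# Environment variables each provider expects, per the summary above.
REQUIRED_ENV = {
    "OpenAI": ["OPENAI_API_KEY"],
    "Google AI Studio": ["GOOGLE_API_KEY"],
    "Vertex AI": ["GOOGLE_CLOUD_PROJECT"],
    "Anthropic": ["ANTHROPIC_API_KEY"],
    "xAI": ["XAI_API_KEY"],
}

def missing_credentials(provider, env=os.environ):
    """Return the environment variables a provider needs that are not set."""
    return [v for v in REQUIRED_ENV.get(provider, []) if v not in env]
```

Running such a check in a wrapper script gives a clear error up front instead of a provider-side authentication failure.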
For the AI/ML community this is a handy discovery and automation tool: it simplifies inventorying provider model catalogs, validating availability across regions (Vertex AI's region-specific endpoints and IAM authentication), and resolving naming and alias quirks (xAI). That makes it useful for CI checks, model-selection scripts, and quick comparisons before benchmarking or cost estimation. Its main limitation is that it lists identifiers only, returning no model metadata (capabilities, specs, pricing), so you still need the provider APIs for full capability and performance profiling.
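The alias quirk noted for xAI (grok-4 resolving to grok-4-0709) is the kind of thing a model-selection script has to normalize before comparing lists across runs. A minimal sketch, assuming a hand-maintained alias table (the table contents here come from the summary; the function name is hypothetical):

```python
# Known alias -> canonical identifier, e.g. the xAI case mentioned above.
XAI_ALIASES = {"grok-4": "grok-4-0709"}

def canonical_model_id(name):
    """Map an alias to its dated canonical identifier; pass through otherwise."""
    return XAI_ALIASES.get(name, name)
```

Normalizing identifiers this way keeps CI diffs stable when a provider rotates which dated snapshot an alias points to.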