Estimating Black-Box LLM Parameter Counts via Factual Capacity (arxiv.org)

🤖 AI Summary
A new research paper introduces "Incompressible Knowledge Probes" (IKPs), a benchmarking method for estimating the parameter counts of closed-source large language models (LLMs) from their factual knowledge. Existing estimation methods carry large uncertainties; the IKP framework instead fits a log-linear mapping between a model's accuracy on a set of 1,400 factual questions and its parameter count. Fit across 89 open-weight models, the mapping achieves an R^2 of 0.917, indicating that a model's factual knowledge correlates strongly with its number of parameters. This matters for the AI/ML community because it offers a more reliable way to gauge the capabilities of proprietary models, whose architectural details are typically undisclosed. The findings also challenge the notion that reasoning abilities have reached a plateau, since factual capacity continues to scale proportionately with parameter count, contrary to existing theories such as the "Densing Law." The method further sheds light on the effective knowledge capacity of different models, pointing to potential limitations in safety-tuned systems and to the ongoing evolution of LLMs across vendors.
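To make the log-linear mapping concrete, here is a minimal sketch of how such a fit and its inversion could work. The data points, the direction of the regression (probe accuracy against log10 of parameter count), and the `estimate_params` helper are all illustrative assumptions, not the paper's actual probe set or fitted coefficients.

```python
import numpy as np

# Synthetic stand-ins for (log10 parameter count, probe accuracy) pairs;
# the paper fits 89 real open-weight models on 1,400 factual questions.
log_params = np.array([8.5, 9.0, 9.5, 10.0, 10.5, 11.0])
accuracy = np.array([0.22, 0.31, 0.40, 0.52, 0.61, 0.70])

# Least-squares fit of the assumed log-linear form:
# accuracy ~= a * log10(N) + b
a, b = np.polyfit(log_params, accuracy, deg=1)

# Goodness of fit (the paper reports R^2 = 0.917 on its real data).
pred = a * log_params + b
ss_res = np.sum((accuracy - pred) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"accuracy = {a:.3f} * log10(N) + {b:.3f}, R^2 = {r2:.3f}")

def estimate_params(acc: float) -> float:
    """Hypothetical helper: invert the fitted mapping to estimate a
    black-box model's parameter count from its measured probe accuracy."""
    return 10.0 ** ((acc - b) / a)

print(f"accuracy 0.55 -> ~{estimate_params(0.55):.2e} parameters")
```

The inversion step is what turns the benchmark into a parameter-count estimator: once the mapping is calibrated on open-weight models, running the same probe on a closed model and solving for N yields the estimate, assuming the closed model lies on the same scaling trend.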