🤖 AI Summary
A re-examination of H. J. Kelley’s 1960 paper "Gradient Theory of Optimal Flight Paths" spotlights a likely case of the Matilda effect: Mrs. Agnes Zevens, who checked and prepared the numerical results on an IBM 704, is credited only in the acknowledgments despite playing a substantive computational role in what many regard as a canonical precursor to modern backpropagation. The write-up ties this “sub-authorship” pattern to broader problems in historical credit allocation and citation networks, urging practical remediation (e.g., scraping and disambiguating acknowledgement sections to retroactively surface contributors). For AI historians and ethics-minded researchers, it’s a reminder that technical lineage and contributor metadata can be systematically distorted.
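The suggested remediation — mining acknowledgement sections to surface uncredited contributors — can be sketched with a simple heuristic. This is a minimal illustration, not a production pipeline: the regexes, the sample text, and the function name `extract_acknowledged_names` are all invented for the example, and a real system would add NER, author-list cross-referencing, and name disambiguation.

```python
import re

def extract_acknowledged_names(paper_text: str) -> list[str]:
    """Pull candidate contributor names from an acknowledgements section.

    Rough heuristic: locate the section (spelling varies across venues),
    then match honorific + capitalized-name patterns.
    """
    m = re.search(
        r"acknowledg(?:e?ments?)?\b(.*?)(?:\n\s*\n|$)",
        paper_text,
        flags=re.IGNORECASE | re.DOTALL,
    )
    if not m:
        return []
    section = m.group(1)
    # An honorific followed by one or more capitalized words.
    pattern = r"\b(?:Mr|Mrs|Ms|Dr|Prof)\.\s+((?:[A-Z][a-zA-Z.-]+\s?)+)"
    return [name.strip() for name in re.findall(pattern, section)]

sample = (
    "... main text ...\n\n"
    "Acknowledgment. The author wishes to thank Mrs. Agnes Zevens, "
    "who checked and prepared the numerical results on the IBM 704.\n\n"
    "References\n"
)
print(extract_acknowledged_names(sample))  # → ['Agnes Zevens']
```

Honorific-based matching is deliberately conservative: it misses plain names but avoids the false positives that sentence-initial capitalization would cause.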
Technically, Kelley’s work framed optimization as a gradient-driven process — conceptually akin to the backward propagation of sensitivities used in today’s auto-diff/backprop pipelines — and demonstrated the method on a concrete numerical application: minimum-time planar flight paths to Mars. The computations ran on an IBM 704, a vacuum-tube machine with hardware floating point (roughly 12,000 floating-point additions per second) that also hosted early FORTRAN and LISP, underscoring how early numerical advances depended on painstaking machine-era bookkeeping. The note also flags subtler archival hazards — idiosyncratic notation and citation markup in legacy papers — that can create corner cases for modern corpus-building and pretraining, reinforcing why accurate metadata and historical credit matter to both scholarship and ML datasets.
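The shared pattern the summary alludes to — propagating sensitivities backward through a chain of steps to get gradients with respect to controls — can be shown on a toy problem. This is not Kelley’s actual formulation: the dynamics, step size, cost, and learning rate below are all invented for illustration of the adjoint/backprop recursion.

```python
# Toy adjoint sketch: drive a discretized scalar system to a target
# by gradient descent on the control sequence. All constants invented.
N, dt, target = 20, 0.1, 1.0

def f(x, u):
    # One Euler step of dx/dt = u - x.
    return x + dt * (u - x)

def rollout(u):
    xs = [0.0]
    for k in range(N):
        xs.append(f(xs[-1], u[k]))
    return xs

def cost(u):
    return (rollout(u)[-1] - target) ** 2

def grad(u):
    # Backward sensitivity (adjoint) pass — the backprop-like step.
    xs = rollout(u)
    lam = 2.0 * (xs[-1] - target)  # dJ/dx_N
    g = [0.0] * N
    for k in reversed(range(N)):
        g[k] = lam * dt            # ∂f/∂u = dt
        lam = lam * (1.0 - dt)     # ∂f/∂x = 1 - dt
    return g

u = [0.0] * N
for _ in range(200):
    g = grad(u)
    u = [ui - 0.5 * gi for ui, gi in zip(u, g)]

print(round(cost(u), 6))  # cost is near zero after descent
```

One forward rollout plus one backward pass yields the full gradient in O(N), which is exactly the economy that made both Kelley-style gradient methods and modern backpropagation practical.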