THE CRITICAL EDITION
Normalization as Scholarship: The Softmax Critical Edition
Just as V.S. Sukthankar initiated one of the most ambitious projects of scholarship in 1919, assembling a team comprising dozens of scholars to prepare a critical edition of the Mahabharata, the Softmax function performs a similar curatorial role within neural architectures. The Sanskritist’s method involved collecting manuscripts written in a variety of scripts from different parts of the country, comparing verses from each manuscript meticulously, and ultimately selecting those verses that appeared common to most sources to establish a canonical text. Similarly, Softmax takes the disparate, unbounded logit values produced by preceding network layers—each representing a competing interpretation or class—and normalizes them into a coherent probability distribution where the “most common” or highest activation values receive proportionally greater weight.
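The normalization described here can be sketched in a few lines of NumPy. This is a minimal illustration of the standard Softmax formula, not the implementation of any particular framework; the example logits are invented for demonstration:

```python
import numpy as np

def softmax(logits):
    # Subtract the maximum for numerical stability; this shift
    # does not change the resulting distribution.
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / exps.sum()

# Three competing "readings," each with an unbounded raw score.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
```

The outputs are non-negative, sum to one, and preserve the ordering of the raw logits, which is exactly the curatorial role the analogy describes.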
In its present form the Mahabharata runs to over 100,000 verses, depicting a wide range of social categories and situations, and it was composed over a period of about 1,000 years (c. 500 BCE onwards). In parallel, modern deep learning models process inputs through stacked layers that could be seen as centuries of compositional evolution. The Softmax layer acts as the final arbiter, ensuring that the output vector sums to unity, much as the critical edition aimed to create a unified narrative from divergent textual traditions. This normalization prevents the “unequal distribution” of raw activation values from distorting the final interpretation, instead presenting a balanced allocation of probability mass across competing categories.
The process of preparing a critical edition required Sukthankar’s team to work out a method of comparing verses from each manuscript, discerning which variations were scribal errors and which were genuine alternative readings. This mirrors the smoothing effect of Softmax, which suppresses minor fluctuations in the input vector relative to consistent, strong signals. However, just as the critical edition privileges the majority reading when manuscripts disagree, Softmax inherently favors the most strongly activated class, potentially eliminating minority perspectives that might hold genuine significance, much as the voices of forest dwellers and craft specialists were marginalized in the dominant textual traditions of the era.
The Epic Architecture: Social Stratification in Probability Space
Between c. 600 BCE and 600 CE, the extension of agriculture into forested areas transformed the lives of forest dwellers, while craft specialists emerged as distinct social groups, and the unequal distribution of wealth sharpened social differences. These historical processes mirror the internal dynamics of the Softmax function, where input values (logits) represent different “social categories” competing for limited resources (probability mass). Just as the Mahabharata was composed over 1,000 years with stories circulating even earlier, neural networks undergo training epochs that stratify features into hierarchical representations. The central story about two sets of warring cousins finds its mathematical analogy in the competition between logits, where small advantages in raw activation translate into disproportionate shares of the final distribution.
The Temperature Parameter as Historical Lens
Just as historians must consider the language used and the ways in which the text circulated when analyzing the Mahabharata, practitioners must examine the temperature parameter in Softmax. A low temperature (T→0) creates sharp distinctions akin to the rigid caste structures documented in normative texts, while higher temperatures (T>1) produce more egalitarian distributions, similar to the fluid social mobility observed in early agrarian societies before the unequal distribution of wealth solidified class boundaries.
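The two regimes described above can be demonstrated directly. In the sketch below, the helper name `softmax_t` and the example logits are illustrative choices, but the formula itself is the standard temperature-scaled Softmax, which divides every logit by T before exponentiating:

```python
import numpy as np

def softmax_t(logits, T=1.0):
    # Divide logits by the temperature T before the usual Softmax.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [3.0, 1.0, 0.2]
sharp = softmax_t(logits, T=0.1)   # low T: nearly all mass on the top logit
flat = softmax_t(logits, T=10.0)   # high T: close to a uniform distribution
```

As T falls toward zero the output approaches a one-hot vector (the "rigid" regime); as T grows the distribution flattens toward uniform (the "egalitarian" regime), while the ordering of the classes never changes.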
Historians use textual traditions to understand these processes, recognizing that each text was written from the perspective of specific social categories. Similarly, Softmax outputs reflect the perspective of the training data’s dominant classes. When the input contains high variance—akin to the sharp social differences documented in the epics—the function responds by allocating nearly all probability to the highest value, effectively creating a “winner-take-all” scenario reminiscent of how unequal wealth distribution created distinct social groups. This behavior emerges because the exponential operation amplifies disparities, much as historical records amplify the voices of literate elites while marginalizing forest dwellers and craft specialists.
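The winner-take-all effect follows directly from the exponential: a fixed linear lead in logit space becomes a fixed multiplicative ratio in probability space. A small numerical check, with invented logits, makes this concrete:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# A lead of just 2.0 in raw activation over the runners-up...
close = softmax(np.array([5.0, 3.0, 3.0]))

# ...yields an e^2 (about 7.4x) advantage in probability mass,
# since softmax ratios depend only on logit differences.
ratio = close[0] / close[1]
```

This is the "geometric disparity" the text describes: linear gaps between logits are converted into exponential gaps between probabilities.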
The transformation of forest dwellers’ lives through agricultural extension finds a computational parallel in how gradient descent reshapes the decision boundaries of classification models. Just as craft specialists emerged as distinct social groups during the period between c. 600 BCE and 600 CE, certain neurons specialize to detect specific features, yet Softmax’s allocation mechanism often forces these specialists to compete for recognition in a zero-sum probability space. The sharp social differences documented in historical records become mathematically inevitable when the exponential function converts linear advantages into geometric disparities, rendering the “middle classes” of activation values virtually invisible in the final output distribution despite their substantial contribution to the network’s representational capacity. The norms of behaviour encoded in the Mahabharata’s verses, occasionally followed by principal characters but often deviated from, find their parallel in the regularization techniques applied to logits. However, the fundamental inequality persists: much as the extension of agriculture transformed forest dwellers’ lives without eliminating their marginalization, temperature scaling modifies the distribution without solving the core issue of representational dominance.
ARCHAEOLOGICAL LIMITS
The Undeciphered Seals: Epistemic Boundaries of Normalization
The Harappan seal, made of steatite and containing animal motifs and signs from a script that remains undeciphered, represents the profound limitations of interpretive frameworks. Archaeologists know a great deal about the lives of the people who lived in the region from what they left behind—their houses, pots, ornaments, tools, and seals—but some aspects of the civilisation remain unknown and may even remain so. This archaeological uncertainty parallels the fundamental limitations of Softmax in modern machine learning. While the function provides a convenient probability distribution over outputs, it assumes a closed world where all possible classes are known and represented, much like attempting to interpret Harappan culture solely through the lens of later textual traditions.
The undeciphered script on the seals warns us against overconfidence in our interpretive tools. Softmax suffers from similar opacity: it cannot represent genuine uncertainty or “unknown unknowns.” When faced with out-of-distribution inputs—analogous to discovering an artefact from a completely foreign culture—the function still forces a probability distribution over known classes, producing confident misclassifications rather than admitting ignorance. The critical edition approach of selecting common verses works only when manuscripts exist; similarly, Softmax fails when the true class lies outside the training distribution, a limitation as significant as the gaps in our understanding of the Indus Valley script.
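This closed-world limitation is easy to exhibit: no matter what logits arrive, Softmax returns a valid distribution over the known classes and offers no "unknown" outcome. The logit values below are hypothetical activations standing in for an out-of-distribution input:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical activations for an input unlike anything seen in training.
# The function cannot abstain; it must distribute all mass over known classes.
ood_logits = np.array([8.3, 1.2, -0.5])
p = softmax(ood_logits)
```

The result is a confident prediction (here the top class receives nearly all the probability mass) even though the input may belong to no known category at all, which is precisely the "confident misclassification" failure mode described above.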
The Harappan seals provide archaeological evidence whose interpretation shifts as new materials are excavated. Similarly, Softmax outputs require constant recalibration; the function assumes that the training distribution represents the complete universe of possible inputs, an assumption as problematic as attempting to reconstruct the Mahabharata’s original composition date of c. 500 BCE from fragmentary archaeological remains alone. When models encounter inputs that deviate from their training distribution, modern analogues to the undeciphered Harappan script, they cannot generate novel categories or admit their ignorance; instead they forcefully interpret foreign artefacts through domestic frameworks, producing errors that are both confident and catastrophic. Furthermore, the function’s tendency toward overconfident predictions mirrors the bias of textual traditions written from specific social categories: just as inscriptions and epics reflect the perspectives of elites rather than common forest dwellers, Softmax outputs privilege the majority class distributions captured in training data, flattening the rich uncertainty of neural activations into a deceptive certainty.
Published by Adiyogi Arts. Explore more at adiyogiarts.com/blog.
Written by
Aditya Gupta