Previous studies exploring category-sensitive representations of numbers and letters have predominantly focused on individual brain regions. This study extends that work through computationally rigorous whole-brain neural decoding with Elastic Net (ND-EN), enabling the analysis of neural patterns across the entire brain with greater precision. To establish the robustness and generalizability of our results, we also conducted probabilistic meta-analyses of the extant functional neuroimaging literature. The investigation comprised an active task, which required participants to distinguish between numbers and letters, and a passive task, in which they simply viewed these symbols. ND-EN revealed that, during the active task, a distributed network, including the ventral temporal-occipital cortex, intraparietal sulcus, middle frontal gyrus, and insula, differentiated between numbers and letters. This distinction was not evident in the passive task, indicating that the level of task engagement plays a crucial role in such neural differentiation. Furthermore, regional neural representational similarity analyses within the ventral temporal-occipital cortex revealed similar activation patterns for numbers and letters, indicating a lack of differentiation in regions previously linked to these visual symbols. Our findings therefore indicate that category-sensitive representations of numbers and letters are not confined to isolated regions but involve a broader network of brain areas, and that these representations are modulated by task demands. Probabilistic meta-analyses conducted with NeuroLang and the Neurosynth database reinforced these empirical observations. Together, the convergence of evidence from multivariate neural pattern analysis and meta-analysis advances our understanding of how numbers and letters are represented in the human brain.
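For readers unfamiliar with the two multivariate approaches named above, the following is a minimal, illustrative sketch (not the authors' pipeline) of Elastic Net decoding and a regional pattern-similarity comparison, assuming scikit-learn and NumPy and using synthetic placeholder data in place of real trial-wise fMRI patterns; all variable names and dimensions are hypothetical.

```python
# Minimal sketch of whole-brain decoding with an Elastic Net penalized
# classifier. Synthetic data stand in for a trials x voxels activation matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 5000                   # hypothetical dimensions
X = rng.standard_normal((n_trials, n_voxels))    # placeholder whole-brain patterns
y = rng.integers(0, 2, size=n_trials)            # placeholder labels: 0 = letter, 1 = number

# Elastic Net mixes L1 (sparsity) and L2 (grouping) penalties, which suits
# high-dimensional, spatially correlated voxel features.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)

# Cross-validated decoding accuracy; chance is 0.5 for two balanced categories.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean decoding accuracy: {scores.mean():.2f}")

# Regional pattern similarity (illustrative): correlate the mean number and
# letter patterns within a hypothetical ROI mask.
roi = rng.integers(0, 2, size=n_voxels).astype(bool)    # placeholder ROI mask
mean_number = X[y == 1][:, roi].mean(axis=0)
mean_letter = X[y == 0][:, roi].mean(axis=0)
similarity = np.corrcoef(mean_number, mean_letter)[0, 1]
print(f"ROI pattern similarity (Pearson r): {similarity:.2f}")
```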
Keywords: Distributed neural representation; Multivariate decoding; Neural representational similarity; Quantitative meta-analysis; Ventral temporal-occipital cortex.