In standard transformers, most neurons respond to multiple unrelated concepts (polysemanticity), and single-concept neurons are rare exceptions rather than the norm.
Multiple studies find mixed selectivity when probing individual neurons. However, recent work on larger models suggests that some neurons become more monosemantic with scale, leaving open whether polysemanticity is a fundamental property of these networks or a scale-dependent phenomenon.
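The probing approach mentioned above can be sketched with a simple selectivity score: compare a neuron's strongest response across concept categories to its average response on the rest. The activation values, the score definition, and the 0.6 threshold below are all illustrative assumptions, not measurements from any real model.

```python
# Hypothetical sketch: score how selectively each neuron responds across
# concept categories. Each list holds a neuron's mean activation on inputs
# containing one of four concepts (all numbers are made up for illustration).
activations = {
    "neuron_0": [0.9, 0.1, 0.1, 0.0],  # fires mainly for one concept
    "neuron_1": [0.5, 0.6, 0.4, 0.5],  # fires for everything: polysemantic
    "neuron_2": [0.0, 0.8, 0.7, 0.1],  # two unrelated concepts: mixed selectivity
}

def selectivity(acts):
    """Selectivity index in [0, 1]:
    (max - mean of the others) / (max + mean of the others).
    Near 1 means one dominant concept; near 0 means mixed responses."""
    top = max(acts)
    rest = (sum(acts) - top) / (len(acts) - 1)
    return (top - rest) / (top + rest + 1e-9)

for name, acts in activations.items():
    score = selectivity(acts)
    label = "monosemantic" if score > 0.6 else "polysemantic"  # arbitrary cutoff
    print(f"{name}: selectivity={score:.2f} -> {label}")
```

Under this toy score, only the first neuron clears the threshold, mirroring the claim that single-concept neurons are the exception; real probing studies use richer selectivity measures over actual datasets, but the comparison has the same shape.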