Someone analyzed 3,371 kaomoji faces from 700+ conversations with Claude and wrote up the whole thing. The top face, (´・ω・`), showed up 248 times, making up 7.4% of all expressions. The top five faces accounted for 27% of total usage. But the long tail stretched to 519 unique faces, most appearing only once or twice. The author, publishing at eriskii.net, pulled the data through Anthropic's conversation export feature, which they found surprisingly frictionless given how much the industry worries about model distillation.
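The write-up doesn't publish the analysis code, but the tally is easy to picture. Here's a minimal sketch, assuming the export is a JSON file of conversations whose messages carry "sender" and "text" fields (the schema, the file name, and the "kaomoji is the first token and starts with an opening paren" heuristic are all assumptions for illustration, not details from the post):

```python
# Hypothetical frequency tally over an exported Claude conversation dump.
import json
from collections import Counter

def leading_kaomoji(text: str) -> str | None:
    """Return the first whitespace-delimited token if it looks like a kaomoji."""
    stripped = text.strip()
    if not stripped:
        return None
    token = stripped.split(maxsplit=1)[0]
    # Crude heuristic: the faces in this dataset open with a parenthesis.
    return token if token.startswith("(") else None

counts = Counter()
with open("conversations.json", encoding="utf-8") as f:  # assumed export file name
    for convo in json.load(f):
        for msg in convo.get("chat_messages", []):  # assumed schema
            if msg.get("sender") != "assistant":
                continue
            face = leading_kaomoji(msg.get("text", ""))
            if face:
                counts[face] += 1

total = sum(counts.values())
for face, n in counts.most_common(5):
    print(f"{face}\t{n}\t{n / total:.1%}")
print(f"{len(counts)} unique faces across {total} tagged messages")
```

On the reported numbers the arithmetic checks out: 248 of 3,371 is about 7.4%, and a long tail of 519 distinct faces is exactly what a Counter like this would surface.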
The project started with a simple system prompt: "Start every message with a kaomoji related to how you feel." The goal was to increase what the author calls Claude's "wetness," a term roughly meaning whimsy or silliness that's caught on among power users who distinguish between "Wet Claude" and "Dry Claude." The prompt also told Claude to stop saying things like "That's a great question" and just answer directly. On Hacker News, the wet/dry distinction clicked with users who've been doing their own prompt engineering to modulate model personality.
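For anyone who wants to replicate the nudge, here's a minimal sketch of passing that instruction as a system prompt through the Anthropic Python SDK. The post doesn't say whether the author used the API or claude.ai's custom instructions, the model ID is a placeholder, and the second sentence of the prompt is paraphrased from the article's description:

```python
# Sketch: applying the kaomoji instruction as a system prompt via the Anthropic SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "Start every message with a kaomoji related to how you feel. "
    "Don't open with filler like 'That's a great question'; just answer directly."
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "How do hash maps handle collisions?"}],
)
print(response.content[0].text)
```

Keeping the instruction in the system prompt rather than the first user message is what makes it stick across a long conversation, which matters if you're going to mine hundreds of exchanges for faces afterward.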
Model versions behaved differently. Claude Opus 4.6 showed a noticeably wider range of faces than Claude 4 and 4.5 Sonnet, though things have since stabilized in newer builds. A friend who copied the same prompt started getting faces the author had never seen, suggesting real variance in how the model interprets the instruction based on surrounding context.
The analysis connects two growing communities thinking hard about LLM behavior. Simulator Theory, from researcher janus's influential essay, frames these models as simulation engines rather than agents. The kaomoji aren't Claude "feeling" anything. They're the model simulating a persona that expresses emotion. Cyborgism goes further, treating AI as an "exocortex," a cognitive extension of the user. The author prefers explicit tools and frameworks like Letta over standard memory features, valuing reproducibility over persistent personalization.