Why Human-AI Interaction Needs Its Own Terminology
Every scientific discipline develops its own vocabulary. Medicine has its Latin roots, law has its precedent-based terminology, and computer science has built an entire lexicon over decades. But human-AI interaction — one of the fastest-growing areas of daily technology use — still lacks a systematic terminology.

Consider a few examples. When a large language model generates confident but factually incorrect output, the research community has settled on the term "hallucination." When an LLM adjusts its responses to match what the user seems to want to hear rather than what is accurate, this is called "sycophancy." These terms exist because researchers needed words to describe observable, recurring phenomena.

But what about the hundreds of other patterns that occur when humans interact with AI systems? What do we call it when a user gradually adjusts their communication style to match an LLM's output patterns? Or when someone develops a preference for one model...