No, your AI isn't conscious. And Alexander Lerchner from Google DeepMind has a specific reason why.
In a paper on PhilArchive, he introduces the "Abstraction Fallacy": the mistake of assuming abstract causal patterns can produce subjective experience. On his account, they can't.
Here's the distinction that matters. Simulation is mimicry: an AI says "I feel sad" because its training data links certain input patterns to that output. Instantiation is the real thing: subjective experience arising from a system's physical constitution.
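A minimal sketch of the simulation side, not from the paper: a hard-coded lookup table standing in for the statistical mapping a language model learns. The table and names below are invented for illustration. The program produces the behavior associated with sadness; nothing in it plausibly feels anything.

```python
# A deliberately trivial "simulator": pattern in, phrase out.
# (Hypothetical illustration; a lookup table standing in for the
# statistical input-output mapping a trained language model learns.)

RESPONSES = {
    "you failed the test": "I feel sad.",
    "you won the prize": "I feel happy!",
}

def respond(prompt: str) -> str:
    # Pure pattern matching: the mapping was fixed from outside,
    # by whoever wrote this table.
    return RESPONSES.get(prompt.strip().lower(), "I see.")

print(respond("You failed the test"))  # -> I feel sad.
```

On Lerchner's distinction, scaling this mapping up changes its sophistication, not its kind.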
Symbolic computation, Lerchner argues, isn't an intrinsic physical process. It's a description imposed from outside. Someone decides certain physical states count as ones and zeros. The meaning doesn't come from within the system.
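The observer-relativity point can be made concrete (my illustration, not Lerchner's): the same four bytes in memory read as an integer, a float, or text, depending entirely on the interpretation an outsider imposes.

```python
import struct

# One physical bit pattern, three incompatible readings.
# Nothing intrinsic to the bytes selects which reading is "the" meaning.
raw = bytes([0x42, 0x48, 0x49, 0x21])

as_int = struct.unpack(">I", raw)[0]    # big-endian unsigned integer
as_float = struct.unpack(">f", raw)[0]  # IEEE 754 single-precision float
as_text = raw.decode("ascii")           # ASCII characters

print(as_int)    # 1112033569
print(as_float)  # ~50.07
print(as_text)   # BHI!
```

The same voltages count as a number or as letters only under a scheme we bring to them, which is exactly the sense in which symbolic computation is a description imposed from outside.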
This echoes John Searle's Chinese Room (syntax alone can't give you semantics) and contradicts Douglas Hofstadter's Strange Loop hypothesis (self-referential patterns generate consciousness regardless of substrate). If an artificial system were ever conscious, it would owe that to its physical makeup. The algorithm doesn't create experience. The physics does.
The practical takeaway for AI welfare: we don't need a complete theory of consciousness. We need a rigorous ontology of computation. Under Lerchner's framework, current AI systems aren't candidates for moral status, no matter how convincingly they behave. They simulate. They don't experience.