What if you could influence an LLM's output not by breaking its rules, but by bending its probabilities? In this deep dive, we explore how small changes in user input (down to a single token) can shift the balance between “true” and “false”, triggering radically different completions.
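
As a taste of what's ahead, here is a minimal sketch of the core idea. It uses GPT-2 via Hugging Face `transformers` purely for illustration; the model choice, the prompts, and the helper `true_false_probs` are assumptions for this example, not the exact setup discussed later. It reads the model's next-token probabilities for “ true” and “ false” and shows how swapping a single word in the prompt moves the balance between them:

```python
# Minimal sketch: inspect how one changed token in the prompt shifts
# the probability mass between the next tokens " true" and " false".
# Model, prompts, and helper name are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM works; GPT-2 is just small and public
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def true_false_probs(prompt: str) -> tuple[float, float]:
    """Return next-token probabilities for ' true' and ' false'."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits at the last position
    probs = torch.softmax(logits, dim=-1)
    # For GPT-2's tokenizer, " true" and " false" are each a single token;
    # other tokenizers may split them, so check before reusing this.
    true_id = tokenizer.encode(" true")[0]
    false_id = tokenizer.encode(" false")[0]
    return probs[true_id].item(), probs[false_id].item()

# Two prompts that differ by exactly one token.
for prompt in ("The statement is clearly", "The statement is arguably"):
    p_true, p_false = true_false_probs(prompt)
    print(f"{prompt!r}: P(' true')={p_true:.4f}  P(' false')={p_false:.4f}")
```

Nothing here jailbreaks the model; the single-word swap simply redistributes probability mass over the next token, which is exactly the lever this post examines.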