The concern is not that AI systems might spontaneously decide to hurt us. The concern is that we might build AI systems that are caught in adversarial dynamics with their users, with us, with society, or with other AI systems, or that magnify the ability of some humans to hurt other humans. The concern is that there are possible AI designs that would make such systems much more powerful than humans, and that we might deploy systems with those designs too quickly.