Security for AI “employees” is often reduced to a question of access rights: who has access to what, and under which conditions. In practice, however, it is becoming clear that access control alone cannot cover the biggest risks of deploying AI. Models and agents today do not function merely as passive tools, but as active participants in processes — they combine data from multiple sources, make decisions, and in many cases directly execute actions. Security must therefore be designed not only around access, but primarily around intent and the auditability of every step.
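To make the distinction concrete, the idea of gating on intent rather than access alone can be sketched as follows. This is a minimal, hypothetical illustration: the `POLICY` table, the `gated_action` helper, and the in-memory `AUDIT_LOG` are all invented names, standing in for a real policy engine and an append-only audit store. The point is that the same agent with the same access is allowed or denied based on its declared intent, and that every attempt, permitted or not, leaves an audit record.

```python
import time
from dataclasses import dataclass

# Hypothetical policy: which declared intents justify which actions.
# A production system would load this from a policy engine, not a dict.
POLICY = {
    "read_customer_record": {"support_ticket", "billing_dispute"},
    "send_email": {"support_ticket"},
}

AUDIT_LOG = []  # in-memory stand-in for an append-only audit store


@dataclass
class AuditRecord:
    timestamp: float
    agent_id: str
    action: str
    intent: str
    allowed: bool


def gated_action(agent_id: str, action: str, intent: str) -> bool:
    """Allow an action only if the declared intent matches policy,
    and record every attempt, whether allowed or denied."""
    allowed = intent in POLICY.get(action, set())
    AUDIT_LOG.append(AuditRecord(time.time(), agent_id, action, intent, allowed))
    return allowed


# The same agent, same action, same access rights — the declared
# intent alone decides the outcome, and both attempts are logged.
gated_action("agent-7", "send_email", "support_ticket")   # allowed
gated_action("agent-7", "send_email", "marketing_blast")  # denied, but audited
```

A pure access-control check would have returned the same answer for both calls; the intent gate separates them, and the audit trail preserves the denied attempt for later review.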