Military robots are rarely fully autonomous because their ability to perceive and react in complex situations is limited [1]. Dependence on human control implies continuous communication, which creates a vulnerability if the link partially fails [1]. The desire for control stems from the need to assign responsibility for violations of international humanitarian law: designers cannot disclaim responsibility for the limits of their systems [1]. Disclosing those limits, however, poses a security risk that adversaries can exploit [1]. Responsibility for a robot's actions can be determined retrospectively from recorded data, but states may decline to record such data in order to obscure responsibility [1]. The best way to ensure safety is therefore to prevent errors in the first place, by transmitting the robot's intended actions and its perception certainty to human operators [1]. Operators should see a visualization of the scene from the robot's perspective, including detected targets and the system's level of certainty for each [1].
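The reporting scheme described above can be sketched as a simple message format. This is a minimal illustrative sketch, not any fielded system's protocol; the names (`Detection`, `IntentReport`, `requires_human_confirmation`) and the 0.9 certainty threshold are hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Detection:
    """One object the robot believes it perceives (hypothetical schema)."""
    label: str         # e.g. "vehicle", "person"
    confidence: float  # perception certainty in [0.0, 1.0]

@dataclass
class IntentReport:
    """Message sent to the operator before acting: the intended action
    plus every detection and the system's certainty about it."""
    intended_action: str
    detections: List[Detection] = field(default_factory=list)

def requires_human_confirmation(report: IntentReport,
                                threshold: float = 0.9) -> bool:
    """Flag the report for operator review when any detection falls below
    the certainty threshold, so errors are prevented before the robot acts
    rather than reconstructed from logs afterwards."""
    return any(d.confidence < threshold for d in report.detections)

report = IntentReport(
    intended_action="track",
    detections=[Detection("vehicle", 0.97), Detection("person", 0.62)],
)
print(requires_human_confirmation(report))  # low-certainty detection -> True
```

In a real operator display, each `Detection` would be rendered on the visualization of the scene with its confidence value, letting the human see what the robot intends to do and how sure it is before approving.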