Researchers have developed a new approach to AI security that uses text prompts to better protect AI systems from cyber threats. The method centers on generating adversarial examples, inputs altered in ways that are typically imperceptible to humans, and using them to keep AI models from being misled by such manipulated inputs.
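The article does not spell out the technique, but the general idea behind an adversarial example can be illustrated with a short sketch. The snippet below is a generic, hypothetical illustration using the fast gradient sign method (FGSM) in PyTorch; the toy classifier, input tensor, and epsilon value are placeholders, not details of the research described above.

```python
# Minimal sketch of adversarial-example generation via FGSM.
# Generic illustration only; the model, data, and epsilon are hypothetical
# stand-ins, not the prompt-based method discussed in the article.
import torch
import torch.nn as nn

# Toy classifier: maps a 32-dimensional input to 2 classes (placeholder).
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 32, requires_grad=True)   # clean input (placeholder data)
y = torch.tensor([1])                        # its true label

# Compute the loss gradient with respect to the input, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge every input feature a small step (epsilon) in the direction
# that increases the loss. The change is small enough to be hard for a
# human to notice, yet it can be enough to alter the model's prediction.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

In adversarial training, examples produced in roughly this way are added back into the training data so the model learns to handle them correctly, which is the defensive use of adversarial examples the article alludes to.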