Easily jailbroken models show safeguards are failing.

AI models remain easy targets for manipulation and attacks, especially if you ask them nicely. A new report from the UK's AI Safety Institute found that four of the largest publicly available large language models (LLMs) were extremely