Cybersecurity Awareness Month is here, and there’s no better time to talk about how state and local governments can improve their cybersecurity practices. A key component of that improvement is ...
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement and external red ...
A research paper explains how Apple Intelligence is designed, and the steps the company takes to ensure the safety of the models. The paper also gives a glimpse into the scale and complexity of the on ...
AI red teaming has emerged as a critical security measure for AI-powered applications. It involves adopting adversarial methods to proactively identify flaws and vulnerabilities such as harmful or ...
Generative artificial intelligence (GenAI) has emerged as a significant change-maker, enabling teams to innovate faster, automate existing workflows, and rethink the way we work. Today, more ...
In the realm of IT security, the practice known as red teaming -- where a company's security personnel play the attacker to test system defenses -- has always been a challenging and resource-intensive ...
Agentic AI functions like an autonomous operator rather than a passive system, which is why it is important to stress-test it with AI-focused red-team frameworks. As more enterprises deploy agentic AI ...
Haize Labs is building an automated red-teaming solution for generative AI companies like Anthropic. The startup is raising an early-stage round and received multiple term sheets, sources say.