Connect with us

Hi, what are you looking for?

Tech News

Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

Microsoft logo
Illustration: The Verge

Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.

“We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a…

Continue reading…

You May Also Like

Tech News

The Limitless Pendant comes in a bunch of colors and doesn’t really look like an AI gadget. | Image: Limitless The Limitless Pendant doesn’t...

Tech News

The Dbrand Ghost Case for the iPhone 15 Pro. | Image: Dbrand Dbrand is scrapping plans to fix its anti-yellowing Ghost Case, but not...

Tech News

Image: Nintendo If you’re the kind of Mario Kart 8 player who cares about winning and not just playing their favorite characters (Daisy and...

Editor's Pick

Clark Packard and Alfredo Carrillo Obregon Earlier this week, President Biden welcomed Japanese Prime Minister Kishida for an official visit and state dinner. As...