Five nines of wrong: Detecting drift and errors in AI systems

Uptime used to mean reliability. But in the LLM era, five nines just means your liar is always available. Real reliability now includes correctness, and that means probing models in real time with prompts that have known answers. When those answers slip, your delivery fabric has to reroute traffic before customers find out.

In this episode, F5's Lori MacVittie, Joel Moses, and returning guest Garland Moore dig into why availability isn't enough anymore, and how research like "Get my drift? Catching LLM Task Drift with Activation Deltas" shows where semantic health checks fit in the new definition of reliability. How do you keep AI outputs accurate when external data sources introduce bias, errors, or malicious prompts? Listen now to find out.

Read the paper, "Get my drift? Catching LLM Task Drift with Activation Deltas": https://arxiv.org/abs/2406.00799

Creators and Guests

Joel Moses
Host
Joel Moses
Distinguished Engineer and VP of Strategic Engineering at F5, Joel has over 30 years of industry experience in the cybersecurity and networking fields. He holds several US patents related to encryption techniques.
Lori MacVittie
Host
Lori MacVittie
Distinguished Engineer and Chief Evangelist at F5, Lori has more than 25 years of industry experience spanning application development, IT architecture, and network and systems operations. She co-authored the CADD profile for ANSI NCITS 320-1998 and is a prolific author with books spanning security, cloud, and enterprise architecture.
Garland Moore
Guest
Garland Moore
Solutions Architect at F5.
Tabitha R.R. Powell
Producer
Tabitha R.R. Powell
Technical Thought Leadership Evangelist producing content that makes complex ideas clear and engaging.