Joel Moses

Distinguished Engineer and VP of Strategic Engineering at F5, Joel has over 30 years of industry experience in the cybersecurity and networking fields. He holds several US patents related to encryption techniques.

Appears in 30 Episodes

CISO Hot Takes on MCP, PQC, and Data Center Attacks

Recorded live at F5 AppWorld 2026 in Las Vegas, this episode of Pop Goes the Stack puts Field CISO Chuck Herrin in the hot seat for a fast-moving conversation on what ...

AI Red Teaming in Practice: Scores, guardrails, auto-remediation

AI in production isn’t just another feature to ship. It’s a non-deterministic system that can be socially engineered, fuzzed, and pushed into failure states you won’t ...

Agent Identity Crisis: Access, audit, and “soul.md”

Coming to you from the AppWorld show floor, Joel Moses and guest co-pilot Oscar Spencer cut through the conference polish to tackle a problem that’s quickly becoming u...

VibeOps: Guardrailed agents for deterministic production

Ops used to be a world of YAML, caffeine, and careful deploy rituals. Now it’s probabilistic models, token-based cost surprises, and reliability questions that sound m...

WebAssembly: A programmability paradigm shift

Programmability is experiencing a paradigm shift, and this episode explains why WebAssembly is at the center of it. F5's Lori MacVittie and Joel Moses are joined by We...

Unstructured Integration: The hidden surface area putting AI privacy & compliance at risk

"It's just a chat" is the most dangerous sentence in AI. In this episode of Pop Goes the Stack, F5's Lori MacVittie and Joel Moses are joined by data science expert Sc...

Logging for Giants: High-Speed Telemetry in an AI World

When OpenAI discovered they could reclaim 30,000 CPU cores simply by tuning the log-forwarding agent Fluent Bit—disabling a single function that ate ~35% of one serve...

Low-Code Automation Tools with Teeth: FlowFuse & N8N

Low-code automation has grown up, and the competition is getting spicy. In this episode of Pop Goes the Stack, F5's Lori MacVittie and Joel Moses are joined by Aubrey ...

The New New User Interface: AI in your brain

The capability to map brain activity to language isn’t just another UI shift—it’s a paradigm shift in how humans and machines might communicate. If you’re building sys...

The Impact of Inference: Reliability

Traditional reliability meant consistency. Given identical inputs, systems produced identical outputs. Costs were stable and behavior predictable. Inference reliabilit...

The Impact of Inference: Performance

Traditional performance meant deterministic response times. Identical inputs produced near-identical execution times. Optimizations reduced latency, but variance was m...

The Impact of Inference: Availability

What does "availability" mean in a world of AI inferencing and ever-shifting workloads? It’s no longer just about servers responding or apps being online—availability ...

Shift left into runtime: Vibe coding and AI guardrails

Coding pipelines are evolving and AI agents are taking the wheel. In this episode of Pop Goes the Stack, F5's Joel Moses teams up with Buu Lam to dive into “vibe codin...

Reshaping the web for AI agents and LLMs

The web we built—a tangle of HTML, JavaScript, CSS, APIs, and SEO quirks—has always been messy. But with AI agents and real-time apps now consuming the web as data, th...

Five nines of wrong: Detecting drift and errors in AI systems

Uptime used to mean reliability. But in the LLM era, five nines just means your liar is always available. Real reliability now includes correctness and that means prob...

Now you see me, now you don't: Ephemeral Auth and AI agents

Agents are popping up everywhere: tiny bots spinning up for a task, then dying off. They shouldn’t carry long-lived credentials any more than you carry a master key ev...

BOLA exploits: The #1 API threat and how to stop it

The 2025 API Threat Report is out, and shocker: we’re still getting wrecked by injection, data leaks, and BOLA. That’s Broken Object Level Authorization, for those of ...

LLM-as-a-Judge: Bias, Preference Leakage, and Reliability

Here's the newest bright idea in AI: don’t pay humans to evaluate model outputs, let another model do it. This is the “LLM-as-a-judge” craze. Models not just spitting ...

Crossing the streams

Prompt injection isn't some new exotic hack. It’s what happens when you throw your admin console and your users into the same text box and pray the intern doesn’t find...

When Context Eats Your Architecture

Anthropic lobbed a million-token grenade into the coding wars, and suddenly every AI startup with a “clever context management” pitch looks like it’s selling floppy di...