Securing AI Agents: Tackling the Non-Human Identity Crisis

Lori MacVittie: [00:00:00] Hey, welcome to Pop Goes the Stack, the podcast about emerging tech with zero chill and even less respect for your carefully built stack. Every episode will bring you one step closer to understanding the tech that's about to break your deployment, or your will to debug it. I'm Lori MacVittie, your host. Brace yourself, because this week we're going to dive into securing AI agents with Senior Solutions Architect Peter Scheffler and our not-AI co-host, Joel Moses.

Joel Moses: Oh, but how do you know, Lori?

Lori MacVittie: I don't. Stop it. Stop it.

Joel Moses: I didn't present my identity or anything.

Lori MacVittie: I know. Yeah. All right, let's start by setting the stage for why we wanna talk about this.

Okay. Last year there was a GitGuardian report, mostly about secrets, that discovered, I don't know, millions and millions. [00:01:00] 23.7 to be exact.

Joel Moses: Mm-hmm. 23.7 million.

Lori MacVittie: That's right. Uh, credentials, secrets, right, uh, keys, you know, everything was just spilled all over the internet. Just all over the place.

So they did a lot of research and dug in to discover, I don't know, people still have bad security practices. I know that surprises Peter. He does not believe this. But yeah, people still have bad security practices with their secrets. So secret management is a problem, and when it comes to things like AI, when you start putting Copilot in,

it starts going, well, that's a common practice. It's a pattern out there. I saw it. That must be how you do it. So they just start hard coding credentials in what they're doing and kind of multiply the problem that already exists. So one of the things we wanted to talk about is that agents are [00:02:00] starting to become a thing.

People are putting them into production. They have plans, formal plans, for how they're going to use them in the enterprise, and maybe how we should treat them. And, right, what about identity for them? Because they're not just scripts. Yeah. You know, scripts don't decide to just grab credentials and throw 'em in, right?

They follow code. Agents, specifically, do what they want, right? They're like honey badgers. They don't care. They do what they want. So how are we gonna deal with this, right? And how do we deal with the fact that we have all these bad practices and AI is likely to follow those? What do we do? Yeah. So give us answers. Give us answers.

Peter Scheffler: Well, I think one of the challenges is we need to make sure that we're teaching people to code properly. You can't rely on that, but you need to continue hammering the good practices home so that the models will learn from that. And that propagates better learning, or better capabilities, for the models.

But we can't [00:03:00] rely on that. I mean, I have had cases where I have written prototype code. It includes a JWT token, and I have no intention of putting it into a repo, but it's just, here, here's some sample code. You know, I might use Postman to go and have it take a curl call and say, you know, turn this into some Node.js for me quickly.

Right. So great. And it pulls my token and, you know, doesn't parameterize it or anything like that. It just throws the token in there and it's sitting on my machine, and that's fine. But then I copy and paste it and I send it to, you know, to Joel and say, hey Joel, run this. This is really cool. This works.

And Joel's like, ah, this is cool. And then he sends it to Lori, you should see what, you know, what Scheff put together. And all of a sudden my token is floating around. That's right. And there was no intention to have that happen. Um, but it just sort of propagates through, and then when we start to have agency, that becomes a problem.

And that token itself can quickly, yeah, [00:04:00] become a problem, and we wanna make sure that when we're doing that, the tools are enabled to make those decisions and say, hey, this looks like authentication, I'm not gonna put that in there. Well, Postman knows that that JWT token is authentication, 'cause it obfuscates it in the UI.

That's great until you copy and paste it into your editor. And, you know, if I'm using Notepad as my editor, it's not gonna know that that token is anything different than another string, right? Yeah. So.
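
To make Peter's example concrete, here's a minimal sketch of the pattern he's describing: the snippet a code generator typically emits, with the token inlined as a literal, next to the parameterized version he wishes it produced. The endpoint URL and variable names are hypothetical.

```typescript
// What generated sample code often looks like: the JWT inlined as a literal.
// Anyone you paste this to receives the credential along with the code.
//
//   const res = await fetch("https://api.example.com/v1/items", {
//     headers: { Authorization: "Bearer eyJhbGciOiJIUzI1NiJ9..." },
//   });

// The parameterized version: read the token from the environment instead,
// and fail loudly if it isn't there (Node 18+, run as an ES module).
const token = process.env.API_TOKEN;
if (!token) {
  throw new Error("API_TOKEN is not set; refusing to run without a credential");
}

const res = await fetch("https://api.example.com/v1/items", {
  headers: { Authorization: `Bearer ${token}` },
});
console.log(res.status);
```

The request is identical either way; the only difference is that the second version can be pasted to Joel, and then to Lori, without the token going along for the ride.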

Joel Moses: I think we're at the confluence of two things that are pretty bad related to agentic AI.

First is: these agents are, uh, what they call in government parlance, via FIPS 201-2 if I remember correctly, a non-person entity, right? Yeah. So these are non-person entities that are talking to each other. They are of course coded, and the way that they talk to each other usually involves some form of authorization or, uh, authentication.

And a lot of people are using AI [00:05:00] systems to quickly develop, uh, AI agents. Um, the difficulty is, if you roll back, they're using assistive AI code generation tools in order to lay down these base AI agents, and these generation tools are trained on repositories that have poor secrets management practices encoded in them.

A lot of sample code out there has example tokens, you know, mm-hmm, that are encoded in it. And so when it generates the template for the AI agent, a lot of times it'll choose, or generate, or pull from, uh, the secret black hat that these, uh, Copilot systems have.

They'll pull example tokens and put them in the code, meaning for you to replace them. Right. Um, but a lot of times people don't do that. Instead, they trade these things around. So you wanna make sure that if you're generating AI agent [00:06:00] code, that you are replacing, or uniquely generating, your own tokens or your own API keys to insert into those AI agents. Otherwise, you're essentially creating an NPE that might be duplicative of something that's already out there, and attackers kind of know this already. So, yeah, that's difficult.
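
Joel's advice, replace whatever placeholder tokens a generator lays down, can also be automated as a check before generated agent code ever lands in a repo. Here's a minimal sketch of a regex-based scan; a real secrets scanner (like the tooling behind the GitGuardian report mentioned earlier) uses far more detectors plus entropy analysis, and the file name and patterns are illustrative.

```typescript
import { readFileSync } from "node:fs";

// Rough patterns for things that look like credentials left in generated code.
const suspectPatterns: RegExp[] = [
  /eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}/, // JWT shape
  /sk_(live|test)_[A-Za-z0-9]{16,}/,                               // Stripe-style key
  /(api[_-]?key|secret|token)\s*[:=]\s*["'][^"']{12,}["']/i,       // inline assignment
];

function scanFile(path: string): string[] {
  const findings: string[] = [];
  readFileSync(path, "utf8").split("\n").forEach((line, i) => {
    for (const pattern of suspectPatterns) {
      if (pattern.test(line)) {
        findings.push(`${path}:${i + 1} looks like a hardcoded secret`);
        break;
      }
    }
  });
  return findings;
}

// Usage: refuse to proceed if the generated agent file trips any detector.
const findings = scanFile("generated-agent.ts");
if (findings.length > 0) {
  console.error(findings.join("\n"));
  process.exit(1); // fail the commit or the build, not the incident response
}
```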

Peter Scheffler: And as you define your, let's say, your Model Context Protocol, your MCP, right?

So, or your agent-to-agent definition, you need to make sure that you say, well, one of the problems I have with that is, if I have an agent that's calling an agent that has to call an agent, how do I define the source of trust, you know, or the root of trust, of that device?

And it's funny, but this is something we've had to do with our own technology. When a device pops up and a key gets generated and that device [00:07:00] now has a key, you have to have that root of trust. Yeah. Well, now you have these agents that are doing these things. Well, okay, so the agent that's making the call, I'd like it to be me.

But maybe it needs to have access to, you know, this for a subagent and this for a subagent. How do we make sure that all those things are happening? Now, that's great that we have a vault that we could put that data in, or, you know, some sort of store that's gonna have those credentials, but what are the credentials of that agent in order to get those, right? So you have that transitive problem; it just becomes very cyclical.
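
One common way out of the cyclical problem Peter describes is to make the platform, not the code, the root of trust: the runtime injects a short-lived workload identity at launch, and the agent exchanges it for a time-boxed lease on exactly one secret. The sketch below assumes that pattern; the vault URL, endpoint, and field names are all hypothetical.

```typescript
// Bootstrap sketch: no long-lived secret in the agent at all. The platform
// injects a short-lived identity token (the root of trust), and the agent
// trades it for a scoped, expiring lease on one specific credential.

interface VaultLease {
  secret: string;
  expiresAt: number; // epoch millis; the lease dies on its own schedule
}

async function fetchScopedSecret(secretPath: string): Promise<VaultLease> {
  // Injected by the platform at launch, not written by any developer.
  const workloadToken = process.env.WORKLOAD_IDENTITY_TOKEN;
  if (!workloadToken) throw new Error("no workload identity injected");

  // Exchange identity for a five-minute lease on one secret, nothing more.
  const res = await fetch("https://vault.internal.example/v1/lease", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${workloadToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ path: secretPath, ttlSeconds: 300 }),
  });
  if (!res.ok) throw new Error(`lease denied: ${res.status}`);
  return (await res.json()) as VaultLease;
}

// A subagent asks for only what its one task needs.
const lease = await fetchScopedSecret("payments/read-only");
```

The transitive question doesn't vanish, but it collapses to one trust decision: does the platform correctly attest which workload it launched.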

Lori MacVittie: It's also, it could change, right? An agent could change roles within, right, the context of executing a task. So you have that problem. And there are two different ways to build agents today, right?

One is kind of the one I see end users doing, the, ooh, look, I can build an agent, which is basically just an LLM with some [00:08:00] guardrails. Which is probably the worst way to build an agent and put it into your environment in an enterprise. So, like, don't do that, because guardrails are kinda like, I don't know, the pirate's code: they're like a suggestion that it might listen to, or not.

Or you can code it specifically, and then you have the problem of, well, what it inserted, right, if you're using AI to do it, because it expects that you will check it. So you've gotta be a lot more vigilant and, you know, build these with intention, and then put the security into it, recognizing that, right, they don't have identities.

Maybe they need different ones, maybe they need separate ones. Maybe they need multiple ones as they're going through a task. These are not bash scripts, right? They're not.

Joel Moses: So one thing that I want everybody to understand is that it's not just a single identity exchange.

There are two different ones for agentic AI. The first is establishing a [00:09:00] level of authorization between agents: is the agent that is on one side talking to the correct agent on the other side, and not some attacker that's masquerading as that agent? So you have agent-to-agent authentication, and that's literally these NPEs and these NPEs talking to each other and authorizing each other.

Now, underneath that, you also have to assert, via, for example, you know, RAG or something like that, that you have the right to act on behalf of a user over this established agent-to-agent connection. So it's layers of identity: the top layer being the agent-to-agent communication, and the underlying being the validation that the agent is acting on your behalf.
That is, it's an assertion, effectively. You know, we see this all the time in, like, OAuth role designations, right? If you've ever granted rights to see your Google Drive to a third-party [00:10:00] application, you've seen it: you granting the ability to act on your behalf to this third-party service. And that grant has to then be represented by the agents.

Now, the difference here is, that's something that I am going and telling: okay, this piece of software has the right to act on my behalf for this particular purpose, but I'm in control of when that occurs. Right? I set the schedule for that. Agentic AI sets its own schedule and path.

It may or may not use that data set. It may or may not use that role on a regular basis, but it uses it when it decides that it should be appropriately used. So these are transactions that represent identities on a non-regular basis. They act as you.
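
In code, Joel's two layers come down to two separate verifications before an agent is allowed to act. The sketch below assumes JWTs carry both the agent's identity and the user's grant (a real deployment might use mTLS for the agent layer); the jsonwebtoken npm package is assumed, and claim names like azp and the agent-gateway audience are illustrative.

```typescript
import jwt from "jsonwebtoken"; // assumes the jsonwebtoken npm package

function authorizeAgentCall(
  agentToken: string,      // layer 1: is this really the peer agent?
  delegationToken: string, // layer 2: may it act for this user, and for what?
  requiredScope: string,
  trustedKey: string,
): void {
  // Layer 1: the calling agent proves its own (non-person) identity.
  const agent = jwt.verify(agentToken, trustedKey, { audience: "agent-gateway" });
  if (typeof agent === "string" || !agent.sub) {
    throw new Error("agent token has no subject");
  }

  // Layer 2: a separate, user-issued grant that names that agent and a scope.
  const grant = jwt.verify(delegationToken, trustedKey);
  if (typeof grant === "string") throw new Error("malformed delegation token");
  if (grant.azp && grant.azp !== agent.sub) {
    throw new Error("grant was issued to a different agent");
  }
  const scopes = String(grant.scope ?? "").split(" ");
  if (!scopes.includes(requiredScope)) {
    throw new Error(`grant lacks scope: ${requiredScope}`);
  }
  // Only now act: as grant.sub, within requiredScope, and nothing more.
}
```

Either check failing alone should stop the transaction; the agent being genuine doesn't imply the user ever delegated anything to it.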

Lori MacVittie: They, well, they, and maybe they shouldn't.

So, two things. One, you keep calling them NPEs, and all I can hear is the guy from [00:11:00] Transformers, right, talking about Megatron. Yeah. And, you know, NBE-1. And I kind of like that idea for naming our agents. I mean,

Joel Moses: Yeah. A non-person entity is actually, you know, any web server. Yeah. Any web server is a non-person entity when you put a certificate on it.

Uh, so this is something the industry's been struggling with for a while: just the spread of secrets and keys. It's difficult enough to create an identity that matches a human and identifies or authorizes a human, but for non-person identities, there are far more of those than there are humans.

And they can include, you know, identities for organizations, identities for hardware objects, devices, software objects, and even data or information artifacts. All of these things are non-person identities. The difficulty with AI agents is they make the granting of roles and assertions of roles a lot more difficult, and a lot more dynamic.
So it's even harder [00:12:00] to see and control.

Lori MacVittie: Say it. Say it like you mean it. And this'll be good for you, Peter, 'cause maybe you can answer the question. What agents and agent architectures are doing, and I've done some digging into it, is breaking every security assumption that we have held.

Every attempt that we take is based on traditional methods: how do we manage identity, how do we match identity to people, roles, role-based access. All of those security assumptions, and that it's external, are being completely blown apart by not just agents but agentic architecture, and that problem of authenticating together, charting your own path, random flows, random roles, dynamism.

So, right, how do you deal with that? And maybe that in part solves the problem of bad practices, because if we're no longer gonna rely on hard-coded keys and credentials, right, maybe that goes away. But okay, you can solve it, Peter. How [00:13:00] do we even start to address that problem?

Peter Scheffler: Well, I do think there are things like the Open Agent Protocol, so there are protocols that are being proposed. Nothing's in place yet, right? So we've seen Google's Agent2Agent, and, you know, there are some others. Um, Microsoft's also got one, the name escapes me now, AutoGen or something like that, or whatever it's called. Um, so there are several that are, let's use the word, nascent.

I mean, you know, this podcast is, you know, mid-2025. Maybe someone watches this in 2026 and they're like, it's still nascent. I don't know. Right. Um, could be. But this isn't a new problem. This isn't something that we've been able to properly address. I think OAuth, from a user perspective, did as good a job as we possibly can,

'cause there was a granting system, there was a validation [00:14:00] system on the tokens, and it allowed us to have the explosion of APIs. We wouldn't have had the explosion of APIs that we had without something like OAuth. And I think one of these protocols, and I'm not hedging my bets on which one, they're gonna allow us to start using, you know, secret storage capabilities that allow these agents to properly do that. And then be able to go, and Joel, you mentioned something earlier, is this really the entity I wanna talk to? Right. So we need to make sure that we're talking to the right agent.

So that means some sort of cryptographic identity for that thing, and then there needs to be some sort of signing process behind that. And then we also need a means of properly granting the, you know, the capabilities to that agent, to say: you're allowed to go get the information about this, but you're not [00:15:00] allowed to do anything with it.
That has to be a different agent. Maybe it's the same agent with a different identity, that's possible. But you have to be able to define those boundaries. So those aren't guardrails, those are hard and fast concrete walls that, you know, we need to put around these things so that we define them.

That, you know, you hit them, things break. It's not bouncing against the, you know, the barrier and sort of moving back onto the road. We don't want people hitting these, we want them to stop. So defining very strict boundaries on how the agent can act allows us to do the proper things, and then once we start coding it and the models start learning it, maybe we start getting better towards the end.

I don't know if anybody has the answer yet. Maybe, Joel, you know better than I do, but I don't know if we've got one yet. So.
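
Peter's distinction between guardrails and concrete walls has a direct code analogue: the wall is a deny-by-default allow-list enforced outside the model, so an agent cannot talk its way past it. A minimal sketch, with illustrative agent names and actions:

```typescript
// Deny-by-default capability map, defined by the operator, not by the model.
type Action = "inventory:read" | "inventory:write" | "payments:charge";

const capabilities: Record<string, ReadonlySet<Action>> = {
  "reporting-agent": new Set<Action>(["inventory:read"]),
  "fulfillment-agent": new Set<Action>(["inventory:read", "inventory:write"]),
};

function enforce(agentId: string, action: Action): void {
  const allowed = capabilities[agentId];
  if (!allowed || !allowed.has(action)) {
    // Hitting the wall ends the task; it doesn't nudge the agent back on course.
    throw new Error(`DENIED: ${agentId} has no grant for ${action}`);
  }
}

enforce("reporting-agent", "inventory:read");  // allowed: it may look
enforce("reporting-agent", "inventory:write"); // throws: it may not touch
```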

Joel Moses: No, I think the answer to it is the same answer that you would give to [00:16:00] a lot of different security problems. You go through your steps: discover it, observe it, control it, put a policy and a process around it.

Those are really the only ways that you're gonna be able to defend yourself against some of these things. Non-person entities have been a problem for a very long time, and that problem will be magnified in an era where agents are commonly using personal identities and non-human identities to conduct transactions on your behalf.

So, it's definitely a place we need to watch.

Lori MacVittie: Well, what about something like privileged user access and ephemeral credentials that are on-the-fly, temporary, based on what you're trying to do right now, given the role that's inferred? Would that perhaps be a solution, or at least a baseline for a better solution, to solve this problem?

Peter Scheffler: I think that's what we need to get to. It's just, how do those [00:17:00] ephemeral account credentials get spun up? Yeah, and spun down too, right? So, you know, with a set lifespan on them.

Joel Moses: Revocation.

Peter Scheffler: Yeah, absolutely. Like we have a problem with certificate revocation lists, and, you know, this is just gonna be compounded when we go from 7 billion people on the planet to 700 billion, you know, non-person entities communicating with each other.
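
The shape of what Lori is proposing can be sketched in a few lines: credentials minted per task, with a built-in lifespan and an early-revocation path, so most of them are dead long before any revocation list would have to mention them. This is a toy in-memory version; a real issuer would persist state and tie issuance to the inferred role.

```typescript
import { randomBytes } from "node:crypto";

interface EphemeralCred {
  id: string;
  role: string;
  expiresAt: number; // epoch millis: spun down automatically
}

const live = new Map<string, EphemeralCred>();

// Spin up: one task, one credential, one short lifespan.
function mint(role: string, ttlMs: number): EphemeralCred {
  const cred = {
    id: randomBytes(16).toString("hex"),
    role,
    expiresAt: Date.now() + ttlMs,
  };
  live.set(cred.id, cred);
  return cred;
}

// Spin down early, for the cases TTL doesn't catch.
function revoke(id: string): void {
  live.delete(id);
}

function isValid(id: string): boolean {
  const cred = live.get(id);
  if (!cred) return false;
  if (Date.now() > cred.expiresAt) {
    live.delete(id); // lazily expire on first check after the deadline
    return false;
  }
  return true;
}

// Five minutes of life for the role inferred for this one task.
const cred = mint("invoice-reader", 5 * 60 * 1000);
console.log(isValid(cred.id)); // true, until TTL elapses or revoke(cred.id)
```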

Joel Moses: Well, I mean, grounding it to the impact on a single person: think about the number of things you've given your accounts rights to access. And with the explosion of agents, you're gonna be granting access to your emails. You're gonna be granting access to payment accounts, potentially. You're gonna be granting access to information stores, social accounts, photo libraries, et cetera, potentially. And what I,

Lori MacVittie: Maybe you are, but I'm not, I'm just saying. I'm not silly.

Joel Moses: The thing is, what happens when you need to go back to all of these distinct services [00:18:00] that you granted rights to and remove the access from the agent? Yeah. Will you even remember what rights you've granted, or what?

Peter Scheffler: I don't remember the rights I granted to something this morning.

Joel Moses: Exactly. So, yeah, I used that wrong. So we're gonna need to pay attention to this in the future, much, much greater attention, because these systems, you can kind of control them by the policies that you set on them today. When you put them onto an AI agent, the agent is in control of the policy and the usage of those services, and it's acting on your behalf. So again: discover it, observe it, control it, put a policy and process around it, and you won't be affected.

Lori MacVittie: Yeah, I think part of it is, it's so new, it's so exciting. You wanna play with this stuff, right? I mean, I do it on my laptop as well, right? I'm, you know, writing code, trying to play with things. What can I do? How can I do it? And people get very excited when a tool will build something for them that [00:19:00] will take work off their plate, because who doesn't want that, right?

Yeah. Everybody wants to, you know, offload the toil. Right. Get rid of the yak shaving. Somebody else can do that. Yeah. I just wanna do the fun stuff. So there are so many tools out there that are letting you do it. Yeah. But, right, as you both pointed out, and Peter, you said it, right? They learned on the bad practices.

Yeah. So they propagate it when you're using those tools. So I think it's also a matter of, right, you need to be a little bit more, I don't know, oversight-y, governance-y, with respect to the kinds of tools that people are using and building, because shadow agents are already a thing, right?

People are building them and we need to get a handle on that as well, so they don't propagate a lot more of this bad stuff.

Joel Moses: And one thing also to mention: secure supply chain. We've actually seen attacks against agent landscapes. Stripe published its MCP support, its MCP server [00:20:00] support, for payment processing.

Almost immediately on npm, the JavaScript package manager, there were false packages being deployed out there for Stripe MCP support, whose goal was to act as the Stripe MCP server, then turn around and proxy to the actual Stripe MCP service. Yeah. And keep a copy of the payment information on the side.

So again, the fact that these things have interfaces is great. The fact that you can authenticate and authorize over the top is great. But do you know if you are correctly authenticating and authorizing the correct service? So software supply chain is still a part of this.
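
One concrete defense against the substitution Joel describes is refusing to trust any package tarball whose hash doesn't match the integrity value you pinned when you reviewed it, the same sha512-... strings npm records in package-lock.json. A minimal sketch; the file path and pinned value are placeholders for your own lockfile's entries.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compare a tarball's digest against a pinned "algo-base64digest" string.
function verifyTarball(tarballPath: string, pinnedIntegrity: string): boolean {
  const idx = pinnedIntegrity.indexOf("-");
  const algo = pinnedIntegrity.slice(0, idx);      // e.g. "sha512"
  const expected = pinnedIntegrity.slice(idx + 1); // base64 digest
  const actual = createHash(algo)
    .update(readFileSync(tarballPath))
    .digest("base64");
  return actual === expected;
}

// A mismatch means this is not the package you reviewed: stop before it
// ever gets wired into an agent, let alone a payment flow.
const ok = verifyTarball(
  "./cache/some-mcp-package-1.0.0.tgz",
  "sha512-REPLACE_WITH_THE_VALUE_FROM_YOUR_LOCKFILE",
);
if (!ok) throw new Error("package integrity mismatch; refusing to install");
```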

Peter Scheffler: The other thing you have to worry about too, and I don't wanna open the door and have another 45-minute conversation, is the fact that all these agents are logging stuff. Right. So, you know, we want to know what they're doing. We want to know what's passing through these. But if these [00:21:00] agents aren't defined to properly identify PII and other things, right,

are they exposing your credentials? Are you sending some sort of, you know, proprietary definition of your credentials? It's not a JWT token, so it doesn't know that. So how do you know? Those agents need to understand: this is data that I shouldn't be logging. So that's a whole other can of worms that starts happening on the other side once these things start talking back and forth. Now we have to go and make sure our logs are scrubbed and we're not exposing information there too.
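
The log-scrubbing Peter is calling for can start as simply as a wrapper that masks anything token-shaped before it reaches the sink. The patterns below are illustrative, and that's exactly his point: a proprietary credential format needs its own detector, because nothing generic will recognize it.

```typescript
// Redaction pairs: anything matching the pattern is replaced by the mask.
const redactions: Array<[RegExp, string]> = [
  [/eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/g, "[REDACTED_JWT]"],
  [/Bearer\s+[A-Za-z0-9._-]+/g, "Bearer [REDACTED]"],
  [/\b\d{13,16}\b/g, "[REDACTED_PAN?]"], // crude card-number-shaped match
];

function scrub(message: string): string {
  return redactions.reduce(
    (msg, [pattern, mask]) => msg.replace(pattern, mask),
    message,
  );
}

function logSafe(message: string): void {
  console.log(scrub(message)); // the sink only ever sees the masked string
}

// The JWT never reaches the log, even when an agent naively echoes a request.
logSafe("calling API with Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.e30.sig");
```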

Joel Moses: Grandma always said, don't talk to strangers.

Peter Scheffler: That's, that's good advice.

Lori MacVittie: That's really good advice. Yeah.

Peter Scheffler: Yeah. We can go back to living in caves and talking to people face to face and, you know.

Lori MacVittie: Right, right. Don't, yeah. All right. Before we end up with a plan to move into caves and cook our food over a fire again, I think we're gonna wrap this episode, 'cause you're right, we could talk hours more on this subject and all the little [00:22:00] nuances that come in. Because one, it's new, it's moving that fast, and it really is, right, this one is breaking the stack, the security stack. And we need to, not necessarily slow down, but be more observant, right? Pay attention to what's going on. Be aware of the risks of the different ways you're building agents, who's deploying them, what you're using, and then what the practices are that you're following.

So, you know, we're gonna wrap it up. That's it for this episode of Pop Goes the Stack, where the tech is bleeding edge and your sanity is indeed a deprecated feature. If you survived this conversation, congratulations, you're ahead of the curve for this week. But, you know, be sure to subscribe, leave a review, or, you know, just scream into the void.
Whatever's gonna help you cope with everything you've heard here today. We'll be back again with more ways emerging tech is rewriting the rules and breaking your stack. Until then, stay curious, stay cautious, [00:23:00] and don't, don't hard-code your credentials.

Joel Moses: Ever.

Creators and Guests

Joel Moses
Host
Distinguished Engineer and VP, Strategic Engineer at F5, Joel has over 30 years of industry experience in the cybersecurity and networking fields. He holds several US patents related to encryption techniques.
Lori MacVittie
Host
Distinguished Engineer and Chief Evangelist at F5, Lori has more than 25 years of industry experience spanning application development, IT architecture, and network and systems operations. She co-authored the CADD profile for ANSI NCITS 320-1998 and is a prolific author, with books spanning security, cloud, and enterprise architecture.
Peter Scheffler
Guest
Peter Scheffler is a Senior Solutions Architect at F5, focused on API security, post-quantum cryptography, and next-gen app delivery. Known for translating complex tech into memorable stories, he blends deep expertise with real-world demos and a bit of humor to help teams secure and scale modern applications.