Agent Identity Crisis: Access, audit, and “soul.md”

00:00:05:02 - 00:00:23:00
Joel Moses
Welcome back to Pop Goes the Stack, the podcast where shiny techniques meet the messy reality of production. This episode is coming to you from the show floor at AppWorld, where the demos are polished, the buzzwords are flowing, and the WiFi is, I guess, doing its best. I'm Joel Moses, stepping out of the co-host chair to run the show today.

00:00:23:00 - 00:00:32:05
Joel Moses
Lori MacVittie couldn't make it to the event, but I'm joined by a guest copilot, Oscar Spencer, to help translate conference energy into production-grade reality. Welcome, Oscar.

00:00:32:06 - 00:00:33:16
Oscar Spencer
Oh, it's amazing to be here.

00:00:33:18 - 00:00:55:00
Joel Moses
So we're going to go big right out of the gate. Today, we're talking about an existential crisis. An agent identity crisis. Who am I right this minute? Because identity isn't as static as our models would assume. As agents become more capable and more variable, do we force them into the same fixed credential boxes we use for everything else?

00:00:55:02 - 00:01:25:21
Joel Moses
Or does identity itself have to evolve to reflect what the agent is doing, when it's doing it, who it's acting for, and what context it's operating in? Identity seems to be one of those things that just isn't getting enough attention in the era of AI. Now, to dig into that, we're joined by F5's Chief Product Officer, Kunal Anand, whose time is infinitely precious, to discuss how identity, access, and trust can change when the user and the software can shapeshift.

00:01:25:23 - 00:01:29:05
Kunal Anand
I am always down to talk to Joel OpenClaw Moses.

00:01:29:05 - 00:01:33:20
Joel Moses
Oh wow man, I'm telling you, that's like my

00:01:33:20 - 00:01:36:20
Kunal Anand
OC.

Joel Moses
that's like my pledge name now.

Kunal Anand
OC, that's it.

Joel Moses
Great. Fantastic.

00:01:36:22 - 00:02:04:08
Joel Moses
So, identity in the era of AI, especially as we move into agentic AI structures: not only are we asking software to do things for us, we're asking it to take initiative on behalf of us, which means in some cases we're disconnecting identity from being sampled live and embodying something with identity. What are the ramifications of that? What do you think we need to do to constrain and control identity?

00:02:04:10 - 00:02:27:19
Oscar Spencer
Yeah. So I think this is one of the fundamental issues that we're dealing with today. I never, ever want an agent to say it is me because you are not me, you are an agent, acting on my behalf. And I want that to always be reflected in every single audit log. Because eventually when I go back to see what has gone terribly wrong--you know, who authorized that purchase

00:02:27:20 - 00:02:51:06
Oscar Spencer
for me, who ran that script for me--I want to know exactly what chain, at what point any agent did this thing for me, because at this point in time, we've got amazing AI tools where I don't necessarily know all the agents that I have working for me, right?

Joel Moses
Right.

Kunal Anand
Right.

Oscar Spencer
And especially with different agents kicking off other agents and things, I really want to be able to have full control over what all of it looks like together.

Joel Moses
Yeah.

00:02:51:06 - 00:02:58:17
Oscar Spencer
So this is something that we actually need to take very seriously, and have finer-grained access control on everything, if we're going to be going down this route.

00:02:58:17 - 00:03:09:17
Kunal Anand
Did you ever, for both of you, did you ever create a soul.md file?

Joel Moses
Yes, soul.md.

Kunal Anand
Did you ever do that? Yeah.

Oscar Spencer
I've never.

Kunal Anand
You've never done it?

Oscar Spencer
No.

Kunal Anand
Do you know what we're talking about?

Oscar Spencer
I don't. Please tell me.

Kunal Anand
Do you want to do it?

00:03:09:18 - 00:03:28:23
Joel Moses
So, I've experimented with this. So, in creating some of the things that I needed to do for agentifying some home automation tasks that I was working on, you create a soul.md file, which basically gives it its governing mechanism, its rationale, its reason for being. It gives it a purpose.

00:03:29:01 - 00:03:30:05
Oscar Spencer
This is fascinating.

00:03:30:07 - 00:03:31:01
Kunal Anand
Yeah, it's existential stuff.

00:03:31:01 - 00:03:50:17
Joel Moses
It is existential. Now, again, the difficulty that I have is that I created that document. I knew that I created that document. Not every person who deploys agentic AI is going to have the ability to do that. Like, I got to define exactly how that agent should process the data that I give it rights on behalf of,

00:03:50:17 - 00:04:16:05
Joel Moses
but not every agentic system works that way.

Kunal Anand
Yeah.

Joel Moses
And, you know, when you launch an agent, you give it an initial grant, right? You tell it, "okay, you're me, for all intents and purposes, and here are the classes of things that you can do." And then you walk away and you become untethered, unattached from the initial grant and the agent then becomes, for all intents and purposes, you, working on behalf of you.

00:04:16:07 - 00:04:44:01
Kunal Anand
That's right.

Joel Moses
With your password

Kunal Anand
Yeah.

Joel Moses
effectively. And what happens if it goes awry? What happens if, you know,

Kunal Anand
Totally.

Joel Moses
your grant changes. That's another thing. It gets a point-in-time grant, but what happens if that access right is dropped? That token may still be associated with the old grant, right? So it strikes me that the way we authenticate and the way that we authorize is very much transactional, and it needs to move beyond that.

00:04:44:01 - 00:04:44:15
Joel Moses
What do you think?

00:04:44:15 - 00:05:03:07
Kunal Anand
There's a new paradigm that scares me that's on the Discords now, which isn't self-evolving agents, because that was sort of the thing with OpenClaw, right, which was how do we create these things, and can these things evolve naturally because they can update their own code. And the notion of a soul.md is fascinating. It's somewhat existential, right?

00:05:03:07 - 00:05:26:03
Kunal Anand
Which is, you seed this thing with whatever you want. So the creator of OpenClaw, he's the one who kind of created this soul.md file concept. And I love this idea. You can do all sorts of things with it and it can evolve. But those sort of create the governing parameters. For your home automation stuff, you should think of it like you're the Bob Vila of agents.

00:05:26:08 - 00:05:43:13
Joel Moses
You know, I gotta tell you, when I first started editing that document, I actually, I stepped back for a moment. I thought about the existential ramifications.

Kunal Anand
Yeah.

Joel Moses
If you want to describe what your purpose in life is, could you actually rationally write that accurately? You know what I mean.

00:05:43:13 - 00:06:05:05
Kunal Anand
Totally. Well, let me now share the thing that scared me when I read it on Discord, which was the ability for agents to create other agents. Not necessarily evolving itself. But what happens when you create those governing parameters in a soul file, or in an agent.md file, or whatever it is — what happens when those things can go and create other agents?

00:06:05:08 - 00:06:37:07
Joel Moses
Right.

Kunal Anand
That is the thing that's super fascinating. And it kind of goes back to what Oscar was describing, which is: we know what grants you gave agent A, but if agent A goes and creates, from scratch, agents B and C, what's stopping agent A from granting things to agents B and C?

Joel Moses
Absolutely.

Kunal Anand
And — I'm just going to pretend to be quasi-logical here — the grants it gives should likely be a subset of whatever grants A got.

Joel Moses
Yeah.

00:06:37:08 - 00:06:51:04
Kunal Anand
But theoretically, assuming that chain of provenance exists, you could do that. But in this crazy, dynamic, weird, self-updating, do-whatever-you-want, YOLO world, I don't know if that's even possible.
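The subset rule Kunal sketches — a child agent may only receive grants its parent already holds — can be expressed in a few lines of Go. This is a hypothetical illustration; the `Grant` type and `Delegate` function are invented for the example, not taken from any real framework.

```go
package main

import "fmt"

// Grant names a single capability, e.g. "db:read" or "calendar:write".
// The grant strings here are illustrative.
type Grant string

// Delegate returns the grants agent A may pass to a child agent:
// only those requested grants that A itself holds. Anything outside
// A's own grant set is dropped, so a child can never exceed its parent.
func Delegate(parent map[Grant]bool, requested []Grant) []Grant {
	var allowed []Grant
	for _, g := range requested {
		if parent[g] {
			allowed = append(allowed, g)
		}
	}
	return allowed
}

func main() {
	agentA := map[Grant]bool{"db:read": true, "calendar:read": true}
	// Agent B asks for more than A holds; the extra grant is refused.
	agentB := Delegate(agentA, []Grant{"db:read", "db:write"})
	fmt.Println(agentB)
}
```

Applied transitively — B delegating to C runs the same check against B's (already narrowed) set — this gives the monotonically shrinking chain of authority the conversation is after, provided the provenance chain actually exists.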

00:06:51:04 - 00:07:12:20
Oscar Spencer
You know, but it isn't to say that I don't want this to happen, right? Like, I would love it if agents B and C were just created automatically and just could solve all my problems and do all the things. Just, it's a big question of how do we do it safely?

Kunal Anand
Yeah.

Joel Moses
Yeah.

Oscar Spencer
That's the question. And it almost begs the question of: can we vibe our soul.md's?

00:07:12:22 - 00:07:17:22
Kunal Anand
Oh, wow. We are doing this right now, I love it.

Joel Moses
Meta.

Kunal Anand
Yeah.

Oscar Spencer
No, but, seriously.

Kunal Anand
Let's go, let's go for it.

00:07:17:22 - 00:07:30:10
Oscar Spencer
No, but genuinely, I would love it, you know, if I could create a set of guidelines around, hey, this is how I want my agents to act, right? And so when you create a new agent for me, you're actually giving it those same principles.

00:07:30:12 - 00:07:55:01
Joel Moses
So this is definitely an area, though. Like I mentioned, it's transactional in nature, and a transaction unfortunately occurs at a point in time, and at that point in time you have a context. As you drift farther away from the transaction, you begin to lose the context.

Kunal Anand
Totally.

Joel Moses
And as you imbue another agent, B or C, they don't have the same context that the origination had when you first authenticated and authorized it, right.

00:07:55:01 - 00:08:18:04
Joel Moses
So, you know, you would do this using SAML technologies, you would chain SAML assertions to each other to coalesce something, or you would use an OAuth token. But those are all, like I said, transactional identities.

Kunal Anand
Right.

Joel Moses
And "on behalf of" identities

Kunal Anand
Yeah.

Joel Moses
are an idea that has existed for a long time. It hails back to, like, Kerberos days.

00:08:18:06 - 00:08:19:07
Kunal Anand
Oh geez. I'm shivering right now.

00:08:19:07 - 00:08:47:22
Joel Moses
But honestly, there's no standard for transmission of these elements.

Kunal Anand
That's true.

Joel Moses
You can actually ask, for example, an agent to create a secrets file and maintain a secrets file. Now, if you're very good at it, you can specify what the controls are on the secrets file, and maybe what the constraints are in redistributing the secrets file, but these things are going to keep your passwords.

Kunal Anand
Yeah.

Joel Moses
Keep your tokens.

Kunal Anand
Totally

Joel Moses
And so, it seems to me that we haven't adequately tackled this problem.

00:08:48:00 - 00:09:06:12
Kunal Anand
So I would agree with that. I think we're still in the exploratory fun phase

Oscar Spencer
Yeah.

Kunal Anand
of the technology.

Joel Moses
Yeah.

Kunal Anand
And I feel like we are--and I hear you around the security side of it, that stuff definitely keeps me up at night. I think it keeps all of us up at night.

00:09:06:12 - 00:09:37:19
Kunal Anand
Like, how do you do this safely and securely? There are sort of other meta technical questions here around identity, and whether the current tools and technologies that we have today are good enough for it. I don't think so. I think the fluidity of all of the technology — the fact that we're transmitting, and you said this well — means that basically, when A spawns B or C, it has to sort of create the rules of the road for B and C in text files, and it could pass the original context that it got when it was created.

00:09:37:21 - 00:09:49:04
Kunal Anand
But you're basically relying on text as your transmission vehicle. And at some point when you get to agents X, Y, and Z, how much of the context window have you chewed up? Because you've like literally

00:09:49:06 - 00:09:51:16
Oscar Spencer
You know, the context game of telephone.

00:09:51:18 - 00:10:11:05
Kunal Anand
Yeah. Like, you have used all your tokens, thanks. By the time you get there, the context window is completely full because of all the context from, you know, like, A through U.

Joel Moses
Right.

Kunal Anand
You know and you just don't have more space left for doing what you want to do. It's fascinating, right?

Kunal Anand
But we're in the fun phase though.

Joel Moses
We are in the fun

Kunal Anand
Aren't you guys having fun?

00:10:11:05 - 00:10:16:23
Joel Moses
No, no, no, it's absolutely fun.

Oscar Spencer
I mean, it is fun as long as everyone is looking at it as fun right now.

Kunal Anand
Right.

00:10:17:00 - 00:10:33:04
Joel Moses
Yeah, and so

Kunal Anand
That's the point.

Kunal Anand
That's the point. But I think that's, we have to admit that that's where we are.

Oscar Spencer
Yes.

Joel Moses
Yeah.

Kunal Anand
And it's kind of like, if you remember the early days of the internet, it was so fun.

Joel Moses
Yeah.

Kunal Anand
You know, the early days of the internet, I would argue were way more fun than what they are right now.

00:10:33:04 - 00:10:39:00
Oscar Spencer
Oh yes.

Joel Moses
Although, you know, it's draw another ____, and we're in Las Vegas, so I have to invoke this one.

00:10:39:00 - 00:10:41:03
Kunal Anand
I say fun and you say Las Vegas.

00:10:41:03 - 00:10:42:12
Joel Moses
Just, well, yes. Of course.

00:10:42:14 - 00:11:06:00
Joel Moses
But just north of us we experimented with a production technology that had a very high blast radius before we really understood what good practices and good safeguards were, and as a result, we increased the background radiation for the entire planet. Nuclear testing.

Kunal Anand
Yeah.

Joel Moses
Right? In the early days, we didn't understand what the constraints of the technology were, what the risks of the technology were.

00:11:06:02 - 00:11:18:02
Joel Moses
But we rushed really hard into experimenting with it openly, publicly, without consideration for what that might cause. And I do think that we are at that stage of this particular technology as well.

00:11:18:07 - 00:11:19:05
Joel Moses
We're not

Oscar Spencer
But what do we do about it?

00:11:19:08 - 00:11:41:21
Joel Moses
Well, first of all, I think it's a good idea to understand the blast radius. We already talked about transitive identity and agents.

Kunal Anand
Yeah.

Joel Moses
Understand that every single time an agent creates another agent, the blast radius increases. It's not just you being able to grant your password, it's your password being able to be granted by two other authorizing parties.

00:11:41:22 - 00:11:49:12
Joel Moses
So that should give people pause.

Kunal Anand
Yeah.

Joel Moses
This should be a design aspect of the systems that you create, in my opinion.

00:11:49:14 - 00:12:05:22
Kunal Anand
I guess, let me ask you guys this question. Which is, so let's be practical for a moment, let's say that — 'cause I'm just trying to build a mental model here, and I think the listeners and viewers are probably trying to follow along and parse this thing. Although now they're thinking about fun in the desert,

Joel Moses
Oh, well, you know.

Kunal Anand
which is a totally different thing.

Joel Moses
Sure.

00:12:05:22 - 00:12:29:15
Kunal Anand
But let's be practical, and let's assume that you've got an agent, and you want this agent to be able to read information from a database — insert your favorite data store or database — that has a username and password. I'll ask you the questions and you tell me how you would solve these things, because I'm just trying to chain that.

00:12:29:17 - 00:12:54:00
Kunal Anand
Okay. So you've got, let's say, the calling program — I'll pick Go for whatever reason; I like it. Why not? So let's say you have an application in Go, and you want to be able to call an agent to go and fetch data from this database. Where do you store the credentials that the agent is going to use to fetch that information?

00:12:54:00 - 00:12:56:05
Kunal Anand
What would you do?

00:12:56:07 - 00:13:00:17
Joel Moses
Ooh.

Oscar Spencer
Credentials agent.

00:13:00:19 - 00:13:01:03
Kunal Anand
Look at you.

00:13:01:04 - 00:13:02:08
Oscar Spencer
No, seriously.

00:13:02:10 - 00:13:10:00
Kunal Anand
Oh, I love this. Oh, this is great. Oscar's answer is, I'm going to write an agent that wraps 1Password.

00:13:10:00 - 00:13:13:16
Oscar Spencer
No, but genuinely, because the thing is, like, I actually want

00:13:13:16 - 00:13:16:20
Kunal Anand
I'm projecting and thinking you're a 1Password kind of person.

00:13:16:20 - 00:13:20:09
Oscar Spencer
I am a 1Password for Families subscriber.

00:13:20:11 - 00:13:23:22
Kunal Anand
Oh, geez. I was going to say like for OpSec, let's not.

00:13:24:00 - 00:13:40:15
Oscar Spencer
Yeah. No, no. No, but genuinely though, I think where your credentials are stored, I want there to be one place. I don't want my credentials just floating unfettered throughout many different agents. I want agents to say, "Hey, I need to talk to this other agent. This is the one that gives me access to things."

Joel Moses
Yeah.

Kunal Anand
Okay.

00:13:40:21 - 00:13:57:01
Oscar Spencer
I think at least lowering that blast radius to this one place is going to be something that helps us curb some of this. Because otherwise

Joel Moses
That's a good idea.

Oscar Spencer
if the credentials are just everywhere, especially if they're a plaintext username and password, at that point they could end up in some training data sets.

00:13:57:01 - 00:14:03:09
Joel Moses
He's not wrong. Like there needs to be a place where purpose, context, and constraints are recorded alongside the credentials that are being held.

00:14:03:12 - 00:14:26:12
Kunal Anand
Yes, I agree with you there. You just did the XKCD thing without realizing it. I'm like, Oscar, how would you imbue this thing with information to go and get data? And you're like, I'm going to go build a credential agent, which is, like, that's rad.

Oscar Spencer
Yeah.

Kunal Anand
Walk me through how you would construct that — like, really walk me through, what are the primitives of that thing?

00:14:26:12 - 00:14:44:17
Kunal Anand
Like, where would you store that data? Would you literally keep it in a vault, like 1Password, and then pull it out of there?

Oscar Spencer
Oh, yeah. No, you know, genuinely, in a vault — 1Password, HashiCorp, whatever, just go wild, right. And then you give your agent a credential that lets it unlock that vault.
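Oscar's single-vault pattern — the agent holds exactly one credential, which unlocks a store that holds everything else — can be sketched like this. The `Vault` type below is an in-memory stand-in for a real secrets store (1Password, HashiCorp Vault, etc.); all names and the unlock-token scheme are illustrative assumptions, not any product's actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// Vault is a stand-in for a real secrets store. A production agent
// would talk to 1Password or HashiCorp Vault over their APIs; the
// shape of the idea is the same.
type Vault struct {
	unlockToken string            // the one credential the agent holds
	secrets     map[string]string // everything else lives here
}

// Fetch releases a named secret only to a caller presenting the
// vault's unlock credential. The agent never stores the secrets
// themselves, so they can't leak into prompts or sub-agents.
func (v *Vault) Fetch(token, name string) (string, error) {
	if token != v.unlockToken {
		return "", errors.New("vault: bad unlock credential")
	}
	s, ok := v.secrets[name]
	if !ok {
		return "", errors.New("vault: no such secret")
	}
	return s, nil
}

func main() {
	v := &Vault{
		unlockToken: "agent-bootstrap-token",
		secrets:     map[string]string{"db/password": "s3cret"},
	}
	// The agent presents its single bootstrap credential at use time.
	pw, err := v.Fetch("agent-bootstrap-token", "db/password")
	fmt.Println(pw, err)
}
```

This is exactly the blast-radius argument: compromise of any one agent yields a revocable unlock token, not the plaintext database password.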

00:14:44:19 - 00:14:51:17
Kunal Anand
Yeah. So, okay. But again, like, kind of going back to that. So you would still have to give that agent a credential.

00:14:51:17 - 00:14:56:01
Oscar Spencer
Yes. There, I don't know how we get around that.

Joel Moses
To get

Kunal Anand
Yeah, how do you do that?

00:14:56:01 - 00:15:09:07
Joel Moses
Now, to get the credential,

Kunal Anand
Yes.

Joel Moses
I think that the standard transactional natures of asking for a credential, getting a credential, and applying a credential need to be modified so that you create something that has to give you the context under which it's operating too,

00:15:09:09 - 00:15:09:18
Kunal Anand
I agree with that.

00:15:09:18 - 00:15:15:10
Joel Moses
and that agent should look at the context before allowing access to the credential for the other agent.
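Joel's refinement — the credential agent should inspect the requesting context before releasing anything — might look roughly like this. The `Request` and `Policy` types are hypothetical, a sketch of storing purpose and constraints alongside the credential as discussed, not an existing standard.

```go
package main

import (
	"errors"
	"fmt"
)

// Request carries the context Joel describes: who is asking, on
// whose behalf, and for what purpose. Field names are illustrative.
type Request struct {
	Agent    string // requesting agent
	OnBehalf string // the human principal the chain traces back to
	Purpose  string // why the credential is needed right now
}

// Policy records purpose and constraints alongside the credential
// itself, rather than handing out a bare secret.
type Policy struct {
	Secret          string
	AllowedPurposes map[string]bool
}

// Release checks the request's context before handing anything over:
// no principal in the chain, or an unapproved purpose, means no secret.
func (p Policy) Release(r Request) (string, error) {
	if r.OnBehalf == "" {
		return "", errors.New("release: no principal in the chain")
	}
	if !p.AllowedPurposes[r.Purpose] {
		return "", errors.New("release: purpose not permitted")
	}
	return p.Secret, nil
}

func main() {
	p := Policy{
		Secret:          "db-password",
		AllowedPurposes: map[string]bool{"quarterly-report": true},
	}
	if _, err := p.Release(Request{Agent: "report-agent", OnBehalf: "joel", Purpose: "quarterly-report"}); err == nil {
		fmt.Println("released for quarterly-report")
	}
	if _, err := p.Release(Request{Agent: "mystery-agent", OnBehalf: "joel", Purpose: "bulk-export"}); err != nil {
		fmt.Println("refused:", err)
	}
}
```

The design choice is that authorization becomes a function of live context rather than a point-in-time token: if the grant changes, the next `Release` call simply fails.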

00:15:15:13 - 00:15:19:00
Kunal Anand
So to nerd out, SPIFFE and SPIRE?

00:15:19:02 - 00:15:21:03
Joel Moses
I like SPIFFE and SPIRE as a carrier for that.

00:15:21:03 - 00:15:36:21
Kunal Anand
So I was going to go there, which is do you see an adaptation of SPIFFE and SPIRE in this context for sort of agents to communicate with sub agents? Because maybe the Go program, we think of it differently and maybe it's not an application.

Joel Moses
Yeah.

Kunal Anand
Maybe we think of it as sort of an agent. And then we think about all those other agents like as sub agents.

00:15:37:00 - 00:15:46:19
Joel Moses
Yeah.

Kunal Anand
And it's sort of a different meta way to think about it. And do you imagine that there is like a new type of protocol that emerges? Classic XKCD, like, we just need to go invent a new protocol. And like, why not?

00:15:47:00 - 00:16:09:01
Joel Moses
I think a new protocol's got to emerge. I think SPIFFE and SPIRE are great at forming links between, like, API endpoints and APIs,

Kunal Anand
Yeah.

Joel Moses
like host-to-host identity and grants for host-to-host operation. What I'm talking about is something that's even lower than that, where you have a number of different credentials forking off to multiple services, and the services have to communicate under the covers of this.

00:16:09:05 - 00:16:20:01
Kunal Anand
Yeah.

Joel Moses
First you have to authorize the services to talk to each other. And then underneath that you have to authorize the purpose for which the transaction is occurring.

Kunal Anand
Totally.

Joel Moses
And I think something is going to emerge there.

00:16:20:04 - 00:16:33:06
Kunal Anand
Yeah. And what's going to be wild is

Joel Moses
It's got to.

Kunal Anand
like when it comes to sending that purpose, if you have a foundational model on the other side that's, like, the gatekeeper parsing the purpose, and it's like, I reject your claim. I reject your claim, agent.

00:16:33:06 - 00:16:37:05
Joel Moses
I reject your reality and substitute my own, as they say on MythBusters.

00:16:37:07 - 00:17:02:01
Kunal Anand
No, but this world is just so fun right now.

Joel Moses
It is.

Kunal Anand
It's so fun because we're watching it evolve in real time. And I don't know how we solve these problems. I mean, people talk about service accounts and, yeah, service accounts aren't going anywhere; they are what they are.

Oscar Spencer
Yeah.

Kunal Anand
But this world of identity and agents is so unique and there's so many ways to do this.

00:17:02:03 - 00:17:09:07
Kunal Anand
You've now, like, broken my brain with this idea of a credential agent.

00:17:09:09 - 00:17:10:10
Oscar Spencer
I mean, but you need

Joel Moses
It's not a bad idea.

00:17:10:14 - 00:17:10:22
Kunal Anand
It's not a bad idea.

00:17:10:22 - 00:17:21:04
Joel Moses
It's not a bad idea at all.

Oscar Spencer
you need a gatekeeper. Right? And the thing is, unfortunately, because this is something, you know, I haven't gone down the OpenClaw rabbit hole yet. I haven't done that yet.

00:17:21:09 - 00:17:22:14
Kunal Anand
Is there a reason why you haven't?

00:17:22:16 - 00:17:25:05
Oscar Spencer
I'm scared, Kunal.

00:17:25:07 - 00:17:32:18
Kunal Anand
Hold on. You're not scared to write a credential agent that has access to your 1Password, but you're scared of, like, firing up OpenClaw?

00:17:32:20 - 00:17:37:20
Oscar Spencer
Because I want to play around with it. But I think OpenClaw just has too much access to my things.

00:17:37:20 - 00:17:39:21
Kunal Anand
You got to give it a soul.

Joel Moses
That's right.

00:17:39:23 - 00:17:41:03
Oscar Spencer
I'll give it a soul, but

00:17:41:03 - 00:17:42:04
Joel Moses
Imbue it with a purpose.

00:17:42:05 - 00:17:44:01
Oscar Spencer
Yeah, is that enough?

Joel Moses
A dark purpose.

00:17:44:02 - 00:17:45:02
Kunal Anand
You've got to give it a soul.

00:17:45:02 - 00:17:46:18
Oscar Spencer
Is that enough?

00:17:46:20 - 00:17:47:15
Kunal Anand
Is it ever enough?

Oscar Spencer
No.

00:17:47:15 - 00:18:12:22
Joel Moses
It's never. It's never, ever, ever enough.

Oscar Spencer
I don't think it's ever enough. But I think genuinely having some type of gatekeeper is — the thing is, you know, I've been looking at, like, Claude Cowork as something that I want to try.

Kunal Anand
Oh yeah.

Oscar Spencer
Right. And again, access to all my things, which is like kind of scary. And so I know that one of the things Cowork does whenever it needs to do something, it'll pop up and say, "hey, can I have permission to, like, run this thing or do this or whatnot?"

00:18:13:03 - 00:18:25:19
Oscar Spencer
And that sounds great. But we realize as that's happening thousands of times, that's not sustainable. Right?

Joel Moses
No. That's not scalable either.

Oscar Spencer
And so I actually do want something like an AI that can help me.

00:18:25:21 - 00:18:44:22
Kunal Anand
That's not sustainable, not scalable, also not reliable. Let me give you a real war story real, real fast on Claude Cowork. So I use a Mac, and I'm a crazy note taker. I have notes that go back 25 years, right. And I have paper notes, whatever, Evernote, all these things I've had over the years; I've coalesced all that stuff.

00:18:44:22 - 00:19:06:13
Kunal Anand
Yes. I was Obsidian-pilled for a little bit too.

Oscar Spencer
Alright.

Kunal Anand
But, like, I got all that stuff out of there. I finally got into Apple Notes; life is good. But I used Claude Cowork recently to, like, parse through some notes, and it ended up wrecking the notes database that powers Apple Notes. It wrecked it as it was modifying it, and it corrupted all of my notes.

00:19:06:15 - 00:19:29:11
Oscar Spencer
For 25 years?

Kunal Anand
I am so lucky — for 25 years of notes — I'm so lucky that I had a backup database of the notes that I could effectively go and rebuild the database from. But it wrecked it, like, fully corrupted the database. And it's insane. So just be really, really careful from a reliability perspective, because you're talking about scalability, and you're talking about stability and all those things.

Joel Moses
That's right.

00:19:29:17 - 00:19:36:20
Kunal Anand
I'm talking about reliability. Like this thing fundamentally broke 25 years of notes. I think it's like 6000 plus notes in this app.

00:19:36:20 - 00:19:41:07
Oscar Spencer
And do you see why I'm terrified to try some of these things?

00:19:41:09 - 00:19:46:01
Kunal Anand
Don't be scared.

Oscar Spencer
Don't be, don't be scared?

Kunal Anand
Don't be scared.

Oscar Spencer
All right, just

Kunal Anand
And have backups. Hold on, I just shared that

00:19:46:03 - 00:19:46:23
Joel Moses
Have backups.

00:19:46:23 - 00:19:47:05
Kunal Anand
Hold on. Really, yes.

00:19:47:05 - 00:19:49:09
Joel Moses
Don't do it in a production environment.

00:19:49:11 - 00:19:50:17
Kunal Anand
And it's for the LOLs.

00:19:50:19 - 00:19:53:10
Oscar Spencer
It's for the LOLs?

Kunal Anand
Do it for the LOLs.

Oscar Spencer
It's for the LOLs?

Kunal Anand
Do it for the LOLs.

Oscar Spencer
Alright.

00:19:53:12 - 00:19:54:16
Kunal Anand
It's all about the LOLs.

00:19:54:19 - 00:20:18:01
Joel Moses
Well, with that, that's all the time we have for today.

Kunal Anand
Come on.

Joel Moses
Now, I want everybody to remember: autonomous AI agents sound great until you realize they're basically microservices with their own special initiative. Which means identity and access control matters a lot more than it used to. Every AI agent eventually becomes a service account, and if you don't think about that — if history is to be our guide — every service account eventually becomes an incident report.

00:20:18:03 - 00:20:27:05
Joel Moses
And that is a wrap for Pop Goes the Stack. If you're still thinking, "who am I right now?" then you should subscribe. We keep tracing the edge where agents meet access and reality. Take care.

Creators and Guests

Joel Moses
Host
Joel Moses
Distinguished Engineer and VP, Strategic Engineering at F5, Joel has over 30 years of industry experience in the cybersecurity and networking fields. He holds several US patents related to encryption techniques.
Kunal Anand
Guest
Kunal Anand
As Chief Product Officer at F5, Kunal leads the efforts to deliver transformative solutions in application security and delivery, overseeing product vision, technology strategy, and execution. His passion for cybersecurity, data, and engineering has shaped his career, from co-founding Prevoty, an application security startup acquired by Imperva, to serving as Chief Technology Officer and Chief Information Security Officer at Imperva. These experiences, along with leadership roles at organizations like NASA’s Jet Propulsion Lab and BBC Worldwide, have prepared him to tackle the evolving challenges of modern technology.
Oscar Spencer
Guest
Oscar Spencer
Principal Engineer with F5, Co-author of the Grain programming language, and TSC Director for the Bytecode Alliance, Oscar is passionate about advancing the future of WebAssembly.
Tabitha R.R. Powell
Producer
Tabitha R.R. Powell
Technical Thought Leadership Evangelist producing content that makes complex ideas clear and engaging.