Today on the AI Daily Brief, a blueprint for enterprise AI adoption. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello friends! Today we once again have a slightly different type of episode, but one that I'm really excited about. It is undeniable that the biggest theme this year for most enterprises, or at least the most exciting theme to most enterprises, is Agents.
I have a whole slew of theories around why I think agents have businesses thinking differently, even more than perhaps some gen AI assistant-type tools have. In this conversation, I'm joined by Swami Chandrasekaran, the head of the US AI Center of Excellence for KPMG. Rather than just a general overview of agents, this conversation comprises part one of something of a blueprint for thinking about enterprise agent adoption, or at least testing.
Swami shares his TACO framework for thinking about different types of agents, broken down as taskers, automators, collaborators, and orchestrators. We discuss the most common challenges that he's seeing among enterprises trying to adopt agents. And ultimately, we try to provide some positive steps that you can take as an enterprise to advance your agent strategy.
We certainly don't get through an entire blueprint for an agent strategy. We will have to have Swami back to keep going on that. As you'll see, Swami is definitely not your standard consultant. He has a deep technology background, working previously as an executive architect at IBM Watson, among other roles. He holds more than 30 patents and has authored multiple books and articles on applied AI. And so without any further ado, let's dive into this conversation. All right, Swami, welcome to the AI Daily Brief. How are you doing, sir? Doing really well, Nathaniel. Thanks for having me; big fan of your show.
Appreciate it. Yeah, we were just joking. So up until about, I don't know, 24, 48 hours ago, we were talking about the hottest topic in AI. I think for a very brief moment that's been displaced, maybe by DeepSeek R1, but broadly speaking, I think this conversation about agents is still pretty down the middle of where a lot of people are thinking. Maybe before we get into it, though, I would love for you to just share a little bit about what you spend your days doing, to give context for this conversation.
So I live in Dallas. I'm a partner at KPMG. I lead the AI and Data Labs for the firm. And what that actually means is that, as part of the large transformation program we're running called AIQ, which Steve Chase runs, the AI and Data Labs is a pretty significant part.
The way I explain my job to my 13-year-old is that I do three things. When I say I, it's me and my team. We do a lot of experimentation. For lack of a better word, we don't have a full-fledged R&D function at the firm, so we do a lot of experiments, innovation, R&D around things that don't exist today but will exist tomorrow, whether it is around how we use language models, or how we build advanced RAG knowledge assistant techniques, or even agentic frameworks, or how we evaluate these models.
The second part of what I do is help establish standardization when it comes to technology, architecture, and patterns for AI across the firm, so we don't do the same thing five times. And the third part, given my history on the advisory side of KPMG, is I work with a lot of folks in co-incubating new things for our clients. So I get closer to clients and understand problems, so I don't
get too disconnected from what I do. So, not sure, but I think I have the best job in the firm, with a lot of fun and a lot of responsibility as well. Awesome. So, perfect setup. I think a lot of the conversation today is going to be about the practical, tactical reality of where we are with agents, and understanding where you're sitting, especially relative to clients, is useful. Let's actually start there with that question.
When you think about 2025 as it relates to agents, they are obviously a key theme. They're on everyone's mind. But where are we actually when it comes to agent adoption, particularly in the enterprise? What stage are we at? Let's start there, and there are a lot of branching questions that I have from there as well.
Let me quickly set the context, no pun intended, to get to agents. So when large language models came out, we started interacting with them with prompts, ungrounded interactions, and we loved it. Then we slowly started to bring in more context through longer prompts, few-shot prompting, and so forth. Then, thanks to Meta, we have this approach called retrieval augmented generation, where we said: why don't I intercept the prompt, go to a corpus, bring back the relevant chunks, and give them to the model?
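For anyone reading along, here is a minimal sketch of the retrieve-then-generate loop Swami is describing. The `embed` and `call_llm` callables are hypothetical stand-ins for a real embedding model and chat API, and a production system would use a vector database rather than an in-memory dict:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: dict, embed, k: int = 3) -> list[str]:
    """Return the k corpus chunks most similar to the query.
    `corpus` maps chunk text -> precomputed embedding."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda chunk: cosine(q, corpus[chunk]), reverse=True)
    return ranked[:k]

def answer(query: str, corpus: dict, embed, call_llm) -> str:
    # Intercept the prompt, prepend the retrieved chunks as context,
    # then hand the grounded prompt to the model.
    context = "\n".join(retrieve(query, corpus, embed))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```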
So we got our arms wrapped around that: okay, now I understand the concept of RAG, or what we call knowledge assistants at KPMG. But still, with both of these paradigms, you were sitting and typing prompts. You may end up doing LangChain-type chaining and those kinds of things, but you're still typing prompts. Then there is the action piece. So enter agents.
The whole concept is: given a larger goal, can I have these machines go figure out a plan and take actions? Whether it is researching a topic or
reconciling a balance sheet against my ERP systems, it's now starting to do things. So what fundamentally makes up agents? How will you define your instructions and your goals? You express those instructions as long-form prompts. Those prompts are reasoned over and understood through a planner and broken into tasks that have to be performed. And to perform the tasks, there are tools needed to do the job. Then there are things like knowledge, memory,
context, and a whole bunch of things. Fundamentally, it is giving the large language models not only additional tools, but the ability to do reasoning in the context of a goal or an adjacent set of goals you're trying to achieve. Now you might say, Swami, you gave a very good theoretical definition; what does it mean? If you look at what is possible today, all the things I've been explaining are possible,
in a way, using frameworks like LangChain and LlamaIndex and others, where you can deterministically chain those steps. For example, if I want to reconcile a balance sheet, I may break it into two functions. Each function may have a long-form instruction. I execute function one in Python, hand that output to the second function, and I can achieve it.
But there is nothing truly agentic about it, because you are hard-coding the steps. True agent behavior is going to be where I express, for example, balance sheet reconciliation: what do I do? As an expert, I say a balance sheet will have the following fields. I look for the following parts in the balance sheet input. Then I go to an ERP system and I do certain things. So you are expressing it the way a human expert would express it.
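Here is a toy contrast between those two styles, with fabricated numbers; in the agentic half, `call_llm` would be a real model API. It's a sketch of the distinction, not any particular framework's interface:

```python
# Toy data: a parsed balance sheet and the matching ERP ledger.
BALANCE_SHEET = {"cash": 1200, "receivables": 800, "payables": 500}
ERP_LEDGER = {"cash": 1200, "receivables": 750, "payables": 500}

# Style 1: deterministic chaining. The plan lives in Python,
# so it is reliable but not agentic: the steps are hard-coded.
def reconcile_chained(sheet: dict, ledger: dict) -> dict:
    variances = {}
    for field, amount in sheet.items():        # step 1: walk known fields
        diff = amount - ledger.get(field, 0)   # step 2: look up the ERP value
        if diff:
            variances[field] = diff
    return variances

print(reconcile_chained(BALANCE_SHEET, ERP_LEDGER))  # {'receivables': 50}

# Style 2: agentic. Express the goal the way an expert would, and let
# a planner decide at runtime which registered tools to invoke.
EXPERT_GOAL = """You are reconciling a balance sheet against an ERP
system. A balance sheet has fields such as cash, receivables, and
payables. For each field, look up the corresponding ERP value using
the tools provided, and report every variance with an explanation."""
```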
The question then becomes: can any large language model even reason over and understand what you're saying? Probably until about six months ago, or maybe a little before that, they could not; it was very hard for them. But with every iteration of the language models that came out from all the big tech players, the reasoning capabilities and, more importantly, the handling of longer instructions, longer prompts, began to get pretty good.
Even if you go back two, three years, these longer instructions were impossible to achieve. Right now you can do it. So what you have is better reasoning, better understanding of what you're saying through these long-form instructions that are very critical for agents. That was not possible in the past. So what does that leave us with? You can understand instructions well, and you can break them down into tasks, probably.
Then it comes down to: can you rely on them? Are those tasks that are broken down, and the tools that are used for those tasks, reliable enough for you? The jury's still out there, in terms of the tools and platforms we've tried and worked with. It requires a bit of hand-holding.
While the language models can reason, the act of turning that into a set of tasks, a plan, instructions to go execute, is still getting there. It's getting better. But long story short, what we can do today is simple agents. I have come up with a simpler definition, a simple four ways to define the types of agents you can build, with the acronym TACO: taskers, automators, collaborators, and orchestrators, which is multi-
agent orchestration. And one thing about TACO: I've heard people differentiate, saying, oh, certain agents don't get to access all the tools. My thing is, in the TACO framework, all four categories of agents are going to get access to the same knowledge corpus. They're going to get access to the same depth and breadth of tools that the agents would need to take actions. They will have access to memory.
They'll have access to the same algorithms, so all four of those are fixed. So what is different? The difference between the four comes down to planning and orchestration. The T in TACO, taskers: they have singular goals, one goal, but it can break down into multiple tasks. They can be chained, easy to manage, easy to test, easy to roll out.
When you go to automators, which are next, they typically go cross-system, cross-application. These are end-to-end processes: order to cash, lead to cash, procure to pay, hire to retire. They touch multiple applications and multiple systems.
The goal may be similar, meaning ensure streamlined order-to-cash process execution, but it breaks down into sub-goals. Each of the sub-goals may touch different applications and different systems. So it gets a bit complex in terms of the scope of what it does. Then orchestration gets complicated: in the orchestration, you have to manage state and all those things.
The third part is collaborators. This is where I've been getting a lot of the questions. There is this concept of: can AI be used as teammates, agents be used as teammates? It's no longer that you're telling the agent to do something and it comes back; you work with it. It's like how you work with your team member on a daily basis. So there's more of a skew towards human collaboration, partnering with the machine to get things done. It's there in the other forms of agents, but it is even more so here; collaborators are predominantly built for that.
And the last bit, the O in TACO, is the multi-agent piece. That's where I have agents calling other agents; that is inter-agent collaboration. Of course, the complexity becomes greater with all of this.
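One way to hold the TACO taxonomy in your head is as a data structure: the knowledge corpus, tools, and memory are shared across all four kinds, and only the planning and orchestration scope varies. The field names below are illustrative, not from any KPMG artifact:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    kind: str                # "tasker" | "automator" | "collaborator" | "orchestrator"
    goals: list[str]         # taskers: one goal; automators: sub-goals across systems
    systems: list[str]       # automators and orchestrators span several of these
    human_in_loop: bool      # collaborators are predominantly built around this
    sub_agents: list["AgentSpec"] = field(default_factory=list)  # orchestrators only
    # Shared by all four kinds, per the TACO framing:
    knowledge_corpus: str = "enterprise-corpus"
    tools: list[str] = field(default_factory=list)
    memory: bool = True

summarizer = AgentSpec(kind="tasker", goals=["summarize contract"],
                       systems=["document store"], human_in_loop=False)
print(summarizer.kind, summarizer.memory)  # tasker True
```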
So, like I said earlier, where are we today? I think there have been a lot of experiments and prototyping done with the taskers. That has been the case inherently because there are quite a few platforms, open source and commercial included, where you can build them quickly. And we can talk about that. But in the year of agents, if '25 is it, I would expect to see more taskers. That's my prediction.
It's obviously very dangerous to prescribe one right path without the context of any given organization. But do you think that TACO framework, are they four separate categories only, or do they have some sort of linear relationship with one another? As you're thinking about adoption, if you're sitting in an enterprise, does it make sense to start with taskers and then move to the next, or how do you think about that?
Yeah, I don't want this to be a contrived framework where we retrofit everything into one of these four. The framework is meant as a mental model, a mental picture: look, how can I break down agents? The reason for this was that everybody jumped into multi-agent orchestration without even thinking about the basics. So that is one. Second is, more than likely when you go talk to clients, they're going to talk about scenarios which will not only overlap,
but will require a mix; more than likely their focus is going to start with, okay, let me automate an end-to-end process.
So, yes, it is dangerous to put everything into a bucket, but that's not the point. The point here is to demystify the whole agent ecosystem and where the complexity comes from. And if you start to amalgamate and combine, that's okay. But at least you want to start with the individual building blocks.
Yeah, it's interesting. I think one of the things that makes agent adoption fascinating compared to, for example, broader gen AI adoption over the last couple of years: enterprises moved very quickly, relative to previous technology changes, to grab onto gen AI and try to harness it. Now, obviously, there are still tons of organizations that are behind, or feel behind; very few organizations, I think, feel fully caught up.
We tend to find that the organizations who are the farthest ahead also have the greatest awareness of how much more they still have to do when it comes to adoption. So it's not like they're at the end state or anything. But I do think that because they've been watching agents come down the pipeline for a little while, they maybe have a stronger sense in general of how they want to eventually use agents, the possibilities that have them the most excited.
And I think that might be leading to some of what you're seeing: they're jumping to exactly what they would like out of an ideal state of what an agent can do. They're imagining even ahead of where the technology is, rather than just racing to catch up with what it can do now, which can create challenges based on what's actually ready for prime time and what's not at this exact moment.
Yeah, everybody has an expectation and a notion of what an agent should be for them. If you go look in the customer service and marketing functions, they say, my version of an agent is: can I put out a digital version of a customer service rep or a sales development rep,
and it can talk to clients, it can ask questions, it can help close the sale and get paid a commission? So they start thinking of them like synthetic employees. You go into the enterprise, into the mid- and back-office functions, and they think in terms of processes.
There is a particular way in which I receive, review, approve, or deny invoices as part of my larger procurement process, so I have a conception of how agents should fit in that particular way. It is not one size fits all, like you said. But at the same time, the key responsibility, when you go talk with them, is that you're not trying to take an existing technology
and retrofit it and say, oh, I have agents. So, as an example, one belief I have is about good old business process engineering, how you sat and designed end-to-end business processes. It took a particular approach. Process engineering said: decompose your domain, break it down into level one through level N, which could go up to level seven and eight, where you kind of
have a massive swim-lane view of what your process looks like. That's how we represented processes. That's not how machines think. Now, with the reasoning capabilities, I could express that same thing almost like a long-form instruction, and leave it to the machine to go figure out the process, the steps that are needed to execute it. So there is also a change in how we approach designing the agents,
and that is essential and important to me. The outcome is the same: I want a better, more efficient, leaner process. But you're approaching it in a different way. So the point being, the entry points for agents are different. They're all going to converge at some point in time, but given the stage we are at, the expectations vary widely.
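To illustrate the design shift Swami is pointing at, here are the two styles side by side; the invoice process and the $10,000 threshold are invented for the example:

```python
# Old style: levels and swim lanes, enumerated for humans to read.
PROCESS_LEVEL_3 = [
    "1. AP clerk receives invoice",
    "2. AP clerk matches invoice to purchase order",
    "3. Manager approves if total exceeds $10,000",
    "4. System schedules payment",
]

# Agent style: the same intent expressed as one long-form instruction
# that a planner can reason over and decompose itself.
INSTRUCTION = """When an invoice arrives, match it to its purchase
order. If the amounts agree and the total is under $10,000, schedule
payment. Otherwise, route it to the approving manager with a summary
of the discrepancy. Never schedule payment without a matched PO."""
```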
Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex.
That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly.
Plus, with automation and AI throughout the platform, Vanta gives you time back, so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, this audience gets $1,000 off Vanta at vanta.com slash NLW. That's V-A-N-T-A dot com slash NLW for $1,000 off.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Super Intelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me at www.bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market.
Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI quarterly pulse survey. Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yet, it's not all smooth sailing. The biggest challenges they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles.
They're not just talking about AI, they're leading the charge with practical solutions and real-world applications. For instance, over half of the organization surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US.
What do you think about as you're advising clients, or even just thinking about it broadly, when you think about agent readiness in the enterprise? What are some of the pillars of consideration? How much is it about data? How much is it about policy? How much is it about understanding objectives, as you've just articulated? What are some of the key pillars of agent readiness?
Yeah, you kind of gave away three of the things I was going to say anyway. So first of all: why? I start with that question. What is the rationale? What is the motivation? First define that; don't go to the technology called agents yet. What is the problem you're actually trying to solve? So if a client comes in, if they're a retailer, they come and say,
you know what, I want better top-line growth in my stores, in my brick-and-mortar stores. What are you doing today? They say, okay, I have these things, but sales get affected because certain stores don't follow certain kinds of policies and procedures. They don't take into account customer satisfaction or customer reviews and all those kinds of things.
We go in and say, okay, the goal and objective is to have a better, more tangible approach to store performance analysis, so you can improve performance and increase your top line in the real world. So number one is: what am I trying to do, and are agents even the right answer? Let's assume you've gone down the path of saying, look, I want to optimize my processes, reimagine my processes, and at the same time optimize my human resources.
Then you talk about, okay, where is the data coming from? Do you have the data? Do you have access to all of the data? Have you even, first of all, instrumented the data, if it has to be digitized? Is that data clean, and all the good things about data availability and readiness? The third one, and I don't think you mentioned this in your list, Nathaniel, is: who is the human expert who can articulate what is happening today and what needs to change?
How are we going to elicit that knowledge? Pick any domain. Even if it is something as simple as customer service: from the point a customer comes in and raises a request for a refund, what do you do? What is the process you follow? And what is the way to reimagine things from that point onwards using agentic concepts?
So human expertise is still needed to articulate it. I mean, there are theories floating around: can I go and do simulation? Can I look at what humans have been doing and learn from that? Yes, you can, but those approaches are not fully reliable yet.
So: why agents, the data, and the human expertise articulating the whole thinking process and how the agents have to be built. Then you get into policy. How much autonomy do you want to give to these agents? It could be at a very broad-stroke principle level, saying, look, I don't want any decision that has a financial implication to be approved without a human in the loop.
Maybe I want three stages of human sign-off. So there is a whole strategy around how you bring in humans, where you bring them in, what the level of oversight is, what the kill-switch equivalent for agents looks like. What if you want to stop agents for a day? What is your fallback mechanism in case they don't work? So all of those policy, trust, security, and reliability aspects are one big bucket.
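As a sketch of what that policy bucket can look like in code, here is a minimal gate with a human-in-the-loop rule, a kill switch, and a fallback path; the rule, threshold, and function names are all invented for illustration:

```python
AGENTS_ENABLED = True  # the "kill switch": flip to False to pause every agent

def execute_action(action: dict, human_approve) -> str:
    if not AGENTS_ENABLED:
        return "fallback: routed to the manual queue"  # fallback mechanism
    if action.get("financial_impact", 0) > 0:          # broad-stroke policy rule
        if not human_approve(action):                  # human in the loop
            return "blocked: human denied the action"
    return f"executed: {action['name']}"

# A stub reviewer stands in for a real approval UI.
print(execute_action({"name": "pay_invoice", "financial_impact": 5000},
                     human_approve=lambda a: a["financial_impact"] < 10_000))
# -> executed: pay_invoice
```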
The fourth important part, and this is a very opinionated topic I've seen with clients, is: how are you going to build it? Okay, everything's fine. You've got the data, you've got the experts, you've got policies, you know how to build it. Where are you going to go build it?
Today, open source frameworks are a dime a dozen, and big tech and smart startups all have their platforms. So where do you standardize? Again, my thought process there is, until this whole thing settles down, you may have to remain polyglot: pick a few choices, be very opinionated, and go build and try them out. Some are going to work, some are not, so you have to be ready for consolidation and migration. So what is the tooling, technology, and infrastructure that you're going to go with?
I'm not even counting the LLMs, because I'm assuming they're going to keep getting awesome; they are awesome and they're going to continue to get awesome. And the last bit is around skills: do you have the skills to build this? And one more thing after that: okay, you have the skills; building agents is one thing, but day-two-plus operations is a completely different thing. How are you going to sustain them? We've talked about model drift and data drift; now comes agent drift.
What's the guarantee the agents are not going to drift, deviate away from what they were built for? How do you keep them up to date? Is the data changing? How good is the feedback you're providing back for reinforcement? Those all come into the day-two-plus operations. So, top of mind, I think these are the categories of things I would look at.
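One simple way to operationalize that agent-drift worry, sketched below, is to replay a frozen evaluation set on a schedule and alarm when the pass rate sags below the launch baseline; the numbers, the eval set, and the `agent` callable are stand-ins:

```python
BASELINE_PASS_RATE = 0.95  # measured when the agent shipped

def has_drifted(agent, eval_set: list[tuple[str, str]], tolerance: float = 0.05) -> bool:
    """Replay a frozen eval set; flag drift when the pass rate sags."""
    passes = sum(1 for prompt, expected in eval_set if agent(prompt) == expected)
    rate = passes / len(eval_set)
    return (BASELINE_PASS_RATE - rate) > tolerance

# Stub agent and eval set, purely illustrative. In day-two operations
# you would run this nightly, page an owner on drift, and feed the
# failures back as reinforcement signal.
eval_set = [("2+2", "4"), ("capital of France", "Paris")]
print(has_drifted(lambda p: "4" if p == "2+2" else "Paris", eval_set))  # False
```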
I think it's a really useful framework. How much are you seeing people's first experiences being something where they're kind of rolling their own with one of these general frameworks, versus trying something that's more off the shelf, say a customer service agent? I mean, this has only been a question for the last few months, as more off-the-shelf things have become available. Or does this have to do with which category of agent, to use your framework, they're actually thinking about?
Yeah, so if you double-click into where you're building agents, I think it breaks into three sub-questions or sub-areas. Are you going to build your own using open source? Are you going to take a commercial platform like a Copilot Studio or Agentspace? Or the third option: are you going to buy the agent? So you go to Agentforce, which is going to say, okay, I already have a sales coach agent; you just go buy it, configure it, and use it.
The experience is changing by the month. What we have today is not what we had six months ago. And there's another lens: the way I look at the whole agent tooling space, there are the low-code tools, like the Copilot Studios of the world. Then on the far right, you've got the pro-code tools, like the LangGraphs and CrewAIs and AutoGens of the world. Then in the middle, I call them mid-code:
you can go back and forth. I can write code, or I can drag and drop, so I can do both. Initially, people tend to go use the pro-code options, and they realize that while it gives them a lot of flexibility, they end up having to build a lot of things on their own. So there are a lot of lines of code to write and maintain and manage. Brittleness starts to kick in unless you have a well-coordinated engineering and
development team; you may end up recreating the same thing. For example, the same tools to do the same thing may get recreated multiple times. So there is that risk, and you need to have a special set of skills and capabilities to do the coding yourself. Now, if you come to low-code, I can get started very quickly and easily, but I've seen roadblocks where I say, oh, I want to do this Excel
comparison for one of the steps in my agent, and I cannot do a very deep Excel analysis because the spreadsheet has multiple complex cells and rows and headers. That's an example. Like I said, that's why the whole polyglot approach is needed. You need to first decide: what is my agentic architecture going to look like? What are the tools that I need as an enterprise? Let's go figure out the strategy to build those tools in a reusable way. And then it doesn't matter if I'm building the agents in my pro-code
or my low-code platform; they all access the same set of tools. So let's focus more on getting the task done with the same set of guidelines, principles, and safety. And if you're ready to keep these agents running from day two onwards, you make a choice. So I think the jury is still out, in that no one platform has got everything you need. Each has something, and then there is going to be something it does not give you, or an inflection point you hit.
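That build-tools-once idea can be sketched as a plain registry that any agent platform, pro-code or low-code, dispatches into; the decorator and the `compare_spreadsheets` example below are illustrative, not any vendor's API:

```python
from typing import Callable

TOOL_REGISTRY: dict[str, Callable] = {}

def tool(name: str):
    """Register a function as an enterprise-approved agent tool."""
    def wrap(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return wrap

@tool("compare_spreadsheets")
def compare_spreadsheets(a: dict, b: dict) -> dict:
    """Return the cells that differ between two sheet snapshots."""
    return {k: (a.get(k), b.get(k)) for k in a.keys() | b.keys() if a.get(k) != b.get(k)}

# Any agent platform dispatches by name, so the logic lives in one place:
print(TOOL_REGISTRY["compare_spreadsheets"]({"A1": 5}, {"A1": 7}))  # {'A1': (5, 7)}
```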
I don't know if I'll phrase this question right initially, but with gen AI right now, non-agentic gen AI, LLMs and assistant/copilot-style tools, a lot of adoption is happening mediated by some central body in the enterprise that's tasked with thinking about AI transformation.
Maybe it's a repurposed innovation group that touches all the lines of business and all the back-office functions, that understands everyone's different stakes and becomes the conduit for different use cases and different tools and things like that. So it's top-down, not in an aggressive kind of way, but still coming through a central entity.
Do you think that agentic adoption is going to mirror that, coming from central groups analyzing all the different options? Or is this going to be a little bit more bottoms-up, where it's a specific department or a specific line of business or a specific area experimenting with something that's direct and purposeful for them?
You cannot stop innovation at the grassroots. That's the reality. People are going to keep innovating and coming up with new approaches. Now, in the role I'm in, I belong to the central organization, so fair disclosure, right? I'm providing my perspective sitting on that side of the world. I believe helping standardize on the approach, the technology, the platforms, including the safety that you incorporate when you're building the agents,
will go a long way in helping folks in the departments and different business units spend their time and energy on building agents. Where I see a lot of time and energy being spent is trying to build your own agentic platform,
or trying to maintain your own agentic platform. This is like saying, I'm trying to build a car, but I have four groups in the company, and each one of them is building their own assembly line. Why even try that? Why don't we build one good, efficient, modern assembly line, Toyota, Tesla, you pick the best, including the supply chain that powers it,
and you focus on designing the Model 3 or the Toyota Camry or whatever your favorite car is? So: standardizing, giving them the platform, and providing the guidelines lets them bring their focus to the hard part. The hard part, like I was saying earlier, is eliciting the knowledge of everyday work and translating that into an agent. That takes time; that's a significant piece of work. And who's going to do that if everybody's focused on,
I'll also build the platform and I'll also build the agent? So it sounds like a bit of a both-and: there are going to be functions that are relevant for kind of an org-wide, or at least cross-functional, discussion, from an infrastructure perspective in particular, while there's also a clear purpose for what the individual units or groups are going to actually need and understand. Yeah. And one more observation, a data point: we're already finding the individual groups heavily time-constrained, meaning
they don't have a lot of time to go do R&D, pick a platform, evaluate choices, figure out what kinds of evaluations to do on agents, this one versus another. They already have things to go ship and build. So we're trying to take as much of that away as we can and have the central group provide that guidance.
Let me go down even a level from that department or functional or group level. How much are you thinking about individual-level, employee-level adoption and the challenges therein, either when it comes to
getting employee perspectives on which tasks are actually suited for automation, or which things they'd like to have agent support for, as well as the question of employee attitudes and concerns around replacement and things like that? How much are you seeing that enter the discussion as companies are moving into this space?
So on one side, there are the tools. For example, KPMG has rolled out Microsoft M365 Copilot to all of our employees in the US. So they have access to all of those tools, including the ability to create what are called personal copilots, where you can point one at your own SharePoint corpus and start to interact with it. They can pretty much do this in a matter of a few seconds today.
So there is that level of capability made available by big tech, like Microsoft, for large corporations. The reality is they are made available; they are there. The next evolution is that they're also going to say: you can build your own agents to automate your daily tasks. So there is one push from big tech, where they want to drive more and better adoption of their tools. The signal is: you can build assistants and agents on your own, and it's going to be easy.
My take is, look, that is all good on paper, but imagine you're going to have hundreds and thousands of these agents all over the place. The kinds of actions the agents are going to take, we have to carefully manage. You don't want them to go start doing things that will leak your IP, leak your knowledge, leak your data, put you at risk.
So one path is that the builders of agents will have to be certain types of people who have gone through not only upskilling, training, and other kinds of things, but who also understand the implications of building agents in a particular way. Then we're going to start seeing personal agents confined to only what I do as work. Today on my computer, I could have a shell script that does things confined to what is happening in my specific environment.
Enterprise-grade agents will take a path where they are built by folks who have a certain level of pedigree and skills. I don't think either of them is going to stop.
Do you see a convergence of those at some point? One of the fascinating things about gen AI in general is that it's the first time shadow IT has been, well, yes, a concern, but also an area of innovation that companies are actively trying to understand so they can potentially bring it in.
You want to understand what people are using their personal Gmails to sign up for, not only because you want them to not put important company data on those platforms without your knowledge, but also because you might want to adopt those tools. And given how much of a race there is on the personal assistant side of agents, right, we're recording this just a few days after Operator has come out, I can see there being a sort of blend, where enterprises start trying to adopt agents in a top-down kind of way, or at least a unit-by-unit, group-by-group, function-by-function kind of way,
and employees are bringing in assistants that have started to automate their own personal processes at the same time. Yeah. Since you brought up Operator, let's take that as an example. When Operator is made available to everybody, I could build an operator that I could use for, say, my weekend planning or my calendar: assuming it can log in or look at the web, it could look at my calendar, see overlapping meetings, and come tell me which ones I should consider canceling, as an example.
But that's me building and unleashing an operator, and that is happening in my personal environment. Now suppose ten other people find out about it and say, that's a very good use of Operator, that's a very good personal agent; can you share that with me?
So the point I'm trying to make is, with personal agents, the scope of sharing is going to be limited. If you keep it that way, it's not permeating across the enterprise. This is not somebody going rogue and building their own agent on an unapproved platform; I'm still talking about approved platforms, used very personally, where the scope of sharing is limited.
I foresee a world where you're going to see organic innovation happening, and somebody's going to crack the nut on the most innovative use of operators or agents or copilots, what have you, that should then be made available at the enterprise level. To get to that level, you've got to go through stage gates of testing, evaluation, safety, and other things, so you have proper governance in place.
Because in the enterprise, I see them as no different from products. Treat them like products in your enterprise. You're not just going to roll things out randomly, on the fly, without knowing what they're doing in your enterprise.
So, I had some idea coming into this of what I wanted to do, but what's become clear is that this episode will kind of stand on its own; I'm going to frame it as almost sort of an agent readiness checklist, but I think we just did part one. What I would suggest is maybe one wrap-up question, and then we should come back, maybe next month, and do a part two where we get into some more specifics around use cases and things like that. I guess until we get there:
if you had one general piece of advice, since you won't get to talk to these listeners for the next month as they're thinking about adopting agents in their companies, what's one thing you would encourage them to avoid, or try, or just set as part of their framework, to maximize how they think about adopting agents this year?
Yeah, one thing is always hard, but let's try. One thing I'll highly encourage is: don't stop experimenting. You have to do that; only then will you understand what is right for you. And one thing I'll highly encourage everybody to go do is talk to your respective transformation, technology, and AI leaders. The first question to ask is:
the things we've been talking about, what are we going to do? If I have the next great agentic idea, where do I go build it? Where do I build it in a way that it is not throwaway work? Because that could be a rallying point for many things: what agents, what kinds of agents are they? How do I build those agents? What data do I need to build agents? Because I've seen everybody thinking about agents, talking about agents, debating about them,
but when the rubber hits the road of, I need the data, I need to go build them, it becomes analysis paralysis. So if all of us believe this is the year of agents, then you should have already picked a platform. If you've not, I highly encourage you to go think about where you'll build. And then everything else will follow: are you ready, do you have the skills to go build it, what else do you need to think about? They'll all naturally fall into place.
Awesome. Well, like I said, I really do think this should be a part one, and we should come back again. But I appreciate you spending some time with us today. Everyone is trying to wrap their head around this particular question right now, so it's invaluable to have you here to talk through it. Thank you, Nathaniel. I'd love to come back.