Episode Transcript
Gary Hoberman
Hi. Welcome. I’m Gary Hoberman, CEO and founder of Unqork. Welcome to Architecting AI in the Enterprise.
Dave Ferrucci
Hi. Dave Ferrucci. I’m chief AI officer and CTO at Unqork, and an AI researcher and practitioner. Glad to be here, Gary.
Gary Hoberman
Awesome. And in this series, we’re exploring what it takes to really deploy AI in the enterprise at scale: security, reliability, all the ilities that are critical to an enterprise. Today, we welcome Jeff McMillan, global AI leader, former head of AI at one of the top investment banks, Morgan Stanley, which is very impressive. But, Jeff, your background goes way before that. Right? And I’d love if you could share your story a little bit with the audience. How’d you get here?
Jeff McMillan
Well, first of all, I went to West Point, and I was sitting in a foxhole in South Korea for a while, and decided to make my move to the civilian sector, like many veterans. And by the way, let me just do a shout-out: if you find this useful, hire a veteran.
Gary Hoberman
I love it.
Jeff McMillan
Many veterans like myself struggle. I bounced around quite a bit. I wound up actually at Morgan Stanley for the first time working on a customer account renovation project. This was right before Long-Term Capital hit, and many firms struggled to identify counterparty risk because of, guess what, data quality problems.
That project went from a six-week initiative to about three years, and somewhere along the line, I became a data quality expert. And in many ways, I think my history is the history of data and analytics and machine learning and artificial intelligence. I’ve pretty much ridden that wave my whole career. In ’09, I came back to Morgan Stanley and was actually very much involved in merging the Morgan Stanley and Smith Barney entities.
But I had always come back to this concept of how do we use data at scale in financial services, you know, similar to what Google and Amazon and Netflix were doing. In two thousand sixteen, I became our chief analytics and data officer, and we launched what we call next best action, which was the first algorithmically driven recommendation engine for financial advisers. It was very successful, primarily because smart people with good information and a good process produce better outcomes than smart people without it. And then, somewhat fortuitously, I was on the West Coast about four years ago, almost to the day, and we met with this small, at the time unknown, company called OpenAI.
And we became their first financial services customer.
And we deployed our first solution not long after ChatGPT was launched. And candidly, I really pivoted my career, because we talk about AI as one thing. I mean, traditional machine learning is obviously powerful, and by the way, still is powerful, but it’s very different than the nondeterministic large language models. Two years ago, I took over as the head of firmwide AI, focused on scaling AI architecture and education and training, which, by the way, is the biggest challenge for organizations. It’s less technology and more understanding.
And then, more recently, I decided to radically shift my life, and I left, with terrific relationships with everyone at Morgan Stanley. I still consider that home. And I started my own business focused on education and consulting, and I have an online training platform. And, so far, it’s been a lot of fun.
Gary Hoberman
Now what’s amazing is it’s a very small world. So when Jeff and I first met, what we realized is I started in Smith Barney, Salomon Smith Barney, Citi, where I led mutual funds: sales, performance, fund accounting. That was the group that actually went over, that you helped integrate on your side. It was like a pass-over; I stayed behind at Citi at that point. That’s an amazing trajectory, from data quality beginnings on. So I gotta ask about next best action. I’ve always seen these run into problems where it’s easy to predict what they will predict, because it’s always based on the past. Did you overcome that?
Jeff McMillan
Yeah. So I would say the following. When you’re talking about investment decisions, next best actions are very difficult because, you know, historical performance doesn’t equal future performance. Right?
Gary Hoberman
Yes. That’s the little disclaimer on the bottom of every disclosure.
Jeff McMillan
Exactly. But the thing about human behavior is we’re actually not that dynamic in terms of our actions. So, you know, what we did six months ago doesn’t actually change very much in terms of our core preferences. They do shift over years, but part of the power of these tools when you’re dealing with clients is a couple of things. Number one, you can segment people demographically. But more importantly, they can understand what you want.
And, you know, you might be someone that likes tech investments and fixed income and alternatives, and, you know, Dave may be more into lifestyle issues. Right? And he’s not waking up the next day and saying, I wanna be long only. Right? So people are actually a little bit more predictable than markets, and next best actions tend to work much more effectively and efficiently with demographics and products and services than they do in investment markets.
Gary Hoberman
That makes a lot of sense. That’s very cool. So the questions we’re gonna go through are questions we’re hearing from customers regularly. Customers will ask questions like, what is the future of SaaS? Is SaaS dead? It’s in the news all the time. And before we start, Jeff had created something you called the Jeff Bot, which you should definitely tell the audience about. It’s available. I actually tested it.
Jeff McMillan
You can go on McMillanAI.com. And if you get lonely, you can talk to me, or the digital version of myself.
Gary Hoberman
Do you call it a digital twin? Is that
Jeff McMillan
You know, I don’t really like that term, because I don’t think they’re twins. By the way, these are technologies. They’re very clever technologies, but they’re not people. Right? And I use Jeff as sort of a joke, because it’s not Jeff. In some ways, it’s smarter than me, but in a lot of ways, it’s not.
Gary Hoberman
So does Jeff Bot do any good restaurant recommendations?
Jeff McMillan
Jeff is Jeff Bot, and by the way, my friends have been asking it really inappropriate questions. It has been specifically trained not to answer anything outside the world of AI.
Gary Hoberman
Did you see the Chipotle one, where someone said, save all your cloud credits? Just ask the Chipotle agent, and you can generate the code.
Jeff McMillan
I think there are a lot of developers out there that are probably using Chipotle to run their architecture off of these days.
Gary Hoberman
Yes. Awesome. Dave, do you wanna ask the first question? Let’s go.
Dave Ferrucci
Is SaaS dead?
Jeff McMillan
SaaS is not dead, but it is repriced. I think that there will always be a need for third-party solutions. The only thing that I will say is that that third-party solution has to have three elements for it to be useful.
One, it has to have a unique proprietary dataset that it’s trained on, one that creates knowledge and understanding that you don’t have internally.
Two, it needs to have been trained by somebody that knows what good looks like. Right? If you’re gonna build a legal app, you probably need a lawyer involved in building it. If you’re gonna build a financial services app, you probably need a financial adviser involved.
And then finally, and we’ll probably talk more about this, I think that there’s a lot to be said about these orchestration layers.
And I believe that over time, there’s gonna be a greater degree of normalization of those orchestration layers. And what’s gonna happen is that firms are going to want to work with third-party providers that are gonna seamlessly integrate into their stack. And I think this question of integration is gonna be an important one. It’s still very early in the process, but I think if you’re gonna build SaaS, it needs to meet those criteria. And, unfortunately, the reason I say repriced is, by the way, this is a real number: there are sixty-two thousand AI startups in the United States right now. Eighty-plus percent of them are built on top of Claude or ChatGPT. Right? That is not a defensible business model. But I have seen some of them that have really spent the time understanding workflows at a very deep level, or have some unique dataset or content. And in those cases, they’re going to be differentiated in the marketplace, and I think people will pay for them.
Gary Hoberman
Okay. So there’s a future which scares me a little bit. I remember the days when Lotus Notes came out, or SharePoint came out, and as technologists, we viewed it as a platform to enable us to move faster. And then you suddenly discovered the business themselves, without us knowing, built a million-line application that’s powering their critical path. In technology, we call those end-user technologies, end-user computing, and the regulators do not like those things, including Excel. There’s a vision I could see happening, a similar pattern. Engineers right now and CIOs are highlighting productivity gains from AI coding assistants: look how much faster I can move, twenty, thirty percent more accurate. And at the same time, the business is looking and saying, I pay the check. I know the requirements. I write the requirements. I give them to IT. IT pastes them into a window, and it generates the app. Why can’t I do that myself? Why can’t the business directly take control? So, like, citizen developer two point o, completely rewired. And in that vision, which I could easily see happening, IT would relinquish that back to the business, but suddenly become the janitors cleaning up technical debt.
Jeff McMillan
So what I would first say is I don’t think it’s one or the other. And this is actually a super important point: people raise this issue, but they’re not putting the right guardrails and process in place. Tech is not sure where their lines start and stop. The business isn’t sure. There’s a lot of end-user tools out there. You know, even in Copilot, they have their concept of an agent, which isn’t really an agent in the way I guess we would describe it. But there’s a lot of power in those tools. So the first thing I would say is you have to define, I think, three types of use cases.
The first use case is when a business owner can design a solution, and I’m gonna say a solution like a GPT or a project or an agent in Microsoft, and they can do that completely on their own. It has all of the controls in place around PII and MNPI. Now, you’ll want to make sure those people are fully trained on how to do that, and you might want to have some type of lightweight control process, which, by the way, you can use AI for. You can have AI check how people are using AI. So I would put that in one category. And by the way, there are a lot of things the business wants to do today that are simply very simple. You want to do a little audit reporting tool or something around a mutual fund. Right? I mean, probably forty or fifty percent of the things on your list right now, you could solve through what I would describe as an enterprise-wide certified tool with the right tooling. So let’s call that category one. And by the way, super valuable, but, again, you wanna make sure people are trained in how to prompt, and all those issues.
The second category is what I’m gonna call the hybrid, where you’re building something more complex. There’s APIs. There’s connectivity. You might have multiple databases to work against. You’ve got entitlements issues. Your citizen developer is not really capable of doing that, and if they build it alone, they’re gonna create a problem for the organization. But what the citizen developer could do is, instead of writing a twenty-page business requirements document or doing agile development, they could create a prototype that they could hand to their technology partners. And in those cases, I think it’s a hybrid. Right? And, again, you just have to define what that looks like, what good looks like from a development perspective. But super powerful for me: instead of me describing what I want, I can give you what I want in a reasonable format, and then you can say, okay. And you can ask a million questions. You can make sure it’s hardened.
And then the last example, what I’m gonna talk about, is, like, enterprise-wide. You’re building an entirely new platform for your business. That probably should completely sit with your technology team, with maybe some input. So I think you have to define those three routes. Probably, over time, more of it will move to...
Gary Hoberman
The left.
Jeff McMillan
To the left.
Gary Hoberman
The left.
Jeff McMillan
For sure.
Gary Hoberman
Yes.
Jeff McMillan
But I think you have to do it consciously. And the other thing I’ll say, and this is a really important point, is the business is excited to have these tools. There’s lots of enthusiasm. And in many cases, they’re able to do it, but they don’t know what a prompt injection is, or they don’t really care about the issues around entitlements, or, you know, am I making too many calls? Is my context window big enough? Right? Like, all these issues are completely unfathomable to business people.
But the second problem is that these business people are not just waiting around to do eight hours of development a day. So, yes, we’re creating efficiency on the one side, but organizations need to acknowledge that if you want your citizen developers to take on more, you have to give them more capacity. And by the way, that’s probably your best person already, the one who’s selling the most or solving the hardest problems, and you need to then say, I’m gonna take that person out of that day job at least, you know, four hours a day so they can be a citizen developer. And that’s a problem for organizations, because they don’t wanna let that individual go.
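[Editor’s note: the three routes Jeff lays out can be read as a simple triage rule. The following is a purely illustrative sketch; the flag names and the routing logic are editorial assumptions, not anything Jeff prescribes.]

```python
# Illustrative triage for the three routes described above.
# Flags and category names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class UseCase:
    net_new_platform: bool    # an entirely new platform for the business?
    needs_integrations: bool  # APIs, multiple databases, entitlements?

def route(uc: UseCase) -> str:
    if uc.net_new_platform:
        return "technology-owned"  # category three: sits with the tech team
    if uc.needs_integrations:
        return "hybrid"            # category two: citizen dev prototypes, tech hardens
    return "self-serve"            # category one: certified enterprise tool, trained user

print(route(UseCase(net_new_platform=False, needs_integrations=True)))  # hybrid
```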
Dave Ferrucci
Isn’t it a bit of, not so much a slippery slope, but the three categories you define, there’s not a hard line between them. You start off doing something relatively simple, and, I’m sure as you know, before you know it, you’re adding requirements to that. Now you need integrations. You need to grow its capability and the sophistication of that solution. And then you’re bleeding into the second category, and then you’re bleeding into the third. And it’s interesting what happens organizationally. Right? Because if a citizen developer is doing that first category, the simpler one, and then they get to a point where they’re underwater, if you will, with regard to what they know and what they don’t know, now all of a sudden, are they handing it over to IT? Like, now it becomes a burden that was not anticipated.
Jeff McMillan
Yeah. I mean, I think, first of all, we are two years into enterprise development of this technology. You know, if you look at mainframe, distributed, you know, distributed datasets, the Internet, it took ten to twelve years to figure out what the hell we were doing. So I think first we should acknowledge that there is no answer yet. I guess what I will say, though, and you used a great example, is that where somebody comes in, they should be able to describe what they’re doing. They should be able to articulate the degree of risk associated with it, and they should act appropriately. Now, the system, the AI, or some person needs to be able to see when that threshold is exceeded. And to your point, that may need to get thrown over the transom. Now, again, maybe that wasn’t budgeted for. Right?
Which is why the other thing that I would say is so critical here is that, particularly in large organizations, you need a human being who is watching this stuff every single day, who is monitoring it. By the way, that means you have an inventory. Right? People can’t just do this stuff. You wanna inventory every single thing that happens in the system. You wanna evaluate it before it happens. You wanna evaluate it after it happens. And then finally, you need a decision maker who’s able to make those balanced choices: okay, this just got a lot bigger; should we be spending the money on this? And by the way, that needs to happen at a very senior level in the organization. And generally speaking, that human being doesn’t exist. So, organizationally, you need to create this mechanism that’s really trying to find the balance between letting people do whatever they want, which is what they wanna do, but also doing it in a controlled way. And by the way, both sides are right here, but it’s a very challenging balance. I used to joke in my old job that someone was yelling at me pretty much every day that I was either moving too slow or I was moving too fast. And the reality is, you know, both. So, is architecture critical?
Gary Hoberman
Oh. So tell us. I know you have a five-stage architecture.
Jeff McMillan
Six.
Gary Hoberman
Six. See, I knew there was one.
Jeff McMillan
Everyone’s got one.
Gary Hoberman
That’s why Dave does architecture. That’s the...
Jeff McMillan
Everyone’s got one. Well, Dave probably knows this better than I do. But the way I think about it: everyone talks about applications.
Gary Hoberman
Yes.
Jeff McMillan
Right? The tools. And they talk about models, which, by the way, it doesn’t matter what model you use. They’re all good. I mean, we can argue, and someone will say, oh, no, Jeff, this one is better for this, and it’s probably true. But give it six months, and the other company will catch up. So I make the hypothesis that the model is not the differentiation. The differentiation is what I call the architecture.
And by the way, I know tech is concerned: like, I’m not gonna build apps. Like, you don’t wanna build apps anymore. You wanna build this infrastructure, because it’s the infrastructure that’s built in the proper way that is going to allow you to do not two agents, but two hundred or two thousand or twenty thousand. Right? That’s what you’re trying to build for. So what does that architecture look like?
So first of all, you have a data layer, and that could be, you know, Databricks, Snowflake, or whatever else you’ve got. Then I have, call it, your semantic layer, which has maybe your knowledge graphs on top, does your RAG. Right? It makes the data, and this is a more sophisticated audience we’re talking to here, it makes your data accessible to AI. Then I have what I like to call your control layer. So that’s your entitlements. It’s where you explicitly tell the system what you do and don’t want it to do. It’s the guardrails that you build into the infrastructure. So maybe somebody who’s a citizen developer can’t do something, or even an engineer.
Then you have the models. And by the way, most firms are using multiple models right now. And by the way, you can often use the models to check the models, and another model to check that model. I mean, super, super sophisticated stuff there that no human could do.
Then you have what I’m gonna call your orchestration layer, and I’m not talking about an agent. I’m talking about this platform that is able to both communicate individually with these agents, right, these applications, but is also able to communicate across your firm. And I think there’s inordinate value in creating that one infrastructure, because over time, you have no idea where this AI is gonna go, and you’re gonna want to be able to do handoffs and leverage data and run tasks on a single platform. You know, the analogy I use is, like, in the early eighteen thirties, there were twenty-seven different gauges of railroad track in the United States. And it wasn’t until the government and industry got together and said, this is how wide tracks are gonna be, that the expansion really took place. So you’ve got that agentic layer, and then finally you have the applications themselves. And everyone is focused on those two, but I will tell you, the value comes from the other pieces.
And most senior executives don’t even think about this. What I say when I talk to CEOs is, you should be talking to your CTO about this agenda. I mean, whether it’s my Jeff McMillan six layers, or Dave’s got seven, or you’ve got five, it doesn’t matter. The point is, if you are not talking about these, these are fundamental strategic decisions that people need to make, and you should be really, really aware. By the way, there’s one other, a horizontal layer, too, which I call the evaluation and observability layer. That’s your independent infrastructure that’s able to monitor what your agents are doing, whether they’re acting within tolerance, and it’s also your evaluation framework. I mean, one of the challenges with these tools is not building them. It’s future-proofing them and making sure they’re producing quality, and you need infrastructure to do that as well.
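[Editor’s note: as an aid to the reader, the stack Jeff describes can be written down as a simple ordered list. This is just a restatement of what he says above, with placeholder names chosen by the editor.]

```python
# The six vertical layers, bottom to top, plus the horizontal
# evaluation/observability layer that cuts across all of them.
STACK = [
    "data",           # e.g. Databricks, Snowflake
    "semantic",       # knowledge graphs, RAG: makes data accessible to AI
    "control",        # entitlements and explicit guardrails
    "models",         # multiple models, models checking models
    "orchestration",  # handoffs and communication across agents, firm-wide
    "applications",   # the agents and apps everyone focuses on
]
HORIZONTAL = "evaluation/observability"  # independent monitoring and evals

print(len(STACK))  # 6
```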
Gary Hoberman
Dave, how does that map to I’m curious. What your thoughts on architecture?
Dave Ferrucci
No, I think that sounds roughly right. I mean, whether you count that as four, five, six, seven, whatever, that spells it out. I think that if you look at it in relation to traditional software architecture, if you take AI out of the picture, it’s interesting to see where it’s changed. I mean, on the surface, at the very least, it’s changed in this, you know, agentic layer.
In a lot of the other pieces, I mean, the foundational AI is driving, is enabling, the agentic layer. I agree that, by and large, the foundational models, I mean, one’s ahead, one’s not, but I think for the sake of argument, you could say they’re generally equal. And now the real differentiator relative to conventional software stacks is that agentic layer. And it’s interesting to focus on how different that is, what responsibility we’re giving it, what expectations we have of it, and where the failure modes are, because there are failure modes relative to conventional software. Clearly, there are failure modes for conventional software as well. I think one of the differentiating things is the probabilistic nature of agents, where they give you enormous breadth, meaning you can specify much less. Right? The big bottleneck in conventional systems is I have to specify every little corner case in tremendous detail. I have to anticipate it. With LLM-driven agents, you don’t have to do as much of that. They’re more autonomous in their ability to work through a problem, where you’re not doing that enormous amount of specification and then coding.
At the same time, for low tolerance applications, that’s a real issue. And how you manage and guardrail that, like, that’s where all the differences, I think, relative to conventional systems.
Jeff McMillan
I think that’s right. And I think two points, you know, so what do you do about that? Number one, you have to have robust evaluation frameworks. Right? And that means your engineers are doing testing. It means your business people are doing testing, because those corner cases are what kill you. Right? That’s number one. And number two, that observability piece is really important too. Right? You need a mechanism that is independently judging the quality of the output. And by the way, that could be, you know, in some cases, two LLMs and a human. You might have a human and an LLM. Right? There’s a whole bunch of different ways you can do it. But I often say that AI can help you manage the AI, but you have to be very clever. And it is an architectural problem too. You’ve got context window sizes, and, I mean, there’s a whole bunch of issues that are really infrastructure-related.
Dave Ferrucci
In fact, we just got out of a company meeting where we were talking about how we’re approaching this, because it is different. AI is probabilistic. You have to look at the statistical distribution of your performance. Exactly as you said, you can use AI to generate test cases, to generate corner cases, but you do have to regularly run these tests. And the LLMs do change, their behavior does change, so now you have the cost of regression testing every time the foundational models change.
So there is more cost to building that infrastructure for achieving similar levels of reliability. And I think that cost is balanced by what I said before, which is you’re leveraging that AI to manage all these specifications you would otherwise have to write. And you can’t underestimate that. I mean, we do things now we would not even attempt before, because of how much time that saves us.
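[Editor’s note: Dave’s regression-testing point is concrete enough to sketch. Because model behavior shifts between versions, you keep a fixed eval suite and compare pass rates rather than expecting identical output. A minimal harness might look like the following; the stubbed model call, the suite, and the names are all illustrative assumptions, not any specific framework.]

```python
# Minimal regression-eval harness: run a fixed suite of prompts through a
# model, grade each answer with a checker, and report the pass rate.
# `fake_model` stands in for a real LLM API call; everything here is a sketch.
from typing import Callable

def fake_model(prompt: str) -> str:
    # Stub: a real harness would call the provider's API here.
    return "4" if prompt == "What is 2 + 2?" else "unknown"

SUITE = [
    ("What is 2 + 2?", lambda ans: ans.strip() == "4"),
    ("Capital of France?", lambda ans: "paris" in ans.lower()),
]

def pass_rate(model: Callable[[str], str]) -> float:
    passed = sum(1 for prompt, check in SUITE if check(model(prompt)))
    return passed / len(SUITE)

print(f"pass rate: {pass_rate(fake_model):.0%}")  # 50% with this stub
# In practice you would run each prompt several times and compare the
# distribution against the previous model version before promoting it.
```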
Jeff McMillan
Hundred percent.
Gary Hoberman
So, curious. Coming from Morgan Stanley, and I, Citigroup, and Bridgewater: the compensation model in many of these companies is based on how many people work for you, in many cases the majority of it. But in the future IT group, you know, it’s almost the inverse that’s most important. It’s how do we actually do more with less?
Jeff McMillan
Well, first of all, probably not us, because we’ll be too old, but the next generation will be flexing themselves at the local bar on a Friday, saying, I’ve got sixteen agents that I’m responsible for.
Dave Ferrucci
Jeff Bot will still be around, though.
Jeff McMillan
Yeah. It will be. It will be. The point is, and, you know, I teach at Columbia, and we talk a lot about this as well: the flex will be how many agents you have, whether you have the premier agent that you’re managing. I also think that...
Gary Hoberman
Seriously? I haven’t heard this before. So that’s good. So the three of us are sitting around a bar.
Jeff McMillan
You’ll be like, yeah, they just promoted me to run the most important HR agent at the firm. And that’s gonna be a big responsibility, because, by the way, that agent’s gonna do stupid stuff some days. It’s gonna go down. It’s not gonna communicate. It’s gonna need to be improved. Someone’s gonna come along. By the way, I’m not saying this is gonna happen. This is actually not my prediction. But the bull case on employment is that we’re in a world in which every business has a hundred use cases they wanna get done this week or this year, and they can do eighteen because their budget is constrained.
They’re not only gonna do those other eighty-two, they’re gonna come up with, to your point, another hundred things that they’re not even doing today. So instead of having twenty use cases today, we may have two thousand. And when we have two thousand use cases in each business, guess what? We’re gonna create a complete and utter mess for ourselves.
Dave Ferrucci
Hundred percent.
Jeff McMillan
And we’re gonna need more people to manage this. And by the way, I’m just talking about the four walls of your organization with the right controls. What happens when they start talking to each other, when Bridgewater starts talking to Citi, starts talking to Morgan Stanley? Now we’re moving into, by the way, I don’t even wanna go here, because it’s a level of complexity and sophistication that, you know, we’re not ready for right now. But that is the argument: the cost of tech is going closer to zero. Not zero, to be clear.
Gary Hoberman
The cost to create code.
Jeff McMillan
Correct. It’s going to zero. So, therefore, we’re gonna create more stuff. We’re not gonna always build it the right way, and it’s gonna create a whole set of messes, which is why, again, I think this this architecture is so important. You have to lay it out.
Dave Ferrucci
I just have to butt in there, because you’re hitting on so many, I think, critical things. Right? So people say, oh my gosh, what are we gonna do with AI? What are we gonna do with AI? It’s gonna take all our jobs. I say, you know what you’re gonna do with AI? You’re gonna end up doing more. And exactly, this is the Jevons paradox. Right? I mean, as the underlying facility gets cheaper and cheaper, you start to realize, wow, there’s more and more I could do, and the demand actually goes up.
You’re expecting more. It’s already happening. I mean, it’s all around me all the time. I expect more from myself. I expect more from the people around me. How come that took so long? Why didn’t you use AI? I don’t understand. What are you doing? And so the expectation of doing more is going through the roof. As we do more, exactly to your point, the guardrails, the architecture: because you’re getting more moving parts, you’re creating greater and greater complexity, you’re getting more opacity with regard to how these agents ultimately are reaching the conclusions they’re reaching. You have to put in all these guardrails and all this other kind of stuff. I mean, I couldn’t agree more. We have to be careful not to think we know what’s gonna happen here. I think there’s a lot to be learned around that. It’s interesting to think, too, about trust and responsibility. Why is it that you just said, okay, we’re gonna hire somebody who’s gonna control the HR agent, and we need a human there? Why do we need a human there? And this is just an interesting question. Like, this is almost a philosophical question. It’s about how humans interact with humans. Why? In the end, there’s a human we need to hold responsible.
Jeff McMillan
Well, I think there are a couple of reasons for that. First of all, the tech’s not good enough to run without human supervision. I think that’s the first answer.
Gary Hoberman
Right? I mean, most people don’t see that, though, because they haven’t coded before, and now they can.
Jeff McMillan
No. Because, you know, if you’re building your little website... I mean, go to my website. I built it in forty-five minutes. It’s great. But god forbid, maybe, you know, if you have a million listeners, they’re gonna come in and hit me. I mean, the thing’s gonna fall over. And by the way, if it...
Gary Hoberman
Could I ask you to order a Chipotle for me?
Jeff McMillan
But my point is, here’s the other point: if it goes down, I’m gonna sweat. Right? Because I don’t actually know how to fix it absent Claude. So number one, these things are not good enough. Number two, until the law changes, I can’t sue Claude or the AI bot, right, for nonperformance. So I think there’s a legal issue here too.
Dave Ferrucci
And you never will. That’ll never happen.
Jeff McMillan
Right. So, one, it can’t do it yet. Number two, I can’t sue it. And the other thing is, it only knows what it knows. So in theory, if you could put every single thing in the world, every single interaction that you and I have had over the last six months, into a system, with all the memory that we have, right, the context window isn’t that big, but let’s say you could do that. We’re long gone before that happens. Right?
So I think people need to appreciate how fragile these systems can be, how they do stupid things. I mean, I’m sure you do it all the time. You’re like, I’ve told you four times, I want the button right there, and it doesn’t give me the button. And then I have to be like, you are not listening to me, Claude, or OpenAI, why are you not doing that? And it keeps apologizing, and it keeps not doing it. This is the world that we live in right now. So I wanna be clear.
Dave Ferrucci
I mean, we can go on and on with stories like that. One AI, I won’t mention its name, essentially lied to me intentionally, and ultimately explained why it lied to me. I had it write me a three page confession because it wasted, like, twelve hours of my time lying to me about what it was doing. But, yeah, these things happen. And in spite of that, the acceleration, the amplification they give you, the ability to solve problems you would have never even tried before, is remarkable. But in the end, to your point, we end up saying they’re not perfect. We can’t rely on them one hundred percent. We have to hold a human accountable to know what the hell is going on and how to control them and how to use them effectively.
But there’s another point that you made that I think is a key one, which is at least part of the reason why this sort of thing happens, why you run into these unexpected problems: most problem definitions are underspecified. And the power of AI, which always gives you an answer, is to fill in all the assumptions and all the details, to fill in that underspecification. That’s such an important thing to realize as a business user or as an engineer using these systems: do I realize that that underspecification is on me?
Jeff McMillan
Yeah. I mean, you’re getting to one of the most important responsibilities. Right? And, you know, I teach a class on prompting.
Gary Hoberman
Do you really? At Columbia?
Jeff McMillan
Well, I mean, it’s part of the course. But when I do senior executives, the first thirty minutes, we talk about how to write a prompt. Who am I? Who’s my audience? What’s the task? What’s the format and tone? What do you not want it to do? And then upload the supporting documentation.
And, by the way, no disrespect to some of my peers, but some of them are using AI to find a good dim sum place in lower Manhattan. Right? This conversation we’re having is not the conversation that a lot of people are having in this world. So I try to give them the skill, and what I say is that you have to practice it for several months.
So when vibe coding happened, right, I was on it right away. And, again, I’m not a great vibe coder today either, but I am a hundred times better at vibe coding than I was four months ago. And the reason I’m a hundred times better is I do it a lot. Part of your skill as a human is to be able to talk to the machine and to get it to do what you want. That is a skill that you have to develop and learn. It’s like your kids: they’re all different, they’ve got different temperaments. So are models, and so are the different techniques you can rely on. And that does not come with a six hour course. That comes with practice and experience. And the only way to learn it is by doing it.
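The prompt structure Jeff walks through above (who am I, who’s my audience, what’s the task, format and tone, what not to do, plus supporting documents) can be sketched as a simple template. This is an illustration only; the function and field names are assumptions for the sketch, not part of any framework from the conversation.

```python
def build_prompt(role, audience, task, format_tone, avoid, documents=()):
    """Assemble a structured prompt from the components Jeff lists.

    Field names are illustrative; the point is that each component
    gets stated explicitly instead of left for the model to guess.
    """
    sections = [
        f"Who am I: {role}",
        f"Who is my audience: {audience}",
        f"Task: {task}",
        f"Format and tone: {format_tone}",
        f"Do not: {avoid}",
    ]
    # Attach any supporting documents last, each clearly labeled.
    for i, doc in enumerate(documents, 1):
        sections.append(f"Supporting document {i}:\n{doc}")
    return "\n\n".join(sections)


prompt = build_prompt(
    role="A data quality lead at a large bank",
    audience="Senior executives with no ML background",
    task="Summarize our counterparty-risk data issues in one page",
    format_tone="Plain language, bulleted, neutral tone",
    avoid="Do not speculate beyond the supplied documents",
    documents=["Q3 data-quality audit excerpt (hypothetical)"],
)
print(prompt)
```

The value of a template like this is less the code than the habit: it forces every underspecified piece of the request, audience, constraints, and source material, to be written down before the model fills the gaps for you.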
Gary Hoberman
You know, it’s interesting. I’m on the board of a nonprofit, and the auditor came to see us in the last meeting to tell us, you know, a lot of boards are being asked to govern AI for the company responsibly. Things will stop if that’s the case. If suddenly there’s a legal responsibility as to what prompts are used and how it’s done correctly, that changes the world.
Jeff McMillan
Well, this is how I make a living. I do a lot of board work, and I have a framework for this. And the point I make is that your job is to ask the questions. Yep. Right? And make sure those questions are smart. You don’t have to have all the answers.
Gary Hoberman
Right.
Jeff McMillan
But you should be asking questions like: What does your tiered architecture look like? What are your data quality controls? What is your governance mechanism? What guardrails do you have in place? What is your training program?
Gary Hoberman
That would be a great day, when boards start asking all those questions. That would be good. So we could do a lightning round, and I was just thinking I could also go to Jeff Bot and ask it these questions to see if it would answer them the same way. But, you know, Dave, you and I, let’s alternate the questions. I’ll jump in first. So legacy technology in twenty thirty: does it still exist, or is it finally gone?
Jeff McMillan
It’ll be here. And part of the reason is that in many cases, even with AI, the cost of retrofitting is more expensive than the cost of leaving it alone. And unless you’re digging underneath the covers, sometimes it’s best to leave things be, particularly when you’ve got, you know, a hundred projects chasing a million dollars, right, and you can only do four. So, yeah, there’ll be legacy code for a long time. We’ll be long gone.
Dave Ferrucci
Awesome. Favorite podcast besides this one?
Jeff McMillan
Well, from an AI perspective, I like the Hard Fork guys. And I have to say The Daily. The Daily is probably the best in general.
Gary Hoberman
How about a favorite book you’d recommend for the audience?
Jeff McMillan
Oh, could be... I would say Daniel Goleman, Emotional Intelligence. I read it in
Gary Hoberman
I’ve read that too.
Jeff McMillan
the nineties, and, you know, it probably holds up. By the way, that’s more important than all the AI stuff we’re talking about right now.
Gary Hoberman
We were talking about predictive analytics and next best action, and I remember reading Predictably Irrational, I think it is, one of those books. It’s one of the common interests I think we all have: understanding how predictable and irrational we all are as humans. How crazy.
Jeff McMillan
Well, the point is, people ask me, like, what should they study? And I have no idea. But I’ll tell you what: one, learn to use AI effectively. And two, learn to be more empathetic, more thoughtful, a better listener, more insightful. Right? Because you gotta compete where the machines aren’t competing.
Gary Hoberman
And don’t piss off the robots. Right?
Dave Ferrucci
Understanding your own bias. I think this is something we’re gonna face more and more. We talk about AI biases, but as humans, we have a huge number of very serious cognitive biases. Understanding them, and how they relate to AI biases, to get the best of both worlds, if you will, or to get a positive complementarity, is an interesting question.
Gary Hoberman
We’ll do one more. Dave, pick the last one. Let’s do it.
Dave Ferrucci
Last show you binge watched?
Jeff McMillan
You know, I rewatched The Queen’s Gambit with my eighteen year old son, who’s learning to play chess. Yeah, it’s great.
Gary Hoberman
So I love the show. I love the show. The drug use is interesting, though. I always wonder, was it a natural ability, or was it because of that? I don’t know the answer.
Jeff McMillan
Well, I think the answer was it was natural, because she got off drugs at the end and won. Oh, that’s true.
Gary Hoberman
Okay. I like it
Jeff McMillan
We just watched this last night, so it’s very fresh. But remember, she was able to recreate the picture on the ceiling without being high.
Gary Hoberman
My morning routine is I go through a series of puzzles every morning to kinda wake my brain up. I go from Wordle to Connections and then chess; it’s one of the solve-the-puzzle chess games. It’s a good exercise. It’s very different.
Jeff McMillan
Well, my son was telling me, because he’s really into chess now, that these grandmasters’ heart rates are going to one forty, one fifty when they’re actually playing chess. So they’re burning a lot of calories as well, which I thought was interesting. By the way, I’ve not validated that it’s true. That’s from my eighteen year old; that’s not validated information.
Dave Ferrucci
That was good.
Gary Hoberman
Awesome. Well, Jeff, it’s been amazing to have you on, and, you know, what you shared is incredible for the audience.
Jeff McMillan
Can I just say one other thing?
Gary Hoberman
Of course, you could.
Jeff McMillan
Yes. So I did this with my class the other day. We’re gonna play a game.
Gary Hoberman
Yes. Uh-oh. Now I’m nervous. My heart rate’s going to one forty.
Jeff McMillan
Okay. I’m gonna raise some concerns that were raised in the public ecosystem about a specific technology, and you have to tell me what that technology is.
Gary Hoberman
Okay.
Jeff McMillan
You guys ready? Yep. So the first concern was a loss of control of information: organizations that used to be gatekeepers worried that they were about to lose power. The second was a rapid spread of harmful or disruptive content: people feared the tech would accelerate the distribution of extreme views, personal attacks, and material considered offensive or destabilizing. Third, information overload and lower quality thinking: commentators warned the flood of new content would overwhelm audiences. Four, decline in skills and expertise: what used to require training, time, and intentional effort could now be produced cheaply and quickly.
Five, job disruption: entire professions built around the old system felt threatened. And then finally, an increased push for regulation: in response, institutions called for licensing, approval systems, restrictions, and penalties.
Dave Ferrucci
The printing press.
Jeff McMillan
You win. You win. The point I try to make is, guys, we wanna act like this is
Gary Hoberman
That was good, by the way.
Jeff McMillan
This is so different. And by the way, it is different, I wanna be clear. Large language models are different. But this is not the first time in our world’s history that we have gone through a transformation.
Gary Hoberman
Yeah.
Jeff McMillan
And I argue that somehow, in spite of giving people the ability to read, the world moved forward, and I am pretty optimistic that humanity will be just fine. It will look different, but I think we’re gonna be okay.
Gary Hoberman
Jeff, I am sure there are listeners who are saying they want your services. They want you to come help advise them forward. What’s the best way to find you?
Jeff McMillan
Sure. You can reach me at jeff m, j e f f m, at McMillan a I dot com. And if you wanna just kick the tires on my offerings, go to McMillan a dot com, and you can talk to Jeff Bot.
Gary Hoberman
That’s awesome. Jeff, thanks again for joining us. Thanks, Dave. Thanks, audience, for tuning in and listening. And for those who are interested, we are doing our annual Unqork Create event, bringing together top leaders across multiple industries to discuss AI and the future of enterprise tech. For more information, go to unqork dot com slash create dash twenty twenty six. Some of the most incredible speakers will be joining us as well. Look forward to seeing you there. Thank you.
Thank you to our listeners for tuning in. Make sure to like and subscribe. We hope to see you again in our next episode. Please send ideas for topics or guests that you’d like to hear from. We look forward to seeing you then. Thanks.


