Episode Transcript
Gary Hoberman
Welcome to Architecting the AI Enterprise.
We are all reading the hype in the news these days: everything from how AI is gonna transform the people, the code, the processes, the applications, your expenses, the businesses.
We are here to bring real, amazing technology leaders to the table to help us answer these questions. What's working? What's not? What are they doing about this?
What are they piloting? What's in production? And most importantly, where do they see this going? I am Gary Hoberman, CEO and founder of Uncork.
With me today, I have our CTO and head of AI, Dave Ferrucci. Dave, just a quick intro. We'll get into the details in a minute, but please just introduce yourself.
Dave Ferrucci
Yeah. Great to be here. Great to be doing this with you, Gary.
So I've been in AI for a long time, really my whole life, my whole career, starting early in college, when I was inspired by computers and their potential. And I've been in the AI field ever since. I spent most of my career at IBM's TJ Watson Research Center, working on all sorts of different research projects and AI projects, ultimately culminating in creating Watson, the natural language processing machine that beat the best at Jeopardy.
Since then, I worked for Bridgewater, I had my own company, Elemental Cognition, and now it's exciting to be helping Uncork kinda enter the age of AI. And as you mentioned, Gary, it's a field now that is moving so incredibly rapidly. It's just exciting to have been part of it for so long and to really be in this time, where no one ever expected it would accelerate this quickly.
Gary Hoberman
You know, what's interesting is that our backgrounds are fascinating if you map them out in some sort of chart or graph, or maybe we ask AI to do it for us in a visual. You came from the research side. And a lot of research projects, and I've been to IBM's research center, don't ever see the light of day, much less see major TV.
And that landed you not just on Lex Fridman and other shows, which I've watched you on, but it literally brought AI to the forefront. I remember driving into the tunnels in New York City, and you'd see Watson everywhere back in the day. Still today, it's there. And my journey was the exact opposite, which was: we've got a budget, we gotta build a CRM system.
We gotta build a trading platform or money transfer, and how do we do this with these constraints? I always felt like I was in that old show, which most people won't know, MacGyver: you're gonna give me a language to use, you're gonna give me some small hardware in a box, and we're gonna go build this, and it's gonna work. I was climbing up the corporate ladder on Wall Street on the CIO front.
You went from the research side after Watson to Bridgewater. So here we cross over, where we're both in the corporate world, and then we both left the corporate world to create startups, which is amazing. And after that, we're both here. So it's really an interesting story. I know the audience is gonna get tired of both of us jumping back and forth, because that's what we're gonna be doing. We have very aligned views, yet different views. We're gonna get Dave's opinion on, you know, what's real and what's not.
My opinion is all around this concept of how you manage an enterprise technology team. And what I mean by that is: engineering's fun, coding's fun, everything's great until you throw it over the wall, and now it's gotta be run and maintained.
And when you move up that ladder in the corporate world, and I was managing about two billion of spend every year before I left, what you realize is it's a mess, and every single line of code ever written becomes your nightmare, your pain point. So, you know, it's fascinating as we go through this, and I'm hoping you're gonna hear literally divergent viewpoints, though some we definitely align on. What I will tell you about Dave is that when I met Dave the first time at his company Elemental Cognition, I saw something I'd never seen before: an AI tool that could be used in heavily regulated markets, explain what it's doing, be repeatable, not be sensitive to model changes or inputs or variables, something which you could use in insurance or banking, in onboarding and underwriting.
And that's when we started to work together. At Uncork, what Dave did was come in and basically say: we have to rebuild from the ground up, AI first. And it's amazing to think that's what we've been achieving the past several months as we're getting ready to launch. This is the one episode I get to talk about Uncork with you, Dave, because every episode after this is really going to be about our guests: bringing them in, and you and I interviewing them to figure out what they're seeing that's real, and how architecture is important to them.
I'm curious if you could share a little bit about, like, the probability of success, and the fight to get Watson to go live in a way where you had the passion to see it through. Also the risk-taking you did, which is very unusual from a research point of view. If you could share that story, that would be awesome.
Dave Ferrucci
Yeah. I mean, I think, first of all, the Watson project started off as this desire for IBM to do something exciting, something that would win the company a lot of attention. Very concretely, they got so much publicity from creating Deep Blue, the machine that beat Garry Kasparov at chess. And, you know, that got them so much attention for being at the leading edge of things like that in the area of artificial intelligence.
So I think it was eight or nine years after that when they thought: what's the next big thing to do like that? Meanwhile, I had been working with my team in a research capacity in the area of natural language processing and in what's called open-domain factoid question answering, which now we kinda take for granted. Accuracy aside, we take that for granted with LLMs, but it wasn't fifteen or even ten years ago that achieving any kind of accuracy even on basic factoid questions was somewhere in the thirty percent range.
Gary Hoberman
Wow.
Dave Ferrucci
And then when you added the linguistic complexity and subtlety of Jeopardy questions, that went way down.
And then the other big challenge that we understood, from a research perspective rather than a business or marketing perspective, was predicting your confidence. The likelihood that you would actually deliver the accurate answer was also very low at the time. And this was an active research area. Many universities, many companies were trying to do this, at somewhere around a consistent thirty-five percent accuracy on much simpler questions, with really very poor ability to predict your confidence, the likelihood you'd get it right. So this was a big challenge from a research perspective. But, to your point, from a business perspective, it was really about bringing luster to IBM. Can IBM get out there, do something public, do something that we would actually get on television for, and excite the world about the people at IBM Research, what they're capable of, what it means to lead and be ambitious in a field?
I was the only one at IBM Research who thought we could do this. Now I had the advantage of having led a team in this area. I knew what the challenges were. I knew what the possibilities were.
Everyone thought it was impossible and that I would fail. So there was career risk in some sense. I mean, people took that seriously. Me, not so much.
I was much more interested in the technical challenge. What can we do, you know, given the resources?
But to your point at the beginning, no one really knew. So IBM invested, I'll generously call it, incrementally. Meaning: we'll give you some money, see what you can do, we'll start working with the Jeopardy team and shape this project as we go.
But we did get to a point where I was demonstrating real progress, and the executive team at IBM said: look, this is interesting enough. This is already very impressive. Even though the chances of winning at the time might be fifty percent, it's such an exciting area. Let's just go public with this.
And at that point, once we went public, the floodgates opened. And at IBM, it was, you know, don't say no to Dave, because it's all on him.
Gary Hoberman
So you had to put your neck out.
Fifty percent is not a big chance to win, by the way.
Dave Ferrucci
But, you know, the trajectory was good. And then around early two thousand ten, when it started to go much more public, we were up there in the sixty percent range, or I should say sixty-five to seventy. We ultimately played the game, because as you achieve greater accuracy and greater confidence estimation, eking out those percentage points gets much, much harder.
So, you know, there was low-hanging fruit at the beginning, but then it got harder and harder. We ultimately got to around seventy-five percent accuracy with very good confidence estimation. We had real probabilities. Once we saw a question, we would be able to tell, very precisely, the chances we had of getting it right, which was a very cool part of the project.
So we'd know whether or not we should buzz in, or, depending on where the game was, we would calculate how much risk we wanted to take.
And even if our confidence was low but we needed to get ahead, we would take greater risk and buzz in for that question. So it was quite a sophisticated system. When you think about where we've gone and how quickly things have accelerated since then, because this was the very early days of deep learning.
Only the ancestors of transformers, the basics behind large language models, were in very early research papers around then. And then that took off so rapidly.
Now we have large language models. It's very interesting to compare the architecture of large language models with Watson. Large language models are so elegant and so effective at managing and dealing with language and other artifacts, like images and music. They're such powerful learners. We didn't have that at the time.
But, anyway, things are moving very, very rapidly now.
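The buzz-in strategy Ferrucci describes, weighing calibrated confidence against the game state, can be sketched as a toy expected-value rule. Everything here (the function, thresholds, and payoffs) is invented for illustration; it is not Watson's actual game model.

```python
# Toy sketch of a confidence-based buzz decision, inspired by the strategy
# described above. Thresholds and payoffs are hypothetical.

def should_buzz(confidence: float, score_gap: int, clue_value: int) -> bool:
    """Decide whether to buzz in on a clue.

    confidence: calibrated probability (0..1) that our top answer is right.
    score_gap:  our score minus the leader's (negative when we're behind).
    clue_value: dollar value of the clue.
    """
    # Expected value of buzzing: win clue_value with p, lose it with (1 - p).
    expected_gain = confidence * clue_value - (1 - confidence) * clue_value

    if expected_gain > 0:
        return True  # positive expected value: always buzz

    # Behind in the game: accept a negative expected value if the clue
    # could meaningfully close the gap (greater risk tolerance, as described).
    losing_badly = score_gap < 0 and clue_value >= abs(score_gap) // 2
    return losing_badly and confidence > 0.35

print(should_buzz(0.8, 0, 400))      # high confidence: buzz
print(should_buzz(0.4, 1000, 400))   # low confidence, ahead: stay quiet
print(should_buzz(0.4, -1000, 600))  # low confidence, far behind: take the risk
```

The point of the sketch is only the shape of the decision: a calibrated probability feeding a risk calculation that shifts with game position.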
Gary Hoberman
Yeah. I mean, around the same time, two thousand nine, ten, eleven, I was an MD at Citi, managing a team of about five hundred engineers. The project which really got me excited in that way, which doesn't sound very exciting, was a platform we had built called Grand Central, which was being used by about a thousand apps. We built another platform called Q-tips, which I named.
I forgot the acronym, but it's spelled like the Q-tip, and it was about getting wax and bureaucracy out of companies, similar to Uncork's idea of uncorking the bottle. I remember I didn't support HR. I was in the business, building business-facing systems.
The head of HR for Citi came to me and goes: we need to redo our compensation systems. None of the vendor products we have work, and we need to build pay-for-performance for the OCC and the regulators. Could your tech do this? And in thirty days, one of my platforms rebuilt Citi's compensation system. I mean, we had more departments than people in the company.
You know? Like, that's kind of the structure. It's almost more than one-to-one, which makes no sense, but that's what the structure was, because you had people reporting to multiple managers. And it was not an AI problem.
It was a deterministic, pure data problem, using XML at the time, not even JSON. But we did it in thirty days, and it became the actual core system of record for comp. It's such a different world you're playing with.
Dave Ferrucci
Yeah. It's so interesting, because after we built Watson to win Jeopardy, I was only at IBM for about another year and a half. One of the things happening inside the company was so fascinating, which was the move, or the expectations, around going from deterministic programs.
You know, if you gave it your Social Security number, a hundred percent of the time it got your name right. Right? Like, these are basic transactional systems, fundamentally deterministic. Everyone expected ninety-nine point nine nine nine nine nine nine percent. Right? When you move to Watson, it had confidence estimation, where it would be able to tell you: on this dataset, statistically speaking, you give me a question, I can tell you I'm seventy percent sure of this answer. And if it had to answer questions it was seventy percent sure of, it would get seventy percent of them right.
So that confidence estimation mapped to a real probability. That kind of thing is so accepted now with large language models, that notion of probabilistic estimation of things.
But at the time, that was shocking. Like, what do we do with that? Right?
At one point in one project, once these things took off and customers started bringing their deterministic expectations to what is essentially a statistical, probabilistically driven system, they started saying: what do you mean it's not right a hundred percent of the time? And so you have people saying: Dave, you gotta stop with that probability stuff. I just need it to be right a hundred percent of the time. The fact that it could accurately estimate its confidence was extremely valuable, but just not what everyone expected.
And my point is, it's so interesting. We went from customers in some sense saying: it's really cool that it weighs different answers, it's really cool that it can balance the statistics and probability of something being right. But then when you go to deployment, they say: I want it to be right a hundred percent of the time.
But what’s interesting is how much that expectation has now changed.
Because now we have AI agents running, or threatening to run, entire enterprises, where nobody can tell you the exact probability that any of this stuff will actually work and be right. And yet we're doing it.
Gary Hoberman
So this is the question. In an enterprise like Bridgewater, where you were, and I'll speak for Citi and MetLife: what percent of businesses do you think are deterministic versus probabilistic and have a risk-reward there? I'm just curious what you've seen.
Dave Ferrucci
It’s interesting because I think in some sense when you get to the really hard problems, they are statistical in nature and they are probabilistic in nature.
And, you know, it's just that we're so used to deterministic systems and very low error rates in many tasks. I guess you could frame it in terms of companies. Take a hedge fund, for example: a hedge fund is very used to living in the world of probabilities and statistical variance. And it really is all about understanding what that risk is and then deciding whether you wanna make that bet.
Everything is seen through probabilities. In other industries, accounting, finance, obviously, it's very different. Right? It's: I expect this to be exactly right, and I expect it to be exactly right every time, and I want a deterministic explanation for why I'm making this decision or that decision.
So you see things regulated from a safety perspective, from a risk perspective, from an accuracy perspective. Clearly, hedge fund trading is a whole different ballgame. But what's surprising, I think, is that health care is probabilistic. People don't think of it that way, but that's the reality.
This drug works seventy five percent of the time.
Gary Hoberman
Yeah. The disclaimer on TV.
Dave Ferrucci
One or two percent of the time, it has an adverse reaction.
Gary Hoberman
Yes. Yeah. That's a good analogy. It's interesting, because most of the businesses I faced off against were deterministic.
The problem was more documenting what they did. Getting it down to a place where you could take the runbook and execute it, translating that runbook into a procedure instead of someone's mind and how it's working. But today, think about agents. Agents today are all about either generating code or executing a process, at the highest level.
Right? There are two, and they're really one and the same. When you think about agents executing a process, it's amazing for something which is not life or death, I would say, or something that will be checked by a human. All the variables that go into that agent: the models, the inputs of the models, the operating systems it runs on, and all the connections and integrations back to the core.
To me, it feels like the old days. I remember getting calls that the trading app went down because someone kicked the plug in the data room. It kinda brings back memories of that in some ways: all the variables that could go wrong when the server's running under your desk, in some way, shape, or form, as opposed to a data center.
Dave Ferrucci
I mean, look. I think this framing is very interesting, because I've seen people take problems that are deterministic in nature and apply probabilistic methods for no good reason other than not really understanding the nature of the problem. I've often said to people: half the problems you're throwing at LLMs, you really should be asking the LLM to help you write deterministic code to solve, not actually throwing them at a probabilistic system. Which is very interesting in terms of the role LLMs play, the role AI plays, in different problems.
And you see problems, for example, and this is one of the things Elemental Cognition did, that are fundamentally constraint-solving problems, optimization problems. There's a set of constraints, and there are algorithms that deterministically find optimal solutions. There's no reason to make this a probabilistic problem.
But if you don't understand what you're doing and you make it a probabilistic problem, you end up having errors when you don't need to have errors. Sometimes I refer to that as brushing your teeth through your ear. I mean, you're getting there, but you're getting there in a painful and error-prone way when there's a much more direct way to solve the problem. So looking at problems in terms of whether they're inherently probabilistic in nature, like predicting markets, or inherently deterministic in nature is just a powerful lens, because getting that wrong leads to unnecessary risk.
And so when we think about the role of AI agents: how often are we experimenting with the AI to really just understand a deterministic process?
Right. And at what point should we admit that that probabilistic exploration of the space should actually turn into a deterministic workflow?
Correct.
Gary Hoberman
Which could be monitored, audited, observed, maintained, consistently applied, with a hundred percent accuracy.
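The constraint-solving point in this exchange can be made concrete with a classic optimization problem. A deterministic algorithm returns the provably optimal answer on every run, with no error rate and an auditable result; the example data below is invented for illustration.

```python
# Classic 0/1 knapsack via dynamic programming: a deterministic, exact
# solution to an optimization problem of the kind discussed above.

def knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the max total value."""
    best = [0] * (capacity + 1)  # best[c] = best value using capacity c
    for value, weight in items:
        # Iterate capacity downward so each item is used at most once.
        for cap in range(capacity, weight - 1, -1):
            best[cap] = max(best[cap], best[cap - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]
print(knapsack(items, 50))  # 220 — the same exact answer on every run
```

An LLM asked this question might well answer correctly, but only the deterministic algorithm guarantees optimality every time, which is the distinction being drawn between exploration and a production workflow.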
Dave Ferrucci
The power of the generative nature of LLMs, of generative AI, is that they can quickly search and prune a space that might be very difficult or tedious or overwhelming for humans to tackle, because they have an enormous body of knowledge encoded in these statistical models that allows this sort of smart generation. Like: here's a statistical variant that's consistent with the prior data. It's not necessarily right.
It's not necessarily exactly what you wanna do, but it allows you to search that space very rapidly. But then at some point: can your application tolerate the risk of it potentially being wrong, especially when it's taking action? Right? People always say, well, what if AI takes down the electrical grid? Well, don't give it control over the electrical grid.
So, you know, there's this notion of exploring and understanding and hypothesizing, and then there's this notion of taking action. And you have to understand, when you take that action, what is the risk? Do you understand the nature of the risk? Is it asymmetric?
Is someone gonna die? Are you gonna lose a lot of money? Right? So there's a step of accelerating the exploration, accelerating the decision-making, what is your decision flowchart, and then actually putting it into action as a deterministic process.
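The explore-then-act split described here can be sketched as a probabilistic proposer behind a deterministic gate that decides whether acting is safe. All names, actions, and thresholds in this sketch are hypothetical.

```python
# Sketch of separating probabilistic exploration from deterministic action:
# a stand-in "agent" proposes, a deterministic policy gates what executes.

def propose_actions(observation):
    # Stand-in for an LLM/agent proposing candidate actions with confidences.
    return [("restart_service", 0.9), ("drop_table", 0.6)]

ALLOWED_ACTIONS = {"restart_service"}  # deterministic allowlist: bounded blast radius
MIN_CONFIDENCE = 0.8                   # risk threshold for acting at all

def gate(candidates):
    """Only allowlisted, high-confidence actions make it to execution."""
    return [action for action, conf in candidates
            if action in ALLOWED_ACTIONS and conf >= MIN_CONFIDENCE]

print(gate(propose_actions("cpu spike")))  # ['restart_service']
```

The probabilistic component is free to hypothesize anything; the deterministic gate is where the risk question, asymmetric or not, gets answered before anything touches production.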
Gary Hoberman
So, Dave, let's bring it to Uncork for a second. There was a moment over a year ago, even more, where I reached out to Dave and I said: come join Uncork. The answer wasn't a resounding of course. The answer was: come on board part time, see what we're doing, see if you're aligned with the concepts and strategies, but help advise us on AI.
And the reason why I reached out to Dave was not just because of what I saw him build at Elemental Cognition and his background, but because he was building an app in his garage at that moment in time. You were so hands-on, using every single agentic tool out there at that point to generate code, copilots and Claude and everyone, and using it for your own purpose, which was really cool. Now, what's interesting is, when you think about applying AI, I could tell you at that moment in time, I didn't know if AI was an add-on or a feature. Is it slapped on top of Uncork?
Do we enable others to use Uncork to do this? I was actually not in the spot to say: here's where we move forward. I needed that advice and help from someone who knew it. I'm curious from your side.
The first thing you did when you started to get into Uncork and help us was focus on the word architecture, from what we did in the past. I'm just curious: what did you see that was interesting? And then let's talk about applying what you just described. Where are the agents working today in Uncork? Where's code generation?
We've enforced that all developers will use AI code tools to get their work done and help them. Right? And it was not something they were doing before.
So I'm just curious if you could share a little bit. Let's jump to that and see exactly where that applies.
Dave Ferrucci
First off, this goes back to what you mentioned when we got started, how our different paths intersected. One of the things that distinguished me as a researcher was that I always wanted to advance technology, deeply understand where things are going, make contributions to advancing them, but I always wanted to realize them. You're kind of in this middle world between the high-flying researcher and the engineer who wants to make practical things work. And I always sat in the middle of that, in some sense, delivering the Watson project, which was a nontrivial piece of code that we developed over four years, about forty people hands-on every day for four years, that ultimately worked flawlessly.
Gary Hoberman
By the way, it ran in memory. So how much RAM were you running on at that point?
Dave Ferrucci
Fifteen terabytes.
Gary Hoberman
Fifteen terabytes of RAM, just for those tech geeks like me. Like, that's kind of insane. Yeah. Amazing.
Dave Ferrucci
Yeah. And so, you know, at today's scale of large language models, that was a self-contained system: something on the order of twenty-five hundred cores, fully switched, meaning a fully connected network, and about fifteen terabytes of RAM. It never went to disk. We couldn't afford to go to disk at runtime.
So a lot of complexity came together, and it had to be rock solid. Right?
So a lot of work, an enormous amount of engineering, regular testing, thousands of carefully documented performance experiments. There was a lot of engineering in all of that.
And when you're bringing advanced technology to the marketplace, you can't skip the engineering. So one of the things I've always been fascinated by and interested in is the confluence of advanced technology, artificial intelligence, and software engineering. And we're seeing a boom in that today with AI copilots, which are helping you code. The junior coder is in some sense threatened, because large language models are so good at rendering formal languages and mapping formal structures to intent and English and so forth, trained on billions of coding examples, so they're super good at this. But ultimately, the bottleneck moves from the coding to making hard engineering and architectural decisions.
And this is a whole other ballgame. In some sense it's so interesting, because as coding becomes cheaper, the demand grows. People come to me and say: Dave, what are we gonna do with AI? AI can do everything now. What are we gonna do? And really, from the beginning of all this hype, the answer was: you're gonna do more.
Yeah. You're not gonna do less. You can do more. Right? This is the famous Jevons paradox.
Right? As this resource becomes cheaper and cheaper, the demand to do things that we never imagined we could do is gonna skyrocket. It's not like we're gonna do less, we're gonna do more, but the bottleneck moves and the dynamics change. So where the bottleneck might have been on: we can't find enough coders to do all this stuff, or to work quickly enough.
The bottleneck starts to move, because now we're able to code all over the place, and, you know, everyone from
Gary Hoberman
Everyone’s a coder.
Everyone’s developing.
Dave Ferrucci
Kindergarten to CEOs can now write code.
What happens is, now you need an accountable human to say: what can I trust here? It's not about getting that function to work; it's about where that function fits in a complex architecture that all has to ultimately work together to satisfy a business goal, a business need for the enterprise. Who makes those decisions, and who's accountable for those decisions?
Some combination, some confluence, some collaboration between business stakeholders, architects, and senior engineers has to understand what all this means and how to govern, control, and be accountable for the results. So the mental model is: everybody's driving around in golf carts, and there are a few traffic cops there directing them, because it's all going pretty slow, since golf carts are slow. Now you put all of them into Formula One cars, and they touch the gas, and they're flying all over the place, covering a lot more ground. And those few architects sitting there trying to control what's going on are just completely mowed over, and it's total chaos.
So now we have to start thinking: okay, we can go faster, our cars can go faster, but how are we making controlled, governed decisions to ensure those business results are delivered with no more risk? Because we know when those Formula One cars are driven all over the place by drivers who are used to driving golf carts, and a lot more of them, by the way, because a lot more people can drive now, you have an enormous control and governance problem.
So now it's about what comes next. It's about controlling and focusing that speed. So what do you need? You need guardrails.
You need controls. You need governance. You need different traffic rules. You need all the things that are going to ensure that all that velocity is actually directed into the right channels and the right controls.
So when you think about what Uncork is about: one of the things that excited me about your original vision was the commitment Uncork made early on to control exactly that.
Saying: you can build applications, but if you build them with this curated set of components and you follow these configuration rules, we can guarantee certain runtime properties, around reliability, around security, around scale.
So that’s the right idea.
And that's what attracted me to Uncork. But now that idea has been not only challenged but at the same time validated.
It's been validated. My goodness, you couldn't have been more right about the need for governance and control, for architected constraints and guidelines, and a way to build in order to guarantee these runtime results, this total cost of ownership, this zero tech debt over time. Right? These are things that enterprises absolutely rely on to control cost and control risk.
But now it’s so much worse with everyone able to code and to code quickly.
And absolutely they should be using AI tools. Absolutely. But now Uncork has to be reinvented, and we think of Uncork AI as that reinvention. Same fundamental principle, but now not controlling the golf carts, but controlling the Formula One cars. Right?
Gary Hoberman
The F1...
Dave Ferrucci
Formula One cars. So more of a challenge, but nonetheless the right problem to be solving.
Gary Hoberman
You know, there are a few aspects. One: speed. One of my favorite moments at Uncork, and it's the craziest moment, was when COVID happened and hit the city. We jumped in with New York City's CIO at that time and said: we can help. And we built the contact tracing system in two days.
We built the food delivery system, which I think delivered forty million orders of medicine and food, in just five days. We built it from scratch, without AI yet. And the reality was, it was probably fifty people configuring Uncork behind the scenes, which you're now replacing with agents. And the speed of those apps: if I showed one of those apps today to someone, they'd say, oh, I could just use Lovable to do that, let me show you the prototype I built with Claude.
The problem is, and I go back to the nightmares I had as CIO: when you're climbing up the corporate ladder and you have your career, and you're a systems engineer or gonna be a project leader or a tech specialist or architect, my dream was always that CIO role. I always envisioned that role as: I'm in a decision seat.
I'm gonna make a difference to the organization. I'm gonna help people grow. You know? And when I achieved it, what I suddenly realized was: it's miserable. It is nothing but misery.
And what I mean by that is: you walk in, and eighty percent of your budget is locked away for what we call keep-the-lights-on. That's infrastructure, security, compliance, tech debt resolution, end-of-life problems every ten years, and maintenance. And then you go face the regulators, and the regulators say: hey, Dave, how secure are you? How compliant are you?
Show me your proof. And you demonstrate: you ethically hack every app once a year if it's external, or twice if it's critical, and once every two years if it's internal, and you fix issues within ninety days. But you never get to the low or medium issues. You only did the high and critical issues, because you didn't have the budget or the time.
And the reality was, every time I said we’re secure, I would have to, like, you know, cross my fingers or something like that, because there are six hundred vulnerabilities created every week in the world of hackers, and that was before AI. You know, OpenClore I’m viewing as a mass hacking device waiting to take over all the systems, everything out there. When you look at this world, you say, you weren’t so secure before, and to be secure, you’d have to test every day with a different hack team, and you’d have to resolve issues every night. And you’d have to do that across eight thousand production apps across forty-seven countries.
Even if you had a hundred times the budget, you couldn’t finish. You couldn’t do it. You didn’t have the people and resources. And, you know, what I see happening is every line of code to me was an expense.
Every single line of code was a maintenance nightmare, and, you know, we would reward developers for the apps they built. We should be rewarding less code, which was kind of the concept of Uncork: can we do it without any code? Because that’s gonna be secure. That’ll be better.
You know, it’s interesting because it’s
Dave Ferrucci
Oh, yeah.
I mean, fewer moving parts and fewer types of parts. Right?
So
Gary Hoberman
That’s the... So, you know, I read an article someone wrote that said they believe open source libraries, which are reusable...
Open source to me is reuse, and libraries are reuse, and frameworks are reuse, and, you know, they’re saying it’s gone. Everyone should just write all the code they need from scratch.
What is your opinion? I’m curious.
Dave Ferrucci
I mean, that doesn’t make a lot of sense to me. I think what it’s probably trying to echo, to be generous about the comment, is that there is a growing demand for customization. But that growing demand for customization doesn’t exclude or discount the power of component reuse, and the notion of, you know, constrained configuration rules over new code every time, for exactly the reasons that you mentioned.
I think AI can help with that, but you have to make that kind of commitment to begin with. I think the idea that, you know, all the code’s gonna be different all the time has so many other implications that just don’t make sense to me.
What does make sense, and this is where you think about what’s happening with the SaaS companies and so forth, right, is that it’s not that the notion of SaaS is problematic. I mean, the notion of a company building and providing for you all the application infrastructure, so all you worry about is its application to your business, is a powerful idea, and always will be a powerful idea.
It’s about focusing on what you’re good at. You know? I wanna focus on selling lemonade. I don’t wanna focus on software and infrastructure and all the things I have to worry about to maintain that operational efficiency across all my lemonade stands.
I wanna focus on selling lemonade. So I don’t think that goes away. I think what does happen though, and this is going back to the point of as code becomes cheap, what we were willing to tolerate before, we don’t wanna tolerate anymore. So in other words, you know, that SaaS application didn’t exactly do it the way I wanted it to, but that’s fine.
You know, I can make that work because
Gary Hoberman
Or worse, you had to pay a strategic integrator eight times the license cost to make it do what you wanted.
Right? That’s the alternative. The cost of hacking it.
Dave Ferrucci
Hacking.
So you want it to be different. You either accept it or you pay a high premium to change that.
So I think what’s gonna happen is SaaS isn’t gonna go away, but the pricing dynamics, the demand for customization, and the expectation of cheap customization for a particular business are going to go up. Does that mean that everyone’s gonna rewrite these massive applications themselves with AI?
I don’t know. I think it’s more along the lines of the industry is gonna be required, to your point, to lower the cost of customization.
Gary Hoberman
I am with you on the long term. I’m worried short term.
I think we’re going to see a lot of technology teams who’ve invested in AI, and have a large budget they’ve spent trying to make AI work, look at the SaaS providers, because the requirements are clearly documented there, because they’ve done it before. It’s already working. It’s configured.
And look at it as let’s just rebuild it using AI and reduce costs. And what’s going to happen is they might achieve that. I believe they could. I don’t think it’s a technical challenge.
Now they have to maintain each line of code they created.
And that was why SaaS came about. Like, I built the Smith Barney CRM from scratch. You know? We moved it to Siebel because we were paying a fortune maintaining it and keeping it up to date, with code that wasn’t mine. It was the patching. It was when, you know, the Windows NT server needed patching.
Dave Ferrucci
Yeah. The patching, but also the changing as requirements evolve. And so, you know, I think it depends. I think it’s a spectrum. If it’s a very small amount of function, because some people are in this situation where they get these giant SaaS contracts and they use a relatively small part of the function.
Gary Hoberman
Yeah. Replace it. Then it makes sense. You’re getting good value. The question is value.
But it’s gotta be long term, in the sense not just of the cost of the line of code to create it, which is now going down to the cost of electricity. It’s gotta be long term in the sense of risk and total cost of ownership. How much does it cost me to host, maintain, secure?
Dave Ferrucci
And total cost of ownership has to include the fact that as requirements change, technology changes, expectations change, whatever you created becomes, quote, unquote, legacy. So you’re always adapting it. You’re always trying to keep it up to date in a variety of different ways. What is that cost?
And what bets are you making? And this goes back to architecture. One of the biggest challenges architects have is they have to make bets. They have to make bets about what’s actually going to happen from a business perspective, from a technology perspective, from a talent perspective. Because all those externalities change the way you architect software.
Gary Hoberman
I think I know the answer, but in the future, in a technology function, what’s the most important role in your mind?
Dave Ferrucci
Oh, in my mind, it’s the architect. I mean, it’s the person who thinks about all the various choices that need to be made and what the implications are of those trade-offs, because there are always trade-offs. You can say, well, in the future, every time you realize you made the wrong trade-off, you can go and magically recode it all. I just think that people are underestimating the dependencies.
So, you know, a silly example: in your software business, you make some architectural decisions that result in an implementation that has certain constraints. There are just, basically, laws of physics in software. Right? I mean, implementations just have some constraints.
Your customer base becomes dependent on them. So you say, well, I’m gonna magically recode it. Yeah. But now you have to bring all your customers along and all their processes have to change as a result.
Are they gonna come along or not? This is just there’s a lot of interesting economic dynamics here, business dynamics.
It’s just a lot more complicated than I think people
Gary Hoberman
I I like your answer.
I like the idea of the architect, because the architect really became a governance function of just, thou shalt use this set of tools, but you could do whatever you want, for these reasons. So, real quick lightning round. I’m gonna go through as fast as we can here. This is what we’ll be asking guests, so I’m gonna do this with you, and I’ll also answer. By twenty thirty, Uncork wipes out legacy technology. Which technology are you most excited about wiping out?
Is there any one?
Dave Ferrucci
This is a little bit snarky, but PDFs?
Gary Hoberman
Digital paper. That’s a good one. Digital paper’s good. I would have said Lotus Notes, because I still see it in our customers’ production, as shocking as that is. It’s still here. Next question. Favorite AI podcast, or any podcast, besides, of course, this one?
Dave Ferrucci
Well, I mean, you know, look. I have a general, you know, respect and admiration for Lex Fridman. I don’t know if you consider that an AI podcast.
Gary Hoberman
I consider it an AI podcast. Yeah, that’s a good one. My answer would be Joe Rogan, which has a lot of AI guests, but I’m waiting for him to have you and I on the show. We’re gonna make a call there. And next: favorite book you’d recommend?
Dave Ferrucci
You know, Thinking, Fast and Slow.
Gary Hoberman
You know, mine is similar: Barking Up the Wrong Tree by Eric Barker, who quotes a lot of Daniel Kahneman, right? I think it’s an amazing book. Last show you binge watched?
Dave Ferrucci
Oh my gosh. The last show I binge watched, I think it was The Beast in Me or something like that, on Netflix.
Gary Hoberman
Okay. I did Landman. I actually really enjoyed it. That was good. And let’s see.
Tech product you wish you had invented, if you could go back in time.
That’s a good one to end on.
Dave Ferrucci
So many. I mean, there are so many cool inventions. I’m gonna stay away from the AI stuff, and I’m gonna say lasers.
Gary Hoberman
Oh.
Oh, you could build what I dreamed of, like, the laser gun when you were a kid, and just play with it. That’s a good one. I like that. Mine would be Facebook, just because it’s such a simple technology to build. I mean, we could build that in, like, a day or two, but it had the network effect. And I like the idea that something takes off, which is great. And with that said, I’m hoping this is the podcast that will take off and bring everyone insights into what’s real, what’s not, and how to apply this in your business.
We have some amazing technology leaders coming on after this, and each week or two, we’re gonna be bringing you some amazing perspectives, and you’ll join Dave and I for that. So looking forward to the next episode. Thanks, Dave.
Dave Ferrucci
Thank you.
Gary Hoberman
Everyone. Thanks.