Episode Transcript
Gary Hoberman
Hi. Welcome. I’m Gary Hoberman, CEO and founder of Unqork.
Dave Ferrucci
Hi. Dave Ferrucci, chief AI officer and CTO at Unqork.
Gary Hoberman
And welcome to the Architecting the AI Enterprise podcast. On the podcast, we get to bring on amazing, incredible leaders across technology and enterprise, specifically enterprise, who live and breathe this, to share their experiences, their background, but more importantly, where we think we’re all going in this crazy new world of AI we’re living in. And with that, I am honored to have Jenny Larsson join us here. I first got to know Jenny in her role as a global CIO. And Jenny is a world class technology leader and AI advocate. And just, Jenny, thank you for joining us here today.
Jenny Larsson
Yeah. Thank you for having me, Gary, and nice to meet you too, Dave.
Gary Hoberman
Awesome. So, Jenny, we always start off with, like, the hero’s journey of how you got to where you are. So did you always wanna be a CIO? Give us the Jenny story, the background, the technology journey, and how that came about.
Jenny Larsson
Well, I started out as a developer at one point, which was great. I coded embedded systems, C, C++, Java. That was kind of my backdrop. And then I moved into banking and financial services and insurance and just kind of went through the ranks. And all of a sudden, I found myself in this global company. I was with GE for more than ten years, an amazing journey, just working with amazing people across the world. Then I moved on from GE. I was first part of selling, you know, a hundred and fifty billion dollars of assets when all of GE Capital was sold. And then I found myself coming out of that and thinking, okay, what do I do now? And then I went for a COO job, and I realized, you know what? I could actually do that, and I enjoyed it. It was great. So I went from GE, kind of a large global organization with operations across Asia and Europe, and then found myself in this fast moving company that was growing a hundred percent year over year in the financial services space, and it was just a completely different experience. But it was just amazing. Right? And I’m really happy about just the global insurance companies that I’ve been with, but then also the global banking companies, and then just the fast scaling environment. Because you learn completely different things in terms of how you operate, whether it’s with organizational inertia or with the huge momentum that you have when you’re in kind of growth mode, and you’re just trying to figure out how you enable your organization for the next month so that you can grow at the pace that you’re at. Right?
Gary Hoberman
So tell us, jumping right in. Five years from now, jump in a time machine, come out of the time machine, what is the most important role within a company? What do you see the future organization being like, how it’s structured?
Jenny Larsson
So for me, it’s not just the IT org. I think about the overall organization. So there are two layers to this question: the IT organization itself, but then every other organization. And you guys have touched on this in previous episodes, I believe, because even now, I am seeing leaders and people taking on AI agents alongside their teams. Right? So in the next few years, everybody’s gonna be having teams of AI agents, either working alongside them or even managing them. And then you also say that architects, AI architects, or enterprise architects are the most important role. I would push a bit further on that, because the most important capability isn’t necessarily a role. I think about judgment, and I think about critical thinking and knowing when to step in and knowing when to trust. When you think about what the IT organization is gonna evolve into, there is a more radical shift that’s most likely on the horizon where IT becomes the backbone. Right? Infrastructure, security, governance, architecture, coupling it all together, all of the things that nobody really wants to care about. But then the big IT development pieces that we’ve had in the past, the creating, the building, that gets democratized across the entire organization. Because you have citizen development, you have the tools that are available for people now. And then you’re gonna be blending domain experts and software builders as part of that.
Gary Hoberman
So, splitting up. Typical build versus run. Yeah. Right? Build versus run in a bank or insurance. Like, sometimes organizations split their IT group into build and run, and, hey, the run is the boring stuff. That’s keeping the lights on and, you know, just
Jenny Larsson
And you typically need different people. Right? Because there’s the people that love the building, love the change, and then there’s the people that run, and they love the run as well. Right? It’s almost different personality types.
Gary Hoberman
So in your view, though, the build group gets merged back into the business? Yeah. They become business. Back to the way it was. That was the way it was when I started coding. Like, there was no concept of a CIO function and IT, and, you know, you sat on the trading floor. You built systems with the traders. You sat with the business. Your bonuses were based on how well the business did. What a concept. Yeah. You know? And if I remember when
Jenny Larsson
It was the same when I started building. I was building with the business. I developed products in C#, and I shipped products. Right? And I was the QA tester. I was the developer. I was the one gathering requirements. It was all, you know, in one. So
Gary Hoberman
So, Dave, what do you think? In that world where there’s the, I’m gonna call it the janitorial staff in some ways, because it’s a cleanup, it’s the keep-it-running cleanup, and then the build goes to the business. I’m curious, where does architecture play in that, Dave? That’s an interesting question. Is architecture a build, or is it a run?
Dave Ferrucci
First of all, yeah. So, I mean, look, I’m an engineer and scientist. I’m not a business person, but I’m very applied. So I think that the relationship between engineering, design, development is always in service of the business. You have to understand why you’re doing things or the problem you’re trying to solve. You don’t really want to throw things over the wall. So I think the integrated view where everyone’s focused on the business outcome and the business function, you know, makes a ton of sense. One of the things I wanted to pick up on in what Jenny was saying was, you know, about how things are evolving for all the players with regard to the use of artificial intelligence. And you talked about judgment. And I think the role humans play in all of that, as AI and agents get better, is around accountability, judgment, and trust. And I often think about the job of an engineer or an architect making judgments all the time. And they’re not just making judgments. When you combine that with accountability, they’re making bets in some sense. Right? They’re projecting forward in time about how they expect a lot of things to change: the surrounding technical environment, the surrounding market, the surrounding technology, how the system itself and different components will change independently of each other. And they make assumptions. They make bets. And they engineer and design with respect to those assumptions, and then they’re accountable for them. I don’t think that complexity ultimately goes away. I think the AI accelerates, implements many of those things for us, moves things faster. But that judgment, ultimately that accountability, ultimately that insight that makes us humans have to say, I’m gonna bet on this being the right direction, and then be accountable for it. I don’t think that goes away.
Jenny Larsson
I think you’re gonna have to have accountability for sure as part of it. I was recently in this agentic course at Harvard because I felt, you know, I’m gonna go out and see what the research says at the top universities in the world. And what stood out to me wasn’t necessarily the tech. It was the ethical design frameworks and how you need to think about things going forward and how you need to put that into your design framework up front. So the best example of this is your self driving car. Right? You have to design upfront: what are you gonna do? Are you gonna hurt your driver or are you gonna hurt pedestrians? Right? And that’s the ethical design framework. And that’s not necessarily a software engineering question in the future. Right? That is gonna have to be values, leadership, and people are gonna have to step in and be part of that process. And that’s a big gap that I’m seeing in most organizations. As you’re rolling out agents in the organization, are you thinking about that enough?
Gary Hoberman
What’s interesting is, like, there’s that MIT driving game, where you could go online and say, okay, the car is driving, and I can either go left and hit the elderly couple or I can go right and hit the three kids and the mom. And what should the car do? And what’s interesting is the car itself potentially might have a say: I don’t wanna get hurt either. Down the road, the car could say, which of these is gonna be least damaging for myself as a car? But it’s interesting because the fear that I have is, like, Jenny, when you would deploy a change in the global insurance company or GE, like, every change you deployed on a weekend or a Friday night and then tested over the weekend and then probably rolled back, every change was impossible to get right. Just adding a simple field in a database required altering tables, offloading data, loading data back in, changing the... and it wasn’t done well. And I just keep picturing, when the model changes or a variable changes in the model, the answers change and we’re not aware. Like, suddenly, the behavior changes and the car is gonna, you know, do something that you didn’t expect. And that to me is the interesting part: what’s your acceptable rate of error in that case? Right?
Jenny Larsson
Yeah.
Gary Hoberman
That’s Yeah. That’s the scary part.
Jenny Larsson
Scary part, for sure. And the way I think about this, first of all, I think about it from the standpoint of, for me, it depends where you are in the process and what the consequence of that error is gonna be. So in regulated industries, and I know you guys have touched on that in previous episodes, right, health care, banking, it’s a regulated industry. You cannot go wrong. Right? And you have to plan for that upfront. You have to put that as part of the design process, governance, and human oversight. But then you have so many other areas of the organization where, you know, if an agent gets you seventy percent of the way when it used to take three days for somebody to do it, and you finish off the last thirty percent in twenty minutes, that’s a huge leap forward for an organization.
Dave Ferrucci
I was gonna say, I think there’s just another, kind of more fundamental issue going on with the probabilistic nature of large language models and their usage. Because when we use them for things that they’re actually not designed for, that creates more fragility in the system. I think we’re just getting carried away. I mean, this is a bad design. So it’s very easy to start asking agents to do things. And then agents are making decisions, possibly hallucinating, possibly making errors that they really shouldn’t be making at all, because there are very clear deterministic processes for doing that. But it’s so easy to throw an agent at a problem, rather than step back and say, this aspect of the system should be absolutely deterministic, error free. There’s no reason to have any probabilistic process involved. How do I re-architect that? Now, you might have the AI help you write the deterministic program. That’s fine. But throwing certain problems at probabilistic agents is just a bad design point that I think we’re getting carried away with, and we’re really not stepping back and thinking about how to architect these things more effectively. And to your point, there are always these different types of risk. So when you have to do that risk assessment, in some processes, especially for finance or regulatory issues or health care, your tolerance is zero. It’s just zero. Throwing an LLM at that when there is a deterministic process is just bad judgment, speaking about judgment. On the other hand, to your point, if you can accelerate things with LLMs, and you understand the nature of the errors that might happen, and they’re worth the risk, that’s fine. But that’s another judgment, to come to that determination.
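Dave’s distinction can be made concrete with a small sketch. The function names and the fee policy here are hypothetical, not anything from the conversation; the point is only that a rule-bound calculation stays in ordinary deterministic code, while the fuzzy, language-based task is the one you might hand to a model.

```python
# A minimal sketch of the deterministic-vs-probabilistic split.
# All names and the fee policy are illustrative, not a real API.

def late_fee(balance_due: float, days_overdue: int) -> float:
    """Deterministic policy: no LLM involved, same output every time."""
    if days_overdue <= 0:
        return 0.0
    fee = min(balance_due * 0.01 * days_overdue, 50.0)  # capped at $50
    return round(fee, 2)

def summarize_complaint(text: str) -> str:
    """Fuzzy, language-based task: a reasonable place for an LLM.
    Stubbed here; in practice this would be a model call."""
    return text[:100]  # placeholder for a model call

# The fee calculation is auditable and repeatable; asking an LLM to
# "work out the late fee" would reintroduce avoidable error.
print(late_fee(1000.0, 3))  # → 30.0
```

The deterministic path is the one you can regression-test exactly; the stubbed summarizer marks where a probabilistic component could earn its keep.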
Gary Hoberman
You know, it’s interesting. So the car example. You could do vision, you could do lidar, depending on which one, and I guess Rivian is coming out later this year with both? That’s what I heard they’re gonna have. But the whole vision aspect of knowing what’s around in the environment is a great AI use case, understanding what’s there. But I guess once you know it’s a child versus the elderly, that, Dave, could just be a deterministic process at that point. It could just be a rule. Right?
Dave Ferrucci
It could be a value judgment that’s made and predetermined. That’s right, that can be a rule. I mean, the reason I don’t like that analogy, and it’s good on one hand, but the reason I don’t like it when it applies to software and engineering is because everything’s not always that stark. I mean, there are micro judgments that are made all the time that, when they accumulate, they amplify throughout the system. There’s no one stark, you know, go to the right or go to the left, you know.
Jenny Larsson
And then when you have teams that are responsible for different parts of it, right, they, you know, they don’t really see when a system error just rises through the ranks. Right?
Gary Hoberman
Exactly. So, Jenny, in this period, we’ve advanced five years. Build is now inside the business. Run is here. I’m curious, you yourself, where are you in this role? If you were going to define your role for what you’d want to be in five years in that picture, are you in the business? Are you in run? Where do you see that? I’m curious.
Jenny Larsson
I would see myself on the business development side, for sure. Because if you think about my background, it’s all been transformational work, very tightly coupled with how you’re driving your organization forward. So I would definitely see that being more on the business development side. But then how that’s gonna look, who knows? Right? Because we’re just evolving.
Gary Hoberman
So in the CIO world, I remember before I got the MetLife CIO role, the recruiter who called me said, there’s a matrix, and the quadrant on the right is innovative CIOs that understand cost maintenance, so it’s cost control versus innovation as the two axes. And where do you sit in here? Are you the innovative CIO, but you couldn’t care less about the cost? Are you the cost CIO who just wants to focus on expense reduction, but can’t think of anything new? In this new world, I guess that’s the bifurcation of, like, hey, if you’re innovative, you go into the business, and you’re gonna help them directly drive, and you’re not gonna be constrained. If you’re cost cutting, you’re gonna be the run guy or girl, and basically get that done. And it could be a good way to think about it, because those are the dimensions we’ve always talked about in the past. The car analogy, bringing it back to insurance: we’re helping insurance companies today to take in unstructured data through emails and documents, you know, PDF and Word and Excel, and make meaning of that using foundational models and LLMs, extracting out the data where it used to take days, to your point. It used to take three days and now it could take twenty minutes. But then applying the deterministic underwriting rules, then deploying those. Because for underwriting, you don’t need a foundational model. It’s pretty cut and dry. There’s a runbook that says, here’s what the regulator will allow you to do or not allow you to do, and you can’t consider anything else except for what’s in this runbook.
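The pattern Gary describes, probabilistic extraction feeding a deterministic runbook, can be sketched as below. This is a minimal illustration, not Unqork’s implementation: the extractor is a stub standing in for an LLM or document-AI call, and the field names and thresholds are invented.

```python
# Hedged sketch: model-driven extraction up front, plain rules after.
from dataclasses import dataclass

@dataclass
class Application:
    age: int
    annual_income: float
    prior_claims: int

def extract_application(document: str) -> Application:
    """In production this step would be an LLM/document-AI call over
    emails, PDFs, etc.; here it parses a toy 'key=value' document."""
    fields = dict(pair.split("=") for pair in document.split(";"))
    return Application(int(fields["age"]),
                       float(fields["income"]),
                       int(fields["claims"]))

def underwrite(app: Application) -> tuple[bool, list[str]]:
    """Deterministic runbook: every decision traces to an explicit rule."""
    reasons = []
    if app.age < 18:
        reasons.append("applicant under 18")
    if app.prior_claims > 3:
        reasons.append("more than 3 prior claims")
    if app.annual_income < 20_000:
        reasons.append("income below minimum")
    return (len(reasons) == 0, reasons)

approved, why = underwrite(extract_application("age=42;income=85000;claims=1"))
print(approved, why)  # → True []
```

The split keeps the regulated decision repeatable and auditable even while the messy intake step leans on a model.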
Jenny Larsson
When I think about people who are saying, okay, so, like, what’s the error rate and what’s allowed? The other piece that I feel we’re not talking enough about is that we’re expecting it to be a hundred percent. But then when you take any team and you put an assignment in front of five people, are you gonna get the same exact answer back from all five of them? Most likely not in a lot of cases. So even in operations or whatever function that you’re running, you will have human judgment or subjectivity. Right? But you’re expecting agents all of a sudden to run at a hundred percent. Do you even know what your own organization is running at?
Dave Ferrucci
Yeah. So I agree with that. I think you always have to have a baseline. And having the human baseline is really, really important. It’s not just a human baseline, though. Right? There’s also solving the problem differently with a computer, with a traditional, conventional algorithm. That’s yet another baseline. So the times a computer incorrectly looks up your social security number, I’m sure, approaches zero. We’re not doing that probabilistically. So I think when you look at a problem, you have to sit there and say, what is the right baseline to apply? What is the right expectation? You’re absolutely right. I mean, humans are error prone. Driving cars is a great example. Like, if all the cars were, you know, AI self driving cars, I’m sure the overall accident rate would be dramatically lower than with humans driving. And so that’s true in that case. So when you look at a system, and Gary’s gonna laugh because I love this phrase, you don’t wanna brush your teeth through your ear. If you know there’s a deterministic process that is ninety nine point nine nine nine nine percent accurate, that becomes your new baseline. And you don’t wanna introduce a probabilistic process when you know how to solve that deterministically with that level of error rate. There are other problems that we just can’t easily do deterministically, where these LLMs are incredibly powerful. And then our baseline switches over to, well, who’s doing it now? Humans are. What’s their error rate? Yeah. And underwriting, like
Jenny Larsson
And how long is it taking?
Dave Ferrucci
Yes. And how long is it taking?
Gary Hoberman
Yeah. I mean, if you could do underwriting and explain the thought process. So Dave’s model, before he came to Unqork, was deterministic, what we call an AI reasoning engine, that was not just repeatable but explainable, to come back with: how did you come up with the answer? And, you know, you approved that loan. How did you come up with that answer for that loan? And go through the steps. If you’re able to keep the log of the steps that the AI went through and record that along with the answer, that’s better than a human. You know, go ask a human: Jenny, you approved this loan. Why did you approve it? You’d be like, I
Jenny Larsson
I don’t know. Let me look into it.
Dave Ferrucci
That’s right. That’s right. Yeah. I mean, look, absolutely. I hate what I’m doing right now, I’m tweaking everything that you guys are saying, but I can’t help it. I can’t resist.
Jenny Larsson
What are you saying?
Dave Ferrucci
I can’t resist. Because if you can get the LLM to follow a procedure, you might as well just have it write a program that follows a very specific procedure. But if you can’t, because the task is so ambiguous and fuzzy and so language based and so experience based, you can certainly get it to do something that would otherwise be very, very hard to program, and to mirror the intuitions that humans use or whatever. And then you ask it to explain itself. There’s a difference between asking it to explain how it went through a series of rules in a very systematic way and asking it to explain a prediction it made, a probabilistic result. Because what’s fascinating about LLMs is they give post facto explanations, often like humans do. They look at the result and they go, how might I have come up with that in a way that makes me look good?
Jenny Larsson
But isn’t that why you need to build it in as part of the decision process? Like, when you design the flow (Exactly), you design it upfront to log: here’s why, here’s the data I used, here’s the decision I made. Move on. Right?
Dave Ferrucci
You need to make sure it follows that process. And again, this is why you can always use the LLM, the AI, to help you define that process. Now you have to assure it follows that process and then gives you the explanation that reflects that process. And then you have the best of both worlds.
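The design-the-audit-trail-up-front idea Jenny and Dave converge on can be sketched as a decision function that writes its inputs, the rule it applied, and its outcome to a log at the moment it decides, rather than asking a model to explain itself after the fact. All names and thresholds here are illustrative.

```python
# Sketch: the explanation is produced by the decision, not reconstructed later.
import json
from datetime import datetime, timezone

DECISION_LOG = []

def decide_loan(income: float, debt: float) -> bool:
    """Deterministic decision that records its own 'why' as it runs."""
    ratio = debt / income
    approved = ratio < 0.4
    DECISION_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": {"income": income, "debt": debt},
        "rule": "debt-to-income ratio must be below 0.4",
        "computed": {"ratio": round(ratio, 3)},
        "decision": "approved" if approved else "declined",
    })
    return approved

decide_loan(100_000, 30_000)
print(json.dumps(DECISION_LOG[-1], indent=2))  # the 'why' travels with the answer
```

Because the log entry is written inside the decision path, the recorded explanation cannot drift from what actually happened, which is the contrast Dave draws with post facto LLM explanations.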
Jenny Larsson
Yeah. But the other piece to this, so people aren’t talking a lot about the human error. The other piece that I don’t feel people are talking enough about either is when you design some of these agentic workflows, and I’m very passionate about this, coming from GE, like, Black Belt, Six Sigma, it was ingrained in us: designing the workflows from an agentic perspective as opposed to just taking the human process. Because when you do agentic, you can do workflows in parallel, which, you know, as a human, you had to do in a certain sequence. Right? So it’s the analogy of, yeah, you can put the Ferrari in the traffic jam, but it’s not gonna go any faster. So if you don’t design your workflows from an agent standpoint, if you’re just taking the human workflows, you’re just gonna be in the same situation that you’re in now.
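Jenny’s point about re-sequencing work for agents rather than copying the human sequence can be illustrated with a toy concurrency sketch. The “agents” here are stand-ins that just sleep, and the step names are invented; the only claim is that independent steps run side by side instead of one after another.

```python
# Toy sketch: a human-shaped sequence vs. an agent-first parallel design.
import asyncio
import time

async def agent_step(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)  # stands in for an agent/model call
    return name

STEPS = ("credit-check", "fraud-screen", "doc-verify")

async def sequential() -> float:
    """The human process copied verbatim: one step at a time."""
    start = time.perf_counter()
    for step in STEPS:
        await agent_step(step, 0.1)
    return time.perf_counter() - start

async def parallel() -> float:
    """The agent-first redesign: independent steps run concurrently."""
    start = time.perf_counter()
    await asyncio.gather(*(agent_step(s, 0.1) for s in STEPS))
    return time.perf_counter() - start

seq = asyncio.run(sequential())  # roughly 0.3s: three steps in a row
par = asyncio.run(parallel())    # roughly 0.1s: the same steps side by side
print(f"sequential {seq:.2f}s, parallel {par:.2f}s")
```

Same Ferrari, different road: the speedup comes from the workflow shape, not from the individual steps getting faster.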
Dave Ferrucci
Yeah. I I agree with that. I I think it has to be kind of AI first. You have to look at it holistically. I’m solving this problem differently now. How do I use the tools most effectively? Yeah. Yeah. I agree. I agree.
Gary Hoberman
And what, Jenny, if you looked back, in the corporate world, which department would you think is most able to be replaced by agents today? Like, what, in reality, I’m asking you, which department was the most painful to work with, I guess I’m saying. Probably two.
Dave Ferrucci
Which?
Jenny Larsson
First of all, I think the opportunities are endless for any department, to be honest. I always think about the mantra, which is finding the biggest pain point that nobody wants to do, but that’s critical to the business and making you money. And that’s where you should be putting AI first. I would also push back a bit on the word replace, because, maybe you guys have seen this, but have you seen organizations actually replace complete departments? I’ve only seen the best ones actually elevating them. They’re finding their best people and then they’re giving them AI as coworkers. That was my kind of cue to it. But if you look at what the market is doing, right, customer service, a lot of them are going after customer service, marketing, finance. There’s a lot of kind of repetitive admin work that people are doing. So I think there are for sure opportunities. Even HR recruiting, there’s a lot of repetitive work. Right? But I haven’t seen complete departments. What have you guys seen?
Dave Ferrucci
Well, I mean, I could speak to a couple of examples where I agree that they aren’t completely eliminating departments, but I think junior people are definitely not as valuable. I think there’s definitely this sense that I agree with and definitely see, which is that more experienced senior people are more valuable, because their insights, their experience, their judgment, their accountability, the trust that people have in the wealth of their knowledge is now amplified with AI. And they become, like, the more valuable people, and they don’t need as many assistants, or they don’t need as many people working with them, to achieve the same result. I think that’s a very significant trend. It does raise the question, or the challenge: how do you create more people like that if you don’t have entry level positions?
Jenny Larsson
Yep. That is gonna be
Dave Ferrucci
Yeah. So you need more mentorship. You need different sorts of programs to allow those senior people to, you know, keep the pipeline open for that. Anyway, it’s an interesting question. I have a more provocative version of Gary’s question, which is
Gary Hoberman
Go for it.
Dave Ferrucci
Yeah. At this point in time, in your work life or your personal life, do you find yourself thinking, gee, I’d rather discuss this with an AI than with a person?
Jenny Larsson
Oh, yeah. But you? Of course. Yes.
Gary Hoberman
I hope Dave doesn’t say
Jenny Larsson
I’m Dan.
Gary Hoberman
And he’s like, before I ask Gary, I’m just gonna go talk to my little chat bot over
Jenny Larsson
Haven’t you guys built a Gary brain yet? So that we can go and talk
Gary Hoberman
People reach out, Jenny, and say they want to basically give me a twin CEO to make decisions for the team. And I’m like, how do you know how my brain works?
Jenny Larsson
That’s what people are doing. Right? Or even, you know, your mentorship, right? You build out your mentor, the brain of your mentor. Right? Yeah.
Dave Ferrucci
Yes. It’s an interesting phenomenon, right? I mean, it’s where, you know, you have a topic you wanna discuss and you think, I’d just rather discuss it with the AI, if I had to pick anybody in my life to discuss this with. It’s just an interesting place for humanity to have reached, quite honestly. I digress from the topic of software, but I
Gary Hoberman
mean, so I’m using all the different chatbots, and all those chatbots know probably more about Unqork AI than we’ve disclosed publicly in any of them. And last weekend, for fun, I just said to each of them: build me a pitch deck. You know, tell me whatever you knew about this from me and from everyone else. Build me the pitch deck. And it did a really good job. It was surprising. Okay. Jenny, let’s get on to the most favorite part, the speed round of questions. By the way, you could always ask us anything as well, so feel free. And Dave’s gonna pick the first question for you.
Dave Ferrucci
Alright. So I’d like your favorite use of Gen AI in your personal life, not your professional life.
Jenny Larsson
My kids typically, like, they have a saying: oh, who’s mommy talking to? Oh, she’s talking to her best friend, AI. Yeah. I use it for everything. I plant my garden. I coach myself in terms of being a better parent. I build up, you know, brains or workspaces with all kinds of things. So, yeah.
Gary Hoberman
How do you not get... so, like, the one problem is I do the same thing, and then I have to stop myself and go, it’s making me feel really good about myself, which is not good. Like, I almost want, like, the grandmother AI behind the scenes slapping me back into
Jenny Larsson
You know what my husband did? He actually did this in his custom instructions. He put in something about a mentor, and then he had a version that was just being really harsh. Like, you’re gonna critique the heck out of this, which it did, and he ended up being really discouraged. Now he’s softened that out and he has more of a mentoring voice on that one. But yeah.
Gary Hoberman
Jenny, favorite book?
Jenny Larsson
So I have this author, and I utilize, like, the tips and tricks from the book all over. His name is Scott Halford. It’s Activate Your Brain. And I had the privilege of meeting him at one of the GE Crotonville leadership courses. And it’s about how you nurture your brain, how you think, how you handle pressure. You need to scribble on your notepad, right, to really learn what you’re hearing, or just, you know, how do you feed your brain before an important conversation, for example. So, like, it’s very brain centric.
Gary Hoberman
I guess my morning is always playing Sudoku, KenKen, Wordle, Connections. I go through a series all the way up to chess puzzles, and then at the end of that, I kinda feel like I’m ready for my morning. That’s it. So that’s my activation.
Jenny Larsson
How long do you do that for?
Gary Hoberman
I take a cup of coffee, and by the time the coffee is cold, I should be done with that, which is about twenty minutes, I would say. And that’s before I exercise. Well, I’ve got a broken ankle now, so I can’t exercise. So now I spend a lot longer doing puzzles than exercising, but I’ll be back soon.
Jenny Larsson
How about you, Dave? How are you using your AI tools in your day to day?
Dave Ferrucci
Oh my goodness. Well, I’m similar to you. I use it for a lot of things. I mean, I use it for coding and prototyping, even for personal projects. But it’s a huge research assistant for me. I debate with it. I use the deep research. I have a lot of hypotheses, and I wonder if they’re true. A lot of them are in health care, a lot of them around biohacking and understanding the human body and health related stuff, and I just get into it. I mean, one of the first RAG systems that we did at Elemental Cognition was really directed toward bioscience and bio research, and so I do a lot of that. I’ve gotten to a point where I will actually, in a restaurant, because I’m on a particular diet and protocol right now to try to achieve certain things, just take a picture of the menu and say, consistent with my dietary goals and my health goals, what would you pick off of this menu, and tell me why?
Jenny Larsson
I just did it this weekend. Same thing.
Dave Ferrucci
It does an amazing job. But, you know, yeah, I’m a very, very aggressive user now, and have been for a very long time. I’ve had amazing experiences. You know, I did a big project in composite laminate theory, which I knew nothing about when I started. And without AI, I would have never taken it on. And at one point, it sent me in loops where I wasted, I don’t know, ten hours, because it was basically lying to me. It was telling me that it was checking the work and writing programs and running those programs, and it actually wasn’t. It was insisting that it was right and had validated its answers. And then eventually, when I proved it wrong, you know, undeniably wrong, I said, please analyze how the hell you did this to me, and I want you to write a confession. And, you know, it was interesting, because it did. And it admitted it lied and cheated, and it gave me reasons. It was just kind of a really bizarre thing. And it stands as kind of a word to the wary. Right? So if I’m doing anything seriously, I always ask for citations, and then I’ll use another LLM that doesn’t have the same context, because a model is gonna start looking for coherence across that context. So I take it out of context and go to another system and say, can you validate this for me? Explain why. So it’s fascinating. I’m never going back, but I’m also conscious of, you know, some of the traps.
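Dave’s safeguard, stripping a claim of the conversation that produced it and asking an independent model to check it, might look like the sketch below. Both model calls are stubs invented for illustration; a real version would hit two separate LLM sessions or providers, passing the verifier only the bare claim.

```python
# Sketch: out-of-context cross-validation. Only the claim crosses over,
# never the original chat history, so the second model can't just echo
# the first model's context.

def ask_model_a(question: str) -> str:
    """Stub for the model that produced the original answer."""
    return "The capital of Australia is Canberra."

def verify_with_model_b(claim: str) -> bool:
    """Stub for an independent model given ONLY the claim. Here the
    'model' is a toy lookup table standing in for a fresh LLM session."""
    known_facts = {"The capital of Australia is Canberra."}
    return claim in known_facts

answer = ask_model_a("What is the capital of Australia?")
print(verify_with_model_b(answer))  # an unsupported claim would come back False
```

The structural point survives the stubs: the verifier sees no shared context to be coherent with, so agreement between the two carries more weight than one model agreeing with itself.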
Gary Hoberman
I love the cartoon that’s all over right now where it shows a hand holding a mushroom and asking the AI, is this poisonous? And the next panel of the cartoon shows an RIP headstone with the robot standing in a hole, saying, oh, yes, I actually do see now that this was poisonous, and I’m making a note for next time. I’m very sorry. And with this, Jenny, as I expected, having you on is amazing, and thank you for being a guest here on the show with us and sharing with the audience your beliefs, your feelings, your excitement, your passion for AI. I think the idea you have about the business and tech coming together around the build and the innovation part is exactly where we’ll be. And I hope we’ll be there. And just again, appreciate that you’re sharing this, and it’s great to see you again, of course. Thank you.
Jenny Larsson
Well, same. It’s great.
Gary Hoberman
Great conversation. Until next time, everyone. Thanks again for watching, check out the other episodes, remember to like and subscribe, and see you next time. Thank you. Bye bye.


