Digital Transformation & AI for Humans

AI Strategies & Implementation: Promises vs. Pitfalls

• Dr. Alex Heublein • Season 1 • Episode 54

AI Strategies: Promises vs. Pitfalls with Dr. Alex Heublein | Leading Sustainable Growth with Human-Centric Innovation

In today's episode, we dive deep into AI Strategies and Implementation, exploring the Promises vs. Pitfalls with the brilliant Dr. Alex Heublein from Atlanta, USA.

Alex is the Chief Innovation Officer at Netsurit, bringing extensive leadership experience from senior executive roles at Oracle, IBM, HP, Red Hat, and more. In addition to his corporate career, he serves as an Adjunct Professor of Business & IT at Clayton State University. As a human-centric visionary, Alex is deeply engaged in advancing how Artificial Intelligence can drive meaningful and sustainable business growth.

Netsurit has earned international recognition for its ability to stabilize, innovate, and optimize complex IT environments. With a strong belief that shared purpose and core values form the foundation of great partnerships, Netsurit is committed to Supporting the Dreams of the Doers, both within the organization and across its global client base.

Topics we discuss:

  • Executive ambitions vs. reality in AI strategies
  • Common disconnects between AI planning and real-world implementation
  • Key factors that differentiate AI success stories from stalled initiatives
  • Aligning AI adoption with business outcomes and customer value
  • Hidden pitfalls leaders must watch for in AI transformation
  • Building trust, governance, and human oversight in autonomous AI systems
  • The crucial role of organizational culture in AI-driven change
  • Actionable advice for leaders to navigate AI strategies with sustainable, human-centered growth

Whether you are a business leader, innovator, or entrepreneur navigating AI transformation, this episode is packed with strategic insights you don't want to miss!

🔗 Connect with Dr. Alex Heublein on LinkedIn: https://www.linkedin.com/in/alexheublein/
🔗 Learn more about Netsurit on LinkedIn: https://www.linkedin.com/company/netsurit/
🔗 Explore Netsurit: https://netsurit.com/

🔔 Subscribe for more visionary conversations on AI, Digital Transformation, and Human-Centric Leadership!


About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories for corporations and individuals alike, from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.

📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation 🧭 The Definitive Guide for Leaders & Business Owners to Adapt & Thrive in the Age of AI & Digital Transformation: https://www.amazon.com/dp/B0DNBJ92RP

📆 Book a free Strategy Call with Emi

🔗 Connect with Emi Olausson Fourounjieva on LinkedIn
🌐 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders

🔔 Subscribe and stay tuned for more episodes

Speaker 1:

Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we delve into how technology intersects with leadership, innovation and, most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch: nurturing a winning mindset, fostering emotional intelligence and building resilient teams. In today's episode, we are going to talk about AI strategies and implementation and take a closer look at the promises versus pitfalls with my fantastic guest from Atlanta, the United States, Dr. Alex Heublein.

Speaker 1:

Alex is the Chief Innovation Officer at Netsurit, bringing extensive leadership experience from senior executive roles at Oracle, IBM, HP, Red Hat and others. In addition to his corporate career, he is an adjunct professor of business and IT at Clayton State University. As a human-centric visionary, Alex is deeply engaged in advancing how artificial intelligence can drive meaningful and sustainable business growth. Netsurit has earned international recognition for its ability to stabilize, innovate and optimize complex IT environments. With a strong belief that shared purpose and core values form the foundation of great partnerships, Netsurit is committed to making a real difference, supporting the dreams of the doers, both within the organization and across its global client base. Welcome, Alex, it's a great pleasure to see you here in the studio today.

Speaker 2:

Thanks for having me. It's a pleasure to be here.

Speaker 1:

Let's start the conversation and transform not just our technologies but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description below. Subscribe and stay tuned for more episodes. I would also love to invite you to get your copy of AI Leadership Compass: Unlocking Business Growth and Innovation, the definitive guide for leaders and business owners to adapt and thrive in the age of AI and digital transformation. You'll find the Amazon link in the description below. Alex, to start with, I would love to hear your story and learn more about you. Could you tell us a few words about yourself?

Speaker 2:

Absolutely. So my story is actually fairly simple. I started out my career as a software engineer, and so I've always loved technology. I think I got my first computer when I was nine or ten years old, and this was a long time ago, so these things couldn't do very much. But even as a child I saw the potential of computers and the potential of information technology. So that's where I started out my career.

Speaker 2:

I wrote software for a long time, a little over 20 years. I was a software engineer, an architect, I managed a lot of software engineers and architects, and kind of worked my way up in that role, until I eventually became the CTO of a division of a fairly large IT company. And then, about halfway through my career, I decided: you know, what I really like about technology is being able to build things, and not just applications, not just solutions for people. I got far more interested in the human side of it, so I went back to business school, spent way too many years in graduate business school, and pivoted a little more to the business side. I've spent the last 15 or 20 years running businesses, building great teams of people, focusing more on that business and human element of it, and I've tried to keep pace with the technology.

Speaker 2:

It's hard, though. It's really, really difficult if you're trying to do all these other things and also stay at the cutting edge of technology, but I think I've done at least a reasonably good job of striking that balance. So yeah, it's been an interesting career. I've gotten to play a lot of different roles. I've been a CTO, I've been a chief revenue officer, I've been president of a software company, I've been a chief operating officer, and now I'm the Chief Innovation Officer for Netsurit, which is kind of a roll-up of a lot of different things. But it's really about how we as a company innovate, and then how we help our customers innovate on an ongoing basis. And that's one of those things that never ends; you never get to the finish line of innovation, right? There's always the next innovation, always the next thing coming along. So it's been an exciting lifetime for me.

Speaker 1:

This is a truly fantastic journey, Alex, so inspiring and impressive, and it's great to hear as well that, despite all your technical focus, you are still so human-centric. That is extremely important nowadays, in the AI era, and I would like to develop this from here. So, from your C-level experience at companies like Oracle, IBM, HP, Red Hat and others, what were the most common ambitions and promises tied to AI strategy at the executive level? Let's take a look at that human-centric part.

Speaker 2:

Well, you know, it's interesting. Over the last few years, AI has come a long way, so the expectations people had five or ten years ago are very different from the expectations they have today. So I think there are a couple of things I've seen. One is that, around AI, there are sort of three categories of people. There's one category that thinks this stuff is magic, like it can do virtually anything, and so they have these very inflated expectations of what's going to happen. You go talk to them and they think: well, I just want it to read my mind and do whatever I'm thinking. And you're like: well, that's probably not going to work. On the other side of the spectrum, you have people that just really don't know what's possible, so they don't really have any ambitions, or they don't really understand the promise of AI, because they literally don't know what it can do. And then you have people in the middle that at least have a reasonably good idea of what you can accomplish with AI, but they don't fully understand it. They don't fully understand what it can do and what it can't do at this point, and part of that is because it's changing so rapidly. So if you go look at companies I've worked for in the past, the expectations were fairly low, except in very specific circumstances. But that has changed as we've gotten into this new era of AI.

Speaker 2:

Really, over the last couple of years, since generative AI came to the forefront, and I think just in the last six months or so, it has finally crossed the chasm, right? AI has finally gone into the mainstream. I think last year people heard a lot about it and said: oh, this is interesting. And now what we're seeing in the market are people going: wow, this is real, we have to do something about this, this could potentially affect our business. Now, I think the reality is, I don't care what business you're in, it is almost assuredly going to affect your business. It's not a "potentially" situation anymore; it's just a matter of time. So the ambitions and the promises of AI have changed a lot over the years. It's gone from a curiosity, a niche kind of technology, to a technology that's become much more mainstream. I think the expectations have gone up, but a lot of people don't know what expectations to have, because they don't know what's possible.

Speaker 1:

I totally agree, and I see the same thing: if before it was nice to have, today there is no choice anymore. It is important to be a part of this game. And here we come to this disconnect between strategy and implementation because, as you just said, there are three different groups, and the group in the middle is probably the most balanced one, but still, you know, they are not that many. So what are some of the biggest disconnects you have witnessed between AI strategy and actual implementation inside organizations?

Speaker 2:

You know, there's a lot. The first one is that it's not very difficult to build a prototype of what you're trying to do. Building prototypes with AI is relatively straightforward, at least in a lot of circumstances and for a lot of use cases. The challenge you run into is going from that: okay, we've proven out a prototype, we've done a proof of concept, so to speak, we've taken a very easy use case and done it, and now I want to, A, go tackle the full problem and, B, scale it out to many hundreds or even thousands of users. I think that's where the disconnects come in. It's the gap between "here's what we can do with the easy use cases" and what happens when you start to add complexity, because at some point you reach the complexity level that current technology can handle. So I think that's one: going from that prototype or proof of concept to production. That's always a challenge when you're building IT solutions, but I think it's even more of a challenge with AI, because the prototypes can be really impressive. You see one and you're like: oh my goodness. And then people start extrapolating in their heads: wow, if we can do that, it must be able to do this other thing. And it turns out the more complex things can be more challenging, and they can take a lot longer to implement. So I think one key with regard to that is setting the right expectations up front. This isn't magic. I know it looks like magic to a lot of people, but it isn't. It's incredibly useful, it's incredibly powerful technology, but there are things it can't do. You have to choose the projects and the use cases for AI very carefully. So we've found that to be the case.

Speaker 2:

One way to avoid the disconnects is to make sure you're choosing the right types of problems to solve, problems that AI in its current form is well suited to solving. And again, I think that's true of most technologies, but it just gets exacerbated with AI, because people see it, and I've done it myself, right? I see prototypes we've built and I'm like: that's amazing, this is going to change the world. And then you try to scale it up, you try to add more complexity to it, and at some point you hit that complexity ceiling and you have to bring your expectations back in line. So I think choosing the right problems to solve avoids a lot of the disconnects between strategy and implementation.

Speaker 2:

I think a second big one is that a lot of companies don't really understand how dependent they are on the data that they have. If you go look at most companies, their data is just all over the place. It is even at Netsurit; half the time I'm like: where did we put that thing? Where is that file, or where is that database? So being able to at least get a handle on the data you've got to feed these things, and maybe getting it into a bit better shape than it's in right now, I think is very important.

Speaker 2:

But I think one of the really neat things about AI, particularly large language models, is that they're very good at reasoning across structured data, which is what you typically think of when you think of data: data in a spreadsheet or a database or a system somewhere, nicely structured, with tables and columns and rows and relationships. But they're even better at reasoning across unstructured data, and that makes for some very powerful use cases. If you've got all this data in a structured database, but you've also got all this other data in documents or PDFs or PowerPoint decks or stuck in SharePoint somewhere, being able to reason across both the structured and the unstructured data is actually one of the most powerful capabilities of generative AI and large language models. They're great at it, and that's a capability we traditionally haven't had.
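To make that concrete, here is a minimal sketch of the pattern Alex describes: handing a model a single prompt that contains both a structured record and an unstructured note, and letting it reason across the two. The `ask_llm` stub, the sample data, and the prompt wording are illustrative assumptions, not Netsurit's implementation.

```python
# Minimal sketch: one prompt mixing structured and unstructured data.
# `ask_llm` is a stand-in for whichever LLM client you actually use.
import json

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to your LLM provider of choice.
    return "(model response would appear here)"

# Structured data: a row as you might pull it from a database.
customer = {"id": 1042, "name": "Acme Corp", "open_tickets": 3, "annual_value_usd": 120000}

# Unstructured data: a paragraph buried in a PDF, email, or SharePoint doc.
meeting_note = ("Call with Acme on Tuesday: they are frustrated with response times "
                "and mentioned they are evaluating a competitor before renewal.")

prompt = (
    "You are a customer-success analyst.\n\n"
    f"Structured record (JSON):\n{json.dumps(customer, indent=2)}\n\n"
    f"Unstructured note:\n{meeting_note}\n\n"
    "Question: Is this account at risk, and what one action would you recommend? "
    "Answer in two sentences."
)
print(ask_llm(prompt))
```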

Speaker 2:

If you were to go talk to people about AI even two or three years ago, they would tell you: oh no, you've got to go through this big data cleansing exercise, you've got to get all your data lined up. And there are still people that say that, right? There are still people that think you need to go through a year-long exercise to clean up all your data before you can do anything with AI. I've talked to these people; we even have customers that think this way. And you're like: guys, if you do that, that's great. It's fantastic that you want to clean up your data, and you'll get nothing but long-term good returns. But if you wait to implement AI in any way, shape or form until you've got all your data cleaned up, you probably won't ever implement it, right? You're probably going to be sitting here a year from now going: we still don't have any AI solutions, we're still not getting any benefit from this.

Speaker 2:

Yes, your data is beautiful, and it's all structured, and it's all in a data warehouse or a data lake or whatever. That's great. But we encourage people: hey, let's start small. Don't try to solve the world's problems here. Let's pick some relatively straightforward use cases where we know we have the data, or where the data is at least in good enough shape that we can work with it. And again, large language models are very tolerant of unstructured data; they reason across it very well. So I think that's another big piece of it: yes, the data is important, don't get me wrong, but don't get hung up on it or you'll never get anything done with these initiatives.

Speaker 1:

I love this point, and your insights are truly priceless because, as a former data and analytics leader myself, I can recognize that problem. In large corporations, the idea of "let's create a perfect data set first" just doesn't work. It's close to impossible.

Speaker 2:

It is.

Speaker 1:

You slice and dice, and there is no time to wait. It is important to start, even at a smaller scale, and then scale it up and roll it out, because time is money, and if you don't start today, your competitors will, or they probably already started yesterday.

Speaker 2:

They already have.

Speaker 1:

Yes, exactly. So thank you for pointing out what really is important and what really matters in this case. So, in your view, what separates companies that scale AI successfully from those that remain stuck in the pilot phase?

Speaker 2:

Oh yeah, that's a great one. So, for companies that get stuck in the pilot phase and never get out of it, one of the characteristics they almost all have in common is that they tried to tackle a problem that was too big. They tried to eat the elephant in one bite, so to speak, and then they're constantly having to change what they're doing. Instead of starting small with a piece of the problem, they go after the whole problem. So I think one of the tricks is to break it down into bite-sized pieces, design those pieces so that you can implement one without necessarily having to implement another, while still driving towards business value. It's really more like: instead of trying to build a whole house, start with some bricks and build a wall; build units of work and units of capability that are valuable in and of themselves but then become the building blocks for bigger initiatives. That's just a very fancy way of saying start small. Don't boil the ocean here, don't tackle the most complicated problem you have within your business; you'll fail almost assuredly. And I think that's true of almost every emerging technology. Whenever you go through paradigm shifts, what you find is that the technology starts out not as good as what's in the market today, but it gets better and better, and it keeps pushing everything up in the market. So we're still in very early days with AI.

Speaker 2:

I think it's very important that you tackle problems that are very easily definable. Another trap people fall into is being very nebulous about their definition of what it is they want to do, and then very nebulous about their definition of success. So again, don't try to change your entire order-and-invoice process end to end. Pick one piece of it, do it really, really well, and then learn from it. Learn from your mistakes, because you're going to make mistakes; it's built into the process. If you're not making mistakes with AI, you're probably not doing anything useful with it. So there's a lot of experimentation, a lot of iteration and iterative feedback that you need to build these systems.

Speaker 2:

And, by the way, the technology is changing so rapidly that if you take on one of these giant projects and get six or nine months into it, you might realize: actually, we could have built this in a month if we had just waited nine months, because the technology has advanced to the point where all the effort we spent over those months is irrelevant now; we can build it with the technology we have today versus the technology we had nine months ago. And that's really true in AI. There were some projects we tried nine months ago that, honestly, if we had just waited nine months, we could have built in a fraction of the time and cost and still gotten done at the same point in time. So that, I think, is one big piece of it.

Speaker 2:

Right. These things get stuck in pilot phases because people take on use cases that are too complex, too large, and they don't break them down into pieces. And then they come around nine months or a year later and they're like: why hasn't this thing gone to production? Well, I'll tell you why: you tackled the wrong problem. So problem selection, I think, is the most important aspect of this.

Speaker 1:

This is very interesting, that something that once took so long can be solved in a fraction of the time today due to this rapid development of new technologies. And it's understandable, but it puts leaders in a very difficult situation, because it's always easier to say: maybe we should wait a few months and then do it faster, and anyway we will be done at the same moment on the timeline. So how should leaders approach AI adoption differently when aligning tech initiatives with business outcomes and customer value? And how should they view those risks and decide when to enter this game and start new initiatives?

Speaker 2:

You know, it's a good question, because I think part of the problem is that we think about projects and solutions in a certain way, and the reason is that we've been thinking about them that way for so long that it doesn't occur to people that you can think differently about them. So, for instance, with a fairly large percentage of our customers, we'll tell them: hey, we're going to do a bunch of really cool AI stuff for you, and this technology is evolving very rapidly, it's moving fast, and it requires a different approach. It requires this very iterative, almost hyper-iterative approach, because this stuff is literally changing every week: new things to play with, new capabilities, and they make things that were hard in the past very easy to do in the present. But then these same people will come back and say: okay, well, I'd like a big giant statement of work, and I'd like the Gantt chart and the work breakdown structure for the next nine months or the next year. We tell them: that's not how this works, right? You're thinking about projects the same way you would think about building a bridge or some very large construction project, where there are all these different workers and we need to schedule them and assign tasks, and so on and so forth.

Speaker 2:

And when you're in emerging technology spaces, you have to think very differently. So I think a lot of people just get stuck in the past. They get stuck in the way they've been thinking about IT projects, and they want to take that same thinking and apply it to a world that has changed dramatically and is still changing dramatically. So that's the first piece. If you're a leader thinking about AI adoption, A, start fairly small, but B, you have to be willing to iterate. People come in and say: we want to build this thing that's going to give us all this competitive differentiation, and we want to do it very, very quickly.

Speaker 2:

And then you get into it, and for part of it you go: meh, turns out we can't do that, right? Turns out that's not going to work. And their view on it is: oh, then this is just a complete failure. So you have to set those expectations up front, and you have to hold those expectations as a leader: at least some percentage of what you do is going to fail. It is not going to work out the way you think it will, and that's okay, because you're still going to get a huge benefit. You're still going to get an amazing return on investment from doing this. You're going to have things that can potentially transform your business and get you out ahead of your competition.

Speaker 2:

But if your expectation is perfection, if your expectation is that we built this giant project plan and every single thing in it is going to go perfectly, it's all going to go to plan and end up perfectly, then don't do it. Don't bother, because that will not happen. There's a 100% chance it will not occur. So it's having that fail-fast mindset, that mindset that not everything is going to work. Because look, even if 80% of it works, you're 1,000% better off than you were before. If you have that recognition up front that this is a rapidly emerging, disruptive technology space, you'll do well.

Speaker 2:

And you set those expectations with all the different stakeholders right up front. We tell this to customers: guys, not everything we're telling you here is going to work. I tell them that before we even sell them the deal. There's a 0% chance it's all going to work out exactly the way we say it will.

Speaker 2:

And if you're not comfortable with that, then don't do it. But if you are, if you make the realization that even if only 80% of this stuff works, I get a thousand percent return on investment, who's not going to take that deal, right? So it's getting out of that old way of thinking and into a very new way of thinking. That's, to me, the most important thing leaders need to recognize when they're trying to align these technology initiatives with business outcomes and customer value: it's not all going to work, right? But you're still going to be better off, way better off, than if you had done nothing. You just have to have that recognition up front.

Speaker 1:

100%. There is no happily ever after anymore with AI initiatives, so it is about being ready, understanding, and adjusting your expectations to the real potential and outcomes. Well, it made me think about another question. When we're talking about technology and IT and business and customer centricity, I see different approaches to where AI actually belongs in the company. So what is your take on that?

Speaker 2:

You know, it's interesting. When I look at AI, I think there are companies that aren't ready for it yet, companies that probably don't have the right level of maturity to take on projects like these. And my advice to them is: you probably need to get to that level of maturity quickly, because otherwise your long-term prospects don't look good.

Speaker 2:

And the really funny part, Emi, is that when we go in and start working with customers, the first thing we do is explain what we do, and then we run an ideation session with them. We show them the technology, we show them some of the things that are possible, and at first, you know, they're kind of like: I don't know about this AI stuff. I've heard about it, I'm interested in it, but I'm not sure I believe in it. So you show them some things, you show them the possibilities, and then you sit down and say: alright, how would you apply this to your business? How could we use this within the context of your business to do things that not only can you not do today, but that no one could have built you a solution for five years ago? Some of the solutions we've built for our customers were literally impossible five years ago. There's no way we could have built the system, much less done it at a reasonable cost where you get a positive return on investment. And what's fascinating is that if you've been running a company for 20 years, you probably had some really good ideas 10 or 15 years ago that you looked at and said: I'd love to do this, but I'm going to have to employ 50 people to do it, and it's just not worth it. Alright, well, now what if you could employ one person and accomplish what 50 people could have done five years ago? And the funny thing is, they've written off those ideas. They said those ideas are impossible, I'll never get a positive return on investment, and they don't even think about those ideas anymore.

Speaker 2:

So part of it is about helping people resurrect these old ideas that they buried because there was no good way of doing them cost-effectively, and getting them to rethink that, because now we can do it at a fraction of the cost we could before. Before, we had to employ a lot of humans, and humans are expensive, right? Humans get sick, humans have flaws. But we can build things that are much, much less expensive; they don't get sick, and they run 24 hours a day. So bringing some of those old ideas from the past into the present and saying, could we go tackle some of those? That's maybe the most gratifying part of this.

Speaker 2:

Right, when you have people that literally wrote off an idea 10 years ago, a brilliant idea with just no good way to do it cost-effectively, and now we can, and you get them to go back and think it through again, it's an amazing part of this job. Just seeing the wonder in their eyes, like: whoa, wait a minute, I can do this now.

Speaker 1:

It's brilliant, it's absolutely amazing. Exactly, when dreams come true, it's so magical, really. And when you can bring that magic into their lives, their businesses, it must be a great feeling.

Speaker 2:

It is, it really is. It's very satisfying, it's very gratifying. It's not just that we get to play with cool technology; it's getting to see people live out their dreams, seeing business owners get capabilities that they thought were impossible. There's nothing that excites me more than that, right? Doing things now that were impossible a few years ago. It's a great thing to get to do with people.

Speaker 1:

Totally. Lucky you, Alex, and lucky them to have you. What hidden or underestimated pitfalls have you seen in real-world AI implementations that leaders often overlook? You have already told us so much about all those cases and described how leaders should lead their projects and take their implementations to successful development and rollout, so I would like to learn more about what you know but those who are listening to us and watching this video probably didn't think about, or haven't had time to take into consideration yet.

Speaker 2:

Yeah.

Speaker 2:

So look, I'll tell you the biggest one by far, and I'll give you an example of it in a minute. The biggest pitfall that I see today, and probably for the last few months of implementing AI in the real world, is that AI, by its very nature, if you look at large language models and generative AI, is non-deterministic. That means if I give it the same exact input once, I'll get this output, and if I give it the same exact input again, I might get a slightly different output, right? And that's not a flaw in the system; that's the way these things are designed. They're designed to be a little bit non-deterministic. The problem you run into, though, is that people want to go tackle problems that need 100% accuracy. And I'll give you a great example of this: self-driving cars.

Speaker 2:

Now, I don't know about you, but I don't have a fully autonomous self-driving vehicle, and I don't know anybody else that has one. First of all, even if I did have one, I wouldn't get in it. There's no way I believe in the technology that much, right? Because even if there's a one-in-a-thousand chance it's going to crash and kill me and my family, all I have to do is get in that car a thousand times and I'm pretty much dead. And part of the reason we don't have fully autonomous self-driving cars now (and Elon Musk said we were going to have these by 2020, right, five years ago) is that there was only some truth to that, in the sense that you can buy a car today that will drive down a highway or a motorway on a bright, sunny day in perfect conditions, and it does just fine. Drive down a very twisty road at night in a torrential rainstorm? It can't do it. So you can solve 90% of the problem, and what we see is that if you pick use cases where you say 90% is good enough, I don't need 100% accuracy, I don't need this thing to be 100% perfect, you'll do well. The biggest pitfall I see is when people assume they can solve 100% of the problem with an AI solution and get these very deterministic results. That's where these things fall down. That's the biggest pitfall, and that's what leads to the biggest deflation in expectations. So I predict it'll be a long time before we have self-driving cars that can drive in the middle of the night.
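A toy example makes the non-determinism point tangible. The sketch below is not any vendor's real decoder, and the word scores are invented; it only shows the mechanism: greedy decoding (temperature 0) always returns the same word, while temperature sampling can return different words for the identical input.

```python
# Toy decoder showing why the same input can yield different outputs.
# The "scores" are invented numbers, not any real model's values.
import math
import random

next_word_scores = {"approve": 2.1, "review": 1.9, "reject": 0.4}

def pick_next_word(scores, temperature):
    if temperature == 0:
        # Greedy decoding: deterministic, always the top-scoring word.
        return max(scores, key=scores.get)
    # Sampling: softmax with temperature, then draw at random.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

print([pick_next_word(next_word_scores, 0) for _ in range(5)])    # identical every run
print([pick_next_word(next_word_scores, 1.0) for _ in range(5)])  # varies run to run
```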

Speaker 2:

It's the classic 90/10 problem: the first 90% takes a fraction of the time and effort, and the last 10% takes the other 90% of the time and effort, because that last 10% is just as difficult to solve as the first 90%. So if your expectation is that you're going to solve the whole thing with 100% accuracy, with 100% fidelity, you'll fail. We try to work with customers on choosing the right problems to solve, where we say: look, we're not going to generate a 100% accurate solution. We're going to generate a 90% accurate solution that a human is then going to take a few minutes to look through and make sure there are no issues. And for a lot of use cases that's fine, because I've just eliminated 90% of the effort; I don't need to eliminate 100% of it. And I think that goes to the whole human element of it: I'm not a big believer in AI with no human oversight. To me, that's really, really dangerous.

Speaker 2:

And not only is it dangerous for the long-term implications of these things; it's dangerous because that's not how they work. That is not how they're designed to work. In most cases they need that human oversight, because there are still a lot of things we can do that they can't, and that will probably be true for the foreseeable future. You'll need that human judgment; you'll need the humans to go look through and make sure. And these things hallucinate, right? I mean, we've all used ChatGPT and had it come up with some crazy answer. That still happens. Hallucinations still happen because these things are pretending to be intelligent, so you're going to get some element of that non-determinism, those hallucinations. There's no way to prevent them 100% of the time. So bringing the humans into it, making sure the output is right, that it is accurate, and maybe tweaking it before it goes out the door, I think that's going to be the case for at least several years to come.
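The "90% automated, a human checks the rest" pattern Alex describes is often wired up as a simple confidence gate. Here is a minimal sketch under assumed names: the `Extraction` type, the 0.90 threshold, and the sample invoices are all hypothetical, not a specific product's API.

```python
# Sketch of a human-in-the-loop gate: auto-accept confident AI output,
# queue everything else for a person. Names and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Extraction:
    invoice_id: str
    amount: float
    confidence: float  # 0.0-1.0, however your pipeline estimates it

REVIEW_THRESHOLD = 0.90
review_queue: list[Extraction] = []

def route(result: Extraction) -> str:
    if result.confidence >= REVIEW_THRESHOLD:
        return "auto-approved"          # the easy ~90%, no human touch needed
    review_queue.append(result)         # the hard last mile stays with a person
    return "queued for human review"

for r in (Extraction("INV-001", 4200.0, 0.97), Extraction("INV-002", 880.0, 0.62)):
    print(r.invoice_id, "->", route(r))
print("awaiting review:", [r.invoice_id for r in review_queue])
```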

Speaker 1:

I agree, it's a beautiful example, and it made me think about my conversation with a friend from Florida earlier today. We've been working with all those pictures and visualizations, and he said: how long will it take until it can render text on pictures properly? Why can't it manage simple text?

Speaker 2:

And they just fixed that. They just fixed it this week.

Speaker 1:

Yes, now it is closer to the point, but still, I've been using the system today and, you know, it is not 100%. As you say, it is close, but it still requires our supervision. We still have to be in the driver's seat and understand that there is bias, there are hallucinations, and the outcomes, which might be dramatic for us, should depend on our decision-making after all.

Speaker 2:

Absolutely.

Speaker 1:

But now we're coming closer to the next level in this interview: as we move towards more autonomous systems and AI agents, how should companies build trust in them?

Speaker 2:

Trust but verify, I think, is the old saying, right? Yes, you can trust, but you need to verify the outcome. So I think that one is kind of what we just talked about: human oversight is important, not only from an accuracy and fidelity standpoint, but also because, if you look at the way we developed these systems, we trained them on humanity's collective knowledge and collective wisdom, and, the other part of this, we trained them on humanity's stupidity as well. We didn't just give them the good stuff; they got the bad stuff too. They got all the dumb ideas in their training as well. So you've always got to have governance in a situation like that, because we have sort of molded these things in our own image, and we as humans are very flawed creatures. We are emotional, we don't always make rational decisions, we don't always have the right ethics. So to expect that an AI system trained on everything we do is not going to have those flaws is crazy, right? That doesn't make any sort of logical sense. So I think this notion of governance is really important, and we're starting to see more and more of it, and I think it's a real conundrum.

Speaker 2:

There's governance in the sense of governing things from a true regulatory standpoint, and then I think there's governance at the company level as well, and we're seeing some challenges there. A lot of customers we go into, we ask them: so what's your AI policy? And either they don't have one, or the policy is "don't use it," because they're so worried that their private information is going to get out into the public accidentally. And that's a valid concern: if you don't know what you're doing, you can take all this very confidential information and expose it to the rest of the world if you're not very, very careful. So I think there needs to be a level of governance at the organizational level, and then there's the true governance, where you get into governmental, regulatory territory. I think that's where it gets really interesting, because there's a big disconnect right now between different governments around the world. If you look at the EU, the EU's stance on AI is very different from the US's, and the US's is very different from China's or other countries', right? So the trick is: can you regulate this thing in a way that doesn't impede progress?

Speaker 2:

And that's something I don't have a good answer for. I don't know. I'm not a politician, and I'm not a policymaker.

Speaker 2:

I don't know what the right answer to that is, but I think what we're seeing is that there are a lot of things happening, like just the idea of copyright. These systems have been trained on a whole bunch of copyrighted material. We all know that, everyone knows it, even though the OpenAIs and the Metas and the Googles of the world don't want to overtly admit it. We know it's all been trained on a whole bunch of copyrighted information. And from a governance standpoint, the governments are looking at this like: well, is it that big a deal? Is it worth saying, no, you can't train this thing on copyrighted material and you have to go throw those models away? Or is the benefit to humanity of these things so great that we're willing to overlook those transgressions? That, to me, is a very small part of it, but it's illustrative of the kinds of challenges that regulatory bodies and governments are facing right now, and I would not want to be those people. There's no good answer, right? There's no perfect answer to that problem. It's about making trade-offs between what's good for humanity in the short run and what's good for humanity in the long run. And gosh, I hope they're really good at that, because I'm not smart enough to give you a good answer. But hopefully they'll get it solved.

Speaker 2:

But I think, you know, the idea is we do need oversight. We can't trust these systems completely, whereas before maybe we could, when we had very, very deterministic computer systems. You could trust those, right? And we use them every day. If you go to an automated teller machine to get cash out of your bank account, there's an implicit trust that, A, it's going to give you the money you've requested and, B, it's only going to deduct that amount from your bank account. So we've learned to trust systems that are deterministic. The question is, can we trust systems that are non-deterministic, or aren't completely deterministic? I think that lack of trust right now is probably justified, and you need that human oversight to establish a relationship of trust with autonomous agents and autonomous systems. So I think this is going to be maybe the biggest area of policymaking in government that we've seen in our lifetimes, because it's difficult, it's complex, it's balancing priorities, and there's no perfect answer to it.

Speaker 1:

I totally agree, and it's a great point. I think I saw the news about it just a few weeks ago, that Sam Altman requested to be able to train models on data like that officially, because all the copyrighted data, yes, it was used, but it was somehow between the lines. And now the question is there, this big elephant in the room: how are we going to deal with it? So I'm also very curious about how it's going to unfold and, exactly as you said, I wouldn't like to be where those conversations are happening, making those calls, because there is no right answer to that, and it is not easy. Once again, you see, it's non-deterministic even there; there is no right or wrong.

Speaker 2:

Right, right, exactly. It's non-deterministic even at a policy level, and it's hard. I think this is going to be one of the hardest challenges we've seen from a regulatory standpoint, probably in 50 years, probably in our lifetimes, because it is so complex and you're trying to balance priorities. You're trying to balance individual rights with collective rights. You're trying to apply existing laws and regulations, as is, to this entirely new thing. There are always going to be gaps and there are always going to be challenges if you just take what you have and apply it to this new paradigm.

Speaker 1:

Totally. I'm just not sure humanity has 50 more years; things will tip in one direction or another much earlier than that, much sooner, possibly. Let's see how it goes. And now I'm thinking about the resistance from humans, from people, exactly for the reasons mentioned earlier. Within organizations, a lot depends on the company culture and on the humans who are supposed to adopt those new technologies. So what role does organizational culture play in accelerating or blocking AI adoption and transformation?

Speaker 2:

It's huge. It's maybe the most important factor. And I'll tell you, we've gone into companies and talked to them about AI, and they're like: this is great, this is fantastic, we love this, because they see the potential for their business, they see the potential for their organization, and they're looking at it very optimistically, right? But at the same time, we go into some companies and we're just met with open hostility: this thing is just going to eliminate everybody's jobs; we're a human-centric business; we don't want it. And what I've found is that innovation within organizations is largely a function of culture. If you have a culture that doesn't reward innovation and thinking differently and doing things differently, guess what? You won't innovate. You will stay where you're at, and that might be fine if you're in an industry that doesn't change very much. If you're in one of those static, steady-state industries, maybe you don't want too much innovation, right? Maybe you just want to optimize for what you have right now.

Speaker 2:

I think the challenge with AI is that it's going to affect virtually every business. In fact, I can't think of a business it isn't going to affect over the next five years; it's just a matter of how much. So if you have a culture that is used to the status quo, a culture that rewards pure execution and doesn't reward ideas, doesn't reward thinking differently, then you're probably going to be at a very big competitive disadvantage over the next five years. So work to encourage people to think, encourage people to have ideas, encourage people to question: why are we doing it this way? There's an old saying that if you're doing something a certain way because that's the way you've always done it, you're probably doing it wrong, and I'm a big, big believer in that. I watch company after company, and it gets worse the bigger the company. In my estimation, small and medium businesses actually pivot very quickly. They can think differently, change, and move in a different direction much more rapidly than very large enterprises, where that takes forever. So I think the small and medium business world has a big advantage there, if you have a culture that rewards thinking outside the box, doing things differently, trying things, knowing that 80% of what you try may not work; it's the 20% that does work that changes the game for you. So I think culture is a huge determinant of success in accelerating or blocking AI adoption, and in transformation overall. It doesn't have to be AI; the cultural element is huge.

Speaker 2:

And then I think there's this humanist view of things, which says: I don't want these machines running around, A, displacing people's jobs, B, doing things in a very impersonal way, and C, taking over processes that maybe do actually require some human oversight. And there's a lot of fear, and when there's fear, it's typically fear of the unknown. There's this natural reaction to say: no, we're not going to do anything. When you're scared of something, you don't want to get anywhere near it: I don't want this thing infiltrating my business like it's some kind of virus or pandemic. They view it as: I'm just going to keep it at arm's length, and we're not going to do anything. I think that's the wrong approach.

Speaker 2:

You can implement AI in a very human way, where it augments what humans can do. It takes away a lot of the repetitive, mundane drudgery of jobs and frees people up to do the things that only humans can do. And if you look at it that way, if you have a culture that thinks about people that way, it's actually a very humanist thing to do. No one wants to do just repetitive tasks over and over and over; that's a horrible existence, doing the same exact thing every day, eight hours a day. If I can free people up to use their creativity, to use their imagination, I can not only generate a ton of business value, I can actually improve the quality of their lives. And if you improve the quality of the lives of the people in your company, they'll probably stay with you, right? They'll probably love what they do. And when you have passion and love for what you do, in my experience you tend to do a much, much better job of it.

Speaker 2:

Take expense reports. I hate doing expense reports, because it's just drudgery. I have to photocopy or take pictures of all these receipts, I have to enter them into the system. I might as well be a data entry clerk if I'm doing that stuff, and I hate doing it. I want an AI where I just say: here you go, figure it out. That frees up time for me to do what I actually do for a living, what really adds value to the business. So I think it's all about how you look at it; perspective here is really, really important as we go forward. If you don't have a culture that rewards change and thinking differently and innovating and trying things out, you may not have a business in five years, honestly. But the companies that do, I think, will do extremely well in the next few years.

Speaker 1:

I totally agree, and you touched on so many important topics. I've been thinking about large corporations, with their huge budgets but so rigid, so much slower, limited by their legacy and the silo effect and everything that comes with big companies, versus those SMBs, which are faster paced and have this entrepreneurial mindset, but at the same time their budgets are smaller and they navigate in a completely different way. So the world is redefined by all those ifs and buts, and it's really interesting to understand and to see how different players are navigating this game in different ways. Exactly, and at the same time, it's a great example with the expense report.

Speaker 1:

I remember every time I was thinking about a business trip, I was thinking about that report in advance: okay, if I do this, then I will need to deal one extra time with that as well. So it belongs to that story and is very relevant, at least for me. I can also think about another aspect: those who are human-centric, those who are trying to stand aside right now because they don't want artificial intelligence to impact human centricity and the place of human beings in the world. Maybe they, of all people, should double down on implementing artificial intelligence, because if they don't, then somebody else who doesn't care about human centricity will go fast forward and redefine everything, without a say from the side that cares about humans for real. And that's why it is important: if you care about humans, then you should think about how you are going to use AI, implement it, and grow your business based on the new technologies, because it is also up to you to define what the future is going to look like.

Speaker 2:

Indeed, indeed. When I was at HP, we had a saying, and that's that the best way to predict the future is to invent it. I've always loved that idea: that if you really want to know what's going to happen, you can't just sit back and wait for it to happen.

Speaker 2:

You have to go do something, right? And at least in a small way, you can invent what the future is going to look like, as opposed to the future just being something that happens and that you then have to react to.

Speaker 1:

Definitely, 100%. And I'm working now on developing an educational program together with an academy in Sweden, and it's called Lead the Change, because that's exactly it: to lead the change, to redefine the future, and to belong to those who choose what it's going to be and look like.

Speaker 2:

That's great.

Speaker 1:

Yes, Alex. What powerful advice would you give to today's leaders to navigate AI strategies with more impact and long-term success, keeping human centricity in mind?

Speaker 2:

I mean, I think the first powerful piece of advice I would give people is: do something. Don't just sit around and wait on this. Be part of the process, like we were just talking about. I watch a lot of people kind of sitting on the fence right now, going: let's see how this pans out. Is this another one of those bubbles, where three years from now we're going to forget all about AI? And this has happened throughout history, right, where everybody hyped something up and said, oh, this is the next big thing, and it turned out not at all to be the next big thing. There's very little question in my mind that this is definitely the next big thing. This may be the biggest thing ever. So my advice is: try it out, get involved, do something. Don't just sit there and wait to see how this pans out, because if you wait, there's a very high probability that your competitors will pass you by, because they're actually doing something. And in doing something, they may not get the short-term results they think they're going to get, but they're learning, and they're building that institutional muscle, that institutional capacity, for using these technologies. They're getting better at it. And if you're sitting on the sidelines just waiting, you're not building that institutional capacity, you're not learning from what's happening out there and how you can apply these things to your business or your industry or whatever vertical you happen to be in. So that's my first bit of advice: go out there and try it. Use this stuff, experiment with it, see how it works. And if you really don't know how, bring in somebody.

Speaker 2:

This is one of the things we do for our customers: we go in and we actually teach them. Here's what this can do, here's what it can't do, here's how it might apply to your business. So not only do we have a consulting business around this, we also have a training and education business, where we come in and teach people. So if you don't want to try it out yourself, bring in somebody who does do it, a partner that can teach you about it, show you some of these things, and start to work with you on it. But get started. Don't wait. When something's moving this fast, if you miss this bus, this train, you're probably never going to get on. It'll probably be too late for you. So that's the first piece of it.

Speaker 2:

The second piece of it is: engage the humans as you're building things. Don't do some big corporate-level initiative that you then try to push down onto all the people within your company. Work with them to build these things from the bottom up, from the user's perspective, from the people doing these jobs today. Get their input, make them part of the process, and not only will you end up with a lot better buy-in, you'll probably end up with a whole lot better solution as you go forward, solutions that actually augment what people can do rather than trying to replace what people can do.

Speaker 2:

I'm a huge believer in this notion of using these things to take away the work people hate. That's one of the first things I ask when I go in and talk to people in companies: what do you hate about your job? Other than the people you work with that you may not like (I can't fix that problem), what are the tasks, the things you do on a daily basis, that you hate doing? And we start with those, because if you can solve those problems, you've got them. If they can see what's in it for them, and it meaningfully impacts what they do and takes away a thing they dislike doing, let me tell you, they'll support you from here until the end of time. So it's all about getting the humans involved in this process as early as possible, understanding what their needs are, understanding the things they don't like doing. Focus on those things, and you'll end up with a tremendous level of support and adoption within your organization.

Speaker 1:

Absolutely brilliant. I love your approach, and I love your way of turning dreams into reality. Thank you so much, Alex. I really appreciated everything you've been sharing with us today, and thank you for inspiring and guiding leaders from strategy to successful implementation. I love it.

Speaker 2:

Thank you, awesome, great talking with you.

Speaker 1:

Thank you for joining us on Digital Transformation and AI for Humans. I am Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset and leading with empathy and insight. Subscribe and stay tuned for more episodes, where we uncover the latest trends in digital business and explore the human side of technology and leadership. Until next time, keep nurturing your mind, fostering your connections and leading with heart.
