Digital Transformation & AI for Humans
Welcome to 'Digital Transformation & AI for Humans' with Emi.
In this podcast, we delve into how technology intersects with leadership, innovation, and most importantly, the human spirit.
Each episode features visionary leaders from different countries who understand that at the heart of success is the human touch - nurturing a winning mindset, fostering emotional intelligence, soft skills, and building resilient teams.
Subscribe and stay tuned for more episodes.
Visit https://digitaltransformation4humans.com/ for more information.
If you're a leader, business owner or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI - I'd love to invite you to learn more about AI Game Changers - a global elite hub for visionary trailblazers and changemakers shaping the future: http://aigamechangers.io/
S1|Ep94 Leading into the Future: Vision, Leadership and Innovation Insights from Intel and Google DeepMind
Let's tap into vision, leadership, and innovation insights shaped at Intel and Google DeepMind, to lead into the future, together with my amazing American guest, Steve Brown, from Portland, Oregon.
Steve has been operating at the edge of intelligence long before AI became mainstream, as an entrepreneur, advisor, and former futurist and executive at Google DeepMind and Intel, working at the intersection of AI, high-tech, and large-scale transformation.
Steve has advised global organizations including Bank of America, Lenovo, and Nespresso. He is the author of The AI Ultimatum and The Innovation Ultimatum, which earned him features in Fast Company, Forbes, and more.
Today, Steve is a global keynote speaker and trusted advisor to Fortune 100 companies.
Steve is a part of the Diamond Executive Council of the AI Game Changers Club - an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy.
Key Topics Covered:
- Emerging AI & innovation trends beyond mainstream narratives
- Leadership misreads of the future
- New leadership risks in embedded intelligence
- The real opportunity in AI transformation
- When "future-ready" organizations are exposed by AI
- Durable advantage vs short-term AI wins
- Responsibility and governance in autonomous systems
- The assumption leaders must unlearn
- Steve's strategic advice
Steve on LinkedIn: https://www.linkedin.com/in/futuresteve/
https://www.stevebrown.ai/
The AI Ultimatum
The Innovation Ultimatum
About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories for corporations and individuals alike, from driving digital growth, managing resources, and leading teams in large companies to empowering leaders to unlock their inner power and succeed in this era of transformation.
AI GAME CHANGERS CLUB: http://aigamechangers.io/
Apply to become a member: http://aigamechangers.club/
Podcast: https://dt4h.io/podcast
AI Leadership Compass: Unlocking Business Growth & Innovation https://www.amazon.com/dp/B0DNBJ92RP
Book a free Strategy Call with Emi
Connect with Emi on LinkedIn
https://digitaltransformation4humans.com/
Transformation for Leaders
Hello and welcome to Digital Transformation and AI for Humans with your host Emi. In this podcast, we'll delve into how technology intersects with leadership, innovation, and most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch, nurturing a winning mindset, fostering emotional intelligence, and building resilient teams. Today, I invite you to tap into vision, leadership, and innovation insights shaped at Intel and Google DeepMind to help lead the future together with my amazing American guest Steve Brown, joining us from Portland, Oregon. Steve has been operating at the edge of intelligence long before AI became mainstream as an entrepreneur, advisor, and former futurist and executive at Google DeepMind and Intel, working at the intersection of AI, high-tech, and large-scale transformation. He has advised global organizations including Bank of America, Lenovo, Nespresso, Cameco, and Intuit, helping leaders prepare for what he calls the intelligence age, a shift so profound that many organizations still struggle to even grasp its implications. Steve is the author of The AI Ultimatum, a clear warning to leaders that AI-driven transformation is already here and waiting is no longer a neutral strategy. His earlier book, The Innovation Ultimatum, earned him features in Fast Company, Forbes, and more. Today he is a global keynote speaker and trusted advisor to Fortune 100 companies, focused on how leaders can move from confusion and delay to clarity and intelligent action. I'm honored to have Steve as a part of the Diamond Executive Council of the AI Game Changers Club, an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy. Welcome, Steve. I'm so happy to have you here in the studio today.
SPEAKER_01:Thanks, Emi. Let's have some fun together. I'm looking forward to a chat with you.
SPEAKER_00:So am I. Let's start the conversation and transform not just our technologies, but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description. And don't forget to subscribe for more powerful episodes. If you are a leader, business owner, or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI, I would love to invite you to learn more about AI Game Changers, a global elite club for visionary trailblazers and change makers shaping the future. You can apply at AIGamechangers.club. Steve, to start with, I've been looking forward to our conversation for quite a while, and I would love to open it up with learning more about yourself, about your journey, about your passions. Could you share a little bit more?
SPEAKER_02:Sure. So in the introduction, you said, you know, American, and I suppose I am as well, but I grew up in England. So I grew up in the UK. I worked for Intel there when I was a young lad that still had a good head of hair. And I moved to America, to Portland, Oregon, again working for Intel, in the late 90s. So I've been here almost 30 years, and I've managed to keep my accent, which I'm very happy about. But over my career at Intel, which was a 30-year relationship if I include my time interning with them, I did a couple of degrees in microelectronics. So I could design chips and boards and write software. I don't do any of that now, but at least I understand the engineering side of the equation. I mostly grew up in the business at Intel. So whether that was line of business, managing products, going to see customers, or helping people think through digital transformation on the communication side, I used to run Intel's worldwide event program for developers. So I've had my finger in lots of different pies and I've had a very varied career. I remember somebody at Intel once calling me the Madonna of Intel because I kept reinventing myself every few years, back in the days when Madonna was doing that. Once I got to the mid-2010s, 2016 or so, I rose to be one of Intel's two futurists. So I worked in Intel Labs thinking about what the future of the world would be like five, 10, and 15 years out. It became clear to me then that Intel had a murky future, let's say. And so I decided to go out on my own. So I've been helping people think through the future as an independent futurist for some time. And then I got asked to go to London and spend a couple of years working for Google DeepMind. So I went and did that just after COVID and spent some time with just amazing people there. You asked about my passions.
I have always, since I was a little boy, been fascinated by the future and how technology will allow us to build a better future for people. So I suppose that's my driving passion. I have side passions. I love music, I love photography, I love video editing. I always use lots of edited videos in my keynote talks because I love putting them together. And my wife and I collect wine. We enjoy drinking really nice wine from around the world. So, yeah, I enjoy doing lots of different things, but my driving passion is people. When I was working at Intel in Intel Labs, my boss, Dr. Genevieve Bell, was a cultural anthropologist. And she really gave me the perspective of always looking at everything through the lens of people. What do people want? What do they care about? What are their passions in life? What are their motivations? What are they scared of? What are their ambitions? And to not just think about technology and changing business and creating new capabilities, which is all well and good, but how can you use technology to make life better for people? And so I suppose if I have any guiding passion or principle, it's really how do you use technology to build a better future for people, in whatever way that is. So that's me.
SPEAKER_00:Fantastic. Thank you so much for sharing all this. It's so inspiring, and I can see that we have quite a lot in common in many of those aspects. I'm also very much excited about the future, and it needs to be somehow taken back into the human spirit and to the value of humans, who we are and how we want to see our reality, not just today, but many years ahead. Steve, as a futurist who has worked inside both large-scale technology ecosystems like Intel and frontier AI research environments like Google DeepMind, and as the author of two books on the future of innovation and artificial intelligence: which emerging trends do you believe will most reshape the world over the next five to ten years, beyond what is currently discussed in mainstream AI narratives? I've been following the latest reports from Davos 2026, and there are so many conversations on all these topics, and I absolutely want to hear your vision.
SPEAKER_02:Yeah, so clearly AI is front of mind for many people right now. And at Davos, you saw that there's an acceptance and a realization that AI is going to reshape everything, it's going to reshape work, the way we communicate. It is going to reshape our economy, rewire our society. And I call myself a futurist, which is a fancy, slightly weird title. And it doesn't mean that I predict the future. That's a fortune teller. I try and empower people to ask and answer two questions. What's the future we want to build? And what's the future we want to avoid? And so, as a futurist, you're looking at how trends are going to naturally progress over time and how they will converge and run into each other. And when you have the convergence of these trends, what does that make possible in a certain time frame? And I think we're in this moment where people are realizing, oh, the world is going to be very different than it was even three years ago. And it can feel a little bit overwhelming, like none of us have any agency in this. We don't have a say. But I call myself a futurist. I think we all need to call ourselves futurists. Everybody listening to this right now, I deputize you all as futurists. And what that means is thinking about and having conversations with other people about what the future is that we want to build and what we want to avoid. So I think that's what you're seeing happening at Davos, this realization that we need to be having this conversation. I think that the change that we all sense is coming is coming much quicker than 95% of people realize. Maybe that's starting to get across to the people at Davos, but I'm not sure if they understand just how quick and profound this is. If we break it down and look inside the box that says AI, right? There are lots of different flavors of AI, different capabilities, and those capabilities are improving over time.
The flavor of the moment that everybody's been talking about for about the last year or so, and seems to be front of mind for most people, is agentic AI. An AI that is able to perform tasks, it has agency, that's why it's called agentic AI. So it can use tools and APIs to connect to existing software and services and get things done for you. So AI stops being this turn-by-turn conversational chatbot and turns into essentially a digital employee, a digital coworker that you can partner with human talent to get more done. So that's kind of the flavor of the moment. We're still not quite there yet. There are still some more capabilities that have to be built out. We can get into the detail of what that is if you want. But what's coming next, and I think is going to be the biggest change for us all, is gonna be the next few waves of AI. So agentic AI, spatial AI, also talked about in the terms of world models. There's a lot of conversation at Davos about world models, and then beyond that, physical AI, these are the waves that are really going to change things. So what are those things? Spatial AI is the ability for an AI to perceive the world, and I'm not just talking about handwriting recognition and machine vision. I'm talking about an AI that can intelligently see the world, that understands objects and the relationships between objects and people and places and events and causation. If I do this, then that happens. Spatial AI is a way of AI understanding the world, not through the lens of language, which is what large language models are, but to be able to understand a view of the world. And that's important because it then enables an AI assistant, which could be an agent, the ability to understand a human being's context right now. And it can help them a lot more in the moment.
And so connecting agentic AI to spatial AI, whether that's a camera on your phone that you hold up so that your assistant can see what you can see, or a camera that's in the hinge of your glasses. You're going to see quite a bit of that coming this year, or perhaps on a pin, on your chest, whatever it is, being able to have that AI understand a person's context, whether that is somebody on the manufacturing floor in a uranium mining company, or somebody in a hospital, or just a person walking down the street trying to get some things done, running some errands. Having an AI assistant that understands what you're doing right now, it's much more able to help you with useful insights, information, advice, and to support you if it understands your context. So that's a big, big deal. Not a lot of people have been thinking as much about that. And then world models are very tightly connected with that. Spatial AI lets an AI understand your context and sort of see and hear, if you like. So it gives AIs eyes and ears. World models give it an imagination, the ability to say what if. So if I pick up this glass and shake it about, the water will move. If I tip it, the water will pour out. These are what-if statements. Intuitively, as a human, I know that. I know that fluids pour and that if I tilt it enough, it will fall over and water will go everywhere. An AI doesn't intuitively know that unless it has a deeper understanding of the world. So at the moment, most AIs understand the world and how it works through the lens of language, large language models. World models attempt to go beyond that and understand the physics of the world, causation, the what-ifs. So it enables AI models to have an imagination and to project.
So when you have a robot, for example, that has a world model built inside it, and you say, Pour me a glass of milk, it can think, oh, okay, I need to pick up that jug over there, because it has milk in it, and I need to tip it, and the milk's going to come out into the glass. Trivial for a human being. We learn this when we're five or six years old, right? But for a machine, that's really quite difficult. And that's what a world model is all about. Once you have AIs that better understand the world and how it works, and I gave some trivial, sort of more physical examples with fluids, but just understanding how the world works, again, AI can help us much more because it understands the world that we are navigating, and it's able to help us much more intimately in the moment because it has a deeper understanding of what we're trying to achieve and how we might achieve it. So those components, spatial AI, world models, and ultimately physical AI, which is where we embody AI in a machine and give it physical agency. So that's humanoid robots and other types of robots. I would argue a self-driving car is in that category as well. We're going to start to see machines everywhere that are autonomous, that we can have a conversation with. I get a lot of the same questions when I'm out interacting with people. One of them is what's going to be the next ChatGPT moment, right? The moment that everybody goes, Oh my God, I had no idea, right? And wakes everybody up to how fast things have moved. And I think that that next moment will be when you see a robot walking down the street on an errand for somebody. And that is not that far away. You're going to see humanoid robots in American homes this year, 2026. And it's just going to go from there. So these next three waves change our relationship with AI because it can understand our context more. But it's also going to change our relationship with physical work because now robots will be able to do physical work in our space.
And I think people have really underestimated the speed that will happen. It will be limited largely by the manufacturing capacity to build robots at scale. But guess what people are going to do? They're going to use robots to build more robots. And so you're going to see that, you know, hard-takeoff cycle go much quicker than people think. So those are the things I'm watching that I think people are not talking enough about. But it starts with agentic and goes from there. And work changes this year, and then the economy really starts to change next year and the year after.
SPEAKER_00:Exciting times ahead. And this was on my mind, because I wanted to mention that I'm looking forward to all those amazing things technologies can offer to us as humans, but at the same time, I can't avoid thinking about all the scary and dangerous parts which come with all that development. And I see that many leaders, those who are developing AI, are referring to this new world as yet another technological revolution, the fourth one. But do you see that it's exactly the same pattern, the same type of upgrade and transformation, or is it something completely different? If we take it in a very simple form, is it the same or is it different?
SPEAKER_02:In some ways it's the same. In some ways it's different. So let me unpack that. If we choose to use AI as an amplifier for people, then in some ways it can be the same as previous industrial innovations. I think of a combine harvester that enables a single farmer to harvest an entire field that would have taken a small army of people, you know, 500 years ago. So there's an amplification there, and in some ways, we're going to amplify people and their abilities by connecting their cognition and their intelligence with machine intelligence and amplifying their abilities. So in some ways it's the same. What's different here, though, is that AI is a multidimensional technology. What do I mean by that? Over the last 50 years of IT, the driving use of information technology has largely been to boost productivity and efficiency. And that's been the driving thought and the ethos and what leaders have used IT for. How am I going to cut costs and deliver more impact, boosting my employees' productivity? But it's this sort of cost-cutting efficiency mindset. You can do that with AI too, but this time you can also use it to not just boost someone's productivity, but to boost their creativity, to expand their knowledge, to improve their decision-making skills, to improve their intuition. You can pair people with an AI and amplify them in ways you've not been able to do in the past. So I think it is a bit different this time. We're also seeing exponential change. And the thing with exponentials is they feel linear to begin with. Exponentials feel flat until they're not. You know, they trickle along and then off you go. And I think that's likely to catch people by surprise. You know, AI is not ready for prime time yet. Well, until it is.
And if you haven't done the hard work to figure out where does AI fit in my organization, how am I going to use it to amplify what I want to do, you're not going to capture that AI tailwind when it comes, and you'll be left behind. So that to me is the way that people are underestimating what's coming. And I encourage everybody, I mean, that's why I wrote the book, to arm people with the information they need, to understand what's coming, how quickly, and how do you recognize what's different about this moment?
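Steve's point that "exponentials feel flat until they're not" can be made concrete with a quick sketch. The growth rates below are purely illustrative, not a forecast of any real AI capability:

```python
# Illustrative only: why exponential change "feels linear" at first.
# Compare a capability that grows by a fixed increment each year
# with one that doubles each year, starting from the same point.
linear = [1 + 0.5 * year for year in range(11)]   # +0.5 per year
exponential = [2 ** year for year in range(11)]    # doubles every year

for year in (1, 3, 10):
    print(f"year {year}: linear={linear[year]}, exponential={exponential[year]}")
# After 1 year the two curves look similar (1.5 vs 2);
# after 10 years the exponential is over 170x the linear one (1024 vs 6).
```

The early years are where the "not ready for prime time" intuition forms; the later years are where the tailwind Steve describes either catches you or passes you by.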
SPEAKER_00:That is really helpful. And to all our listeners and viewers, you can find the links to the books in the description. But I'm thinking, Steve, what about the dangers? What about the fact that AI can replace humans? What about the fact that we are going to be surveilled to the point where every word and every step might be turned against us? What about the fact that we might miss that turning point where our reality will turn into a dark place?
SPEAKER_02:Yeah, and that's why we all need to be futurists right now and ask those two questions. What's the future we want to build? And importantly, what is the future we want to avoid? There are things you can automate and things you should automate. And they're not the same group of things. And we need to have a conversation as a society about where we want AI and where we don't want AI. You know, personally, I don't want to go to Broadway in New York and watch a bunch of robots leap about on stage. That makes absolutely no sense. There are lots of places where we don't want robots and AIs. There are lots of places where it makes absolute sense, but we need to have a conversation as a society about how we're going to embrace AI and where. And as leaders, leaders now have a much broader responsibility. They're not just responsible for the stewardship of their company and their fiduciary responsibility to their shareholders. They also have a responsibility to society at large. They have to think, you know, fiduciary responsibility to shareholders is to generate long-term business value. Short-term thinking would be: let's use AI to automate everything we can, cut our costs, be competitive, and then what happens then? If everybody does that, and no one has any workers anymore, or very few workers, then suddenly companies have no customers because inconveniently consumers and workers are the same people. So we have to think more broadly, more holistically as a society, system-level thinking. And leaders have to think beyond the walls of their company as to how do they want their brand, how do they want their company, how are they going to show up in the world to use AI to amplify human capabilities and not to automate them away. So the thing I worry about most is that leaders will cling on to 20th century thinking, which to me is cost cutting, productivity, yay. That's the old playbook.
The new playbook is how do I use AI to amplify my people and increase the reach and the impact of my company, not incrementally, like, hey, let's grow 15 or 25% this year. How do you 10x, 100x, 1000x the impact you have? Because you're moving away from the engine of the company being people to the engine of the company being AI, overseen by people. If you do that, you can move from a scarcity mindset, which is where we're all at today, right? We all have limited resources, limited budgets, a certain number of people, right? And we want to achieve as much as we can with it. That's the role of leadership management, to optimize the use and deployment of those resources. Moving from that, if you have an AI-first company, now you can start deploying the resources you have with an abundance mindset because you can deploy massive amounts of intelligence on demand, supported by, yes, the limited human capital you have. And therefore you can start to shift to a different business model. Historically, our revenues essentially map to hours worked, right? Whether, you know, you're billing by the hour or you're in a business where you're limited by how much throughput your humans have, hours in the day, right? It's revenue equals hours worked. In an AI-first world you can move to revenue being based on the outcomes you deliver. Because you live in this abundant world and you're putting AI as the engine of the business. Moving to that AI-first thinking and getting out of this incremental approach is what is going to differentiate the companies that win and thrive and have huge impact for humanity in the world and those who become utterly, utterly irrelevant.
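The shift Steve describes, from "revenue equals hours worked" to revenue based on outcomes, can be sketched with a toy comparison. All figures here are hypothetical, invented purely to show the structural difference between the two models:

```python
# Hypothetical figures: hours-based vs outcome-based revenue.
# Hours-based: revenue is capped by human throughput (billable hours).
hourly_rate = 150.0          # assumed $/hour
billable_hours = 1_600       # roughly one person-year of billable time
revenue_hours = hourly_rate * billable_hours

# Outcome-based: AI handles delivery, so volume is no longer tied
# to human hours; price attaches to the outcome instead.
fee_per_outcome = 2_000.0    # assumed price per delivered outcome
outcomes_delivered = 500     # assumed AI-scaled delivery volume
revenue_outcomes = fee_per_outcome * outcomes_delivered

print(f"hours-based:   ${revenue_hours:,.0f}")
print(f"outcome-based: ${revenue_outcomes:,.0f}")
```

The point is not the specific numbers but the cap: in the first model revenue cannot grow without adding hours, while in the second it scales with outcomes delivered.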
SPEAKER_00:I couldn't agree more. And thank you so much for this detailed answer, because it is so important for all the leaders and everybody who is using AI, developing AI, and creating the future of business to think about all those questions you just mentioned. That's something we need to discuss together. And I love your take on the fact that we all must be futurists and we all need to look into the future in order to understand what we want to create and what we want to avoid.
SPEAKER_02:It's hard to put that extra responsibility on leaders' shoulders, but I'm sorry, that's what the time calls for. And we all need to step up and recognize that we have this broader leadership responsibility that goes beyond the walls of our companies. And we have to think about leadership in a different way. So, thank you for bringing up the question and giving me the opportunity to talk about it. I don't think leaders have yet realized that they are shouldering a much bigger burden than perhaps they realized.
SPEAKER_00:And it's not by choice actually, it's just the name of the new game. And we need to adapt and take our responsibility. So I totally agree. From your vantage point, where are leaders most at risk of misreading the future, overestimating some developments while underestimating others? And which blind spots concern you most right now?
SPEAKER_02:Yeah, we sort of talked about that a little bit already. I think misreading the future would be that exponential thing, not understanding that exponential curves are coming, and we're on exponentials, and you always feel like you're just before the knee of the exponential curve, and so suddenly things are going to take off. If you haven't already done the hard work, or you're not already in the process of doing the hard work, of figuring out what should my company look like in the era of AI, or where do I put AI in workflows to complement the human talent that I have, then you miss out on the moment where essentially the dial on AI's capabilities keeps getting turned up. And you know, we're heading towards human-level intelligence, AGI. Elon Musk will tell you it's probably the end of this year, maybe. Not sure that's true. Others, a bit more pragmatic perhaps, will say it's within five years or 10 years. But whenever it comes, whether it's this year or 10 years from now, if you haven't done the hard work to figure out where in my business does AI fit, you miss the moment where you essentially turn the dial up to 11. And that's a Spinal Tap reference for those of you who are old enough to remember. And you get the full benefit, you get the tailwind effect of AI on your business and propel yourself into the future. That's the biggest thing I think people haven't figured out yet. You know, doing trials with Copilot is not an AI strategy. Sorry. If that's your level of engagement with AI, you've got to pull up your bridges and get involved, because that is not AI. That is old, lame AI, and Microsoft is phoning it in. I'm sorry, sorry, Microsoft, but anybody who is under the misconception that by saying to Microsoft, we'll pay you a little extra each month so that all of our employees have access to Copilot, you're leaving 99% of the value of AI on the table.
So you have to think about things in a different way. The other thing I would say, blind spots that concern me right now, I think was your phrasing. I don't think we're thinking enough about responsible deployment of AI, making sure it's safe, making sure it's ethical, making sure that we do the best we can to strip out the bias that AI has. AI models learn based on human-generated data, and buried in that data are human patterns, and humans have biases. So AIs will reflect the bias of humans. And I think the same way that we want our children to grow up to be better than us, we should want our AIs to be better than us and set that standard. So making sure that we do everything we can when we train AI models to reduce or eliminate bias, I think that's an important part of it. But just how do we deploy responsibly? How do we make sure that we maintain human connection, that we keep human judgment in the loop and maintain human oversight, that we are using AI to amplify humanity and not diminish or sideline it? These are the types of decisions that we have to make when we talk about responsible deployment and making sure we build governance systems and so on. So those are the things I'm most concerned about.
SPEAKER_00:I'm so grateful that you mention all those things because they are truly crucial, and there is no way forward to end up in a better place if we are not going to address all of them. Steve, as intelligence becomes embedded across systems, products, and decision flows, what new leadership risks emerge that didn't exist in previous technology waves? And where are organizations and individuals least prepared? We already touched a little bit on this, but let's dive a little bit deeper.
SPEAKER_02:Yes, let's come at it from a slightly different angle. What every leader needs to recognize is that they are now going to be leading and presiding over a blended workforce, a blend of human workers, yes, still human workers, digital workers in the form of agents, and robot workers, if you have a physical component to the work that you do. So now you have this blend of three different pieces of your organization. What it means is you need to think differently about workforce planning. It means you need to ensure that your CIO and your CHRO are best buds because they now preside over this blended workforce. Actually, I wrote about one company in the book. They were the first company to recognize this, and they have had those two positions come together in one person. So there's one person now that oversees the IT department and HR, because they are becoming part of the same organization, same workforce. So you need to expand the view of what management is, because you're now no longer just managing people, you are managing people and you're managing a digital workforce, you're managing agents. And that means you now need to expand management to also mean governance and policies and guidelines within which these AIs are going to operate. So everyone needs to get ahead of this and recognize this is coming, and this is what leadership and management is all about this year and next year. So thinking about systems design, how do you train your people to thrive in this new world of a blended workforce? What are the right incentives to encourage that? What are the guardrails you need to put in place? How do you have enough oversight of your digital employees in the same way that you have oversight of your human employees? The way you achieve that is different. So thinking through that is really important.
You know, so you're now architecting behaviors using governance, and you're looking at the whole system of how humans, agents, and, a few years from now, robots will all work together in a team. You have to design that and oversee that. That, I think, is the big change, and it presents all kinds of new risks that weren't there in the past, because leaders are going to be making this up as they go along. We've never had to do this in human history. When you were using a tool, a hammer, you didn't have to figure out how you were going to govern and manage that hammer, because it wasn't intelligent. We have to rethink AI as a digital component of the workforce, and that means you have to manage all the risks that come with it, the same way you have to worry about managing the risks of having humans, because we're all flawed and we make mistakes. How do you mitigate all of that? Now you have to do the same thing for agents, and you do it in a different way.
SPEAKER_00:Incredible times. When I think about how many things have changed in the last two to five years, it's absolutely unbelievable. What we thought was purely science fiction is today all around us, and it's only speeding up. But let's take this one step further, looking at innovation through a futurist lens. Where do you see the greatest opportunity being created today? Not just by the new technologies themselves, but, as you mentioned, by merging certain profiles and competencies into one. I'm thinking about those leaders who know how to align people, systems, and long-term vision and intent.
SPEAKER_02:Yeah, so there's a fun little story about Jensen Huang I like to tell in my keynotes. I talk about a question Jensen was asked back in late 2024. He was asked about NVIDIA and his plans for growing the workforce. And he said: we've got about 30,000 employees now (this was back then), and over time I plan to grow that to about 50,000, supported by hundreds of millions of agents across every group. That was his answer. So he's thinking about growing the company, and the scale and impact of the company, in a very different way. He's not saying: I've got 30,000 employees today, maybe I can get it down to 20,000 by automating away 10,000 of them with AI. That is the old thinking. He's thinking about how to amplify the people he has by pairing them with technology. What does this signal to everybody else? Because he's right; that's the right approach, if you have mastered this idea that you have a blended workforce and that components of it are digital. What this means is: go big or go home. The era of incrementalism is over. Going for just 10, 15, or 20% growth is not going to win it, because markets are now going to be dominated by the bold leaders, the bold companies who use AI to deliver extreme value at low cost, expanding their impact and reach 10x, 100x, 1,000x by using AI as the engine. So what I'm essentially saying is: make Jevons paradox work for you. William Stanley Jevons, back in the 1860s, made an observation about coal prices, and people talk about Jevons paradox a lot in the AI world now. That's the opportunity for every leader now: figure out how to use AI to reduce the cost of delivering goods and services to the point where demand explodes and, overall, you make more money. That's what Jevons paradox is about.
It's that as pricing comes down, demand goes up, and overall, net-net, when you multiply those together, you're creating more value over time. That's what happened with coal, and it has happened in lots of other places. So to me, looking through my futurist lens, to your point, that's the big opportunity: make sure that you go wildly big in your ambition. This is a time for leaders to, as I said, go big or go home, to really unleash huge ambition. And that means you need leaders with vision, not managers who are pencil pushers and good at sticking to their budget. You need those people too, but you've got to have the visionary leadership to be able to say: who could we be three years from now if we truly unleash AI's potential in every way possible?
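Steve's Jevons-paradox argument can be sketched with toy numbers (all figures are hypothetical, not from the episode): when demand is elastic, in other words when it grows faster than price falls, cutting the price increases total revenue rather than shrinking it.

```python
# Toy illustration of Jevons paradox: with elastic demand,
# a lower unit price yields MORE total revenue, not less.
# All numbers are hypothetical, chosen only to show the mechanism.

def revenue(price, base_price, base_demand, elasticity):
    """Constant-elasticity demand: demand scales as (base_price/price)**elasticity."""
    demand = base_demand * (base_price / price) ** elasticity
    return price * demand

# At a $10 unit price we sell 1,000 units. Elasticity of 2 means
# halving the price quadruples demand.
before = revenue(10.0, 10.0, 1_000, 2.0)  # 10,000.0 in revenue
after = revenue(5.0, 10.0, 1_000, 2.0)    # price halved, demand x4 -> 20,000.0

print(before)  # 10000.0
print(after)   # 20000.0: lower price, double the revenue
```

This is the "reduce cost until demand explodes" dynamic Steve describes; the real-world question for a leader is whether AI can push costs low enough to reach that elastic region of the market.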
SPEAKER_00:This is so spot on. Your questions are truly visionary, and they're also deeply connected to the action plans leaders have on their tables. And it's yet another reminder that we need to upgrade our mindset, the way we run business, and our overall thinking about the future. Drawing from your work with global organizations navigating high-stakes transformation, can you share an example where leaders believed they were future-ready, but a deeper structural, cultural, or leadership gap was exposed once AI or advanced automation entered the system?
SPEAKER_02:I think it's probably still too early for those stories; we're still early in the AI transformation. But like I said before, future-ready doesn't mean you've got Copilot. Future-ready means a wholesale rethinking of how you go to market and how you create value. You can't incrementally improve your way there. You have to go back to the drawing board and ask: how do we want to show up in the world over the next five years? Your history is almost irrelevant, because there's a new game being played now. And if you're not thinking that way, you're at high risk of someone coming and eating your lunch. We can talk about AI-first versus AI-native companies, but you have to be on the lookout for somebody who is wielding AI the way it can be used, and wielding it against you. You're going to see these scrappy AI natives come and displace companies that have been wildly successful for 20, 30, 50 years. That's the risk. So there's a lot of change you have to go through. And as you lead your organization through this wholesale rethinking of how you go to market, how you create value, what's possible, you need to bring people along with you. Visionaries don't get very far unless they have the communication ability to inspire people to follow them. There's a leadership thinker I met many years ago, a lovely man, who says: leaders need only one thing, followers. That's what defines you as a leader. It's brilliant and simple, but you have to inspire people to follow you. So it's all about communication: communicating, over-communicating, to your workforce, and including them in the conversation about the change. I had a client who was early to deploy agents. They built AI agents and rolled them out, and they told me they'd gotten enormous pushback from employees.
Employees were like: what the hell is this? I didn't ask for this. I don't want AI in my life. If you force me to use this, I will quit. And people did quit. The client I was talking to was surprised by this. And I just asked them: did you include your workers in the design process? Did you consult them? Did you involve them from the beginning? Well, no, we just built it and then rolled it out. And I shook my head and said: well, that's why. There are two reasons you need to include people up front. The first reason is that you're trying to build supporters, not saboteurs. If you try to foist stuff on people, they will push back in overt ways and covert ways, and you will get saboteurs who will do everything they can to make sure your projects fail. So you have to include people up front, build their trust, and then talk with them about the WIIFM and the WIIFU. I talk about this in the book. WIIFM most people know: what's in it for me? You have to explain to somebody, and by co-designing with them you can help them see, why their job will be better, why their life will be better with AI helping them. You want to have that conversation: what's in it for me? As a leader, you also want to be able to talk about the WIIFU: what's in it for us? Look at your core humanistic purpose as a company. Why do you exist? Then figure out how AI will allow you to amplify your ability to deliver against that core mission, and communicate, communicate, communicate that to everybody in your organization. Sure, they're there for a paycheck, but hopefully they're also there because they like something about what your company does, the value it brings to the world. Help them understand the connection between embracing AI and accelerating your ability to deliver against that mission.
So that's the first reason to include people up front in the conversation: to get them on board and excited, so that you get supporters, not saboteurs. The second reason is that the people who are doing the work actually understand how it gets done. Leaders typically don't; they have an idea of it. Managers hopefully mostly understand what's going on. But time and again, when I've worked with clients, the way things are written down in the process manuals is not the way things actually get done. People find shortcuts; they find better ways to do things. And if you design an AI solution to support the way you think workflows happen, it won't work, because that's not the way work is actually done. So that's the other reason you want to include workers up front: they will help you design a solution that actually helps, actually lands, and actually works. You want to design for the world that is, or will be, not the world the leaders believed it to be. So that's the major lesson I've taken from working with my clients: over-communicate and include people up front, so that they take ownership, you build trust, and you actually increase your chances of a successful AI deployment.
SPEAKER_00:Truly great point. While I'm listening to you, I feel such deep joy that you found me in this big world to discuss all these important topics, because they are truly timely, and this conversation is going to spread globally as well. What an amazing conversation. Steve, as we move toward more autonomous, self-improving systems, how should leaders think about responsibility, accountability, and governance when outcomes can no longer be fully traced back to a single human decision?
SPEAKER_02:Yeah. Today, when things go wrong, we do these post-mortems and go on the witch hunt to figure out who screwed up. And you do it with the view that you're not trying to punish somebody; you're trying to figure out what went wrong so you can mitigate it and it doesn't happen again. It's part of a continuous improvement process, and that's the way we've done it for a long time. I think as we move into this world where you have a blended workforce and AI is deeply embedded in the way you get things done, we need to move beyond asking "who decided?", who made the decision that led to this problem, and up-level it, because we're now looking at things more broadly: who designed the system? Who's overseeing the system? And more practically, when we build agents, we need to make sure they have audit trails, so we can see the decisions they made and why they made them. Do you have performance monitoring in place? This is all chapter five of my book. Do you have escalation paths? If something goes wrong, how does it get escalated to a human? How do you deal with an issue when you spot one with an agent? Are there shutdown mechanisms in place? If things really go sideways, how do you hit the button and stop things, so you don't have an agent giving away 80% discounts to all your customers, for example? So governance in that world moves from a set of rules to continuous oversight, because you have self-improving systems. These are going to get smarter, better, and more capable over time. Part of the design of agents is that they have memory and can use self-reflection to improve over time. So, innately, agents have the ability to self-improve. That means you have a moving target. You can't just have a fixed set of rules; you have to have continuous oversight.
Because what you're looking for is to detect drift in the system, where it's not quite doing things the way you wanted them done, so that you can intervene as needed and mitigate that risk. It's a continuous cycle of oversight and system-level thinking. What that means is that in the AI era, governance is a strategy, not a compliance mechanism.
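The mechanisms Steve lists, audit trails, escalation paths, and shutdown switches, can be sketched as a thin policy wrapper around an agent's decisions. This is a minimal illustration only; the class, field, and threshold names are hypothetical and not taken from the book.

```python
# Minimal sketch of agent guardrails: audit trail, escalation path, kill switch.
# All names and thresholds are illustrative, not a real framework.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernedAgent:
    name: str
    max_discount: float = 0.20          # hard policy limit: never discount above 20%
    audit_log: list = field(default_factory=list)
    shut_down: bool = False             # kill switch for when things go sideways

    def propose_discount(self, customer: str, discount: float) -> str:
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "customer": customer, "discount": discount}
        if self.shut_down:
            entry["outcome"] = "blocked: agent shut down"
        elif discount > self.max_discount:
            # Escalation path: out-of-policy decisions go to a human, never auto-applied.
            entry["outcome"] = "escalated to human review"
        else:
            entry["outcome"] = "applied"
        self.audit_log.append(entry)    # audit trail: every decision is traceable
        return entry["outcome"]

agent = GovernedAgent("pricing-agent")
print(agent.propose_discount("acme", 0.10))  # applied
print(agent.propose_discount("acme", 0.80))  # escalated to human review
agent.shut_down = True                       # hit the button
print(agent.propose_discount("acme", 0.10))  # blocked: agent shut down
```

The point of the sketch is the shape, not the specifics: decisions are logged with their reasons, anything outside policy routes to a human, and a shutdown flag stops the 80%-discount scenario Steve warns about. Continuous oversight would then mean monitoring that audit log for drift.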
SPEAKER_00:This is brilliant. We talk a lot about governance, but so many conversations still echo that old paradigm, and this is exactly the turning point where it needs to shift. From what you've seen, what differentiates leaders who turn this moment into a durable advantage from those who achieve short-term gains but create long-term fragility?
SPEAKER_02:I mean, the first one is: don't get stuck in that 20th-century cost-cutting mindset of incremental ambition. Focus on value creation, amplification, 10x-ing your impact. And if you really want a thought exercise (this is what I tell leaders when I consult with them behind closed doors): imagine an AI-native competitor out there. Or imagine you left this company, quit tomorrow, and set up a well-funded competitor that was going to use AI from the ground up to build an organization that could come and eat your lunch in the marketplace. How would they do that? What would they build? How would they go to market? How would they compete against you? What are your vulnerabilities? Use that to figure out what you're going to do in response. You probably can't be an AI-native company, because there's momentum in your systems; your company is a built environment. What you can do is pivot to an AI-first mindset and retool and redesign the company so that you can compete against an AI-native company. That's what I counsel people when the doors are closed and I'm being brutally honest with them. You have to imagine the worst possible scenario: how could someone else use AI to come at you and take you down? Then be ready for that. What's your judo move, where you use their strength against them? How would you respond by becoming stronger as an AI-first company? Here's what I mean by that. One of the adages in the retail industry when Amazon came at them was: we're going to focus on WACD, what Amazon can't do. Amazon was a digital, internet-native company, and then you had all these retailers with all this brick-and-mortar and distribution networks and so on.
They had to learn to become digital-first companies that embraced the internet, had fulfillment, could do multi-channel and omni-channel, so you could buy online and return in a store, all those things. They adapted, and they leveraged the strengths they have that Amazon doesn't, which is: I have a showroom where people can come and try things on, and there are people who will advise them on how good they look; I get to see all this extra choice, try the products out, or see the TVs and how nice the screens look. There's something about that physical presence. So that's what leaders need to think about now: think about how an AI-native company could come at you, and how you become an AI-first company to respond and compete, leveraging your existing assets but also using AI as the core engine of your business.
SPEAKER_00:This is such great advice. I see reverse engineering in action, and that's amazing. I love this approach because it's truly efficient. Steve, you mentioned quite a few things that need to be transformed and changed in order to start winning in this new game, on this completely new playground. But what is one assumption about leadership, innovation, or the future that leaders must unlearn to avoid becoming obsolete or misaligned in the decade ahead?
SPEAKER_02:I think it's that 20th-century mindset; we've kind of already covered it. It's not getting caught thinking: as a company, we've been winning in the marketplace by playing the old game, and we've won 30 times in a row, so let's just keep playing the same game. That's not going to keep you winning for the next 30 years. You need to do something very different, and I think that's the biggest risk: people feel the momentum of success, and they don't recognize that it could all end if they don't keep up with what's changing. There are plenty of examples out there. Probably the most famous one is Blockbuster. Don't be Blockbuster. Blockbuster had the opportunity to buy Netflix not once but twice, I'm told, but they thought: no, it's all about videotapes and DVDs. And guess what happened? So just because you're wildly successful now and your business model is working for you now, great, good for you. It doesn't guarantee that continues. So have a long, hard look in the mirror. Game out what could happen. If I can help, I'd be delighted to; that's what I do for companies. But don't be complacent and don't rest on your laurels. This is a time of extreme change, and nobody, nobody is excluded from that. We all have to show up differently.
SPEAKER_00:That's so valuable, just all the golden nuggets from your experience. Usually I wrap up these conversations by asking for one piece of advice, but I feel you've already shared it. When you think about your experience specifically at Intel and Google DeepMind, would you like to add something as advice for leaders, based on everything you see coming: the trends, the risks, and the opportunities? What is your single most important piece of advice to leaders preparing for the coming years, in how they think, choose, and lead under this increasing uncertainty? Because there is no certainty in sight, and it's only going to escalate.
SPEAKER_02:Yeah. The first thing I'd say is: embrace possibility thinking, where possibility thinking is defined as asking the question "how could we...?" It's not incremental. It's asking: how could we 10x? It's asking the big, bold questions. Secondly, acknowledge that you don't know everything and that you're making this up as you go along, because we all are now. This is a brave new world. No one really knows how to do AI transformation; we're figuring it out as we go. There's a lot to figure out, and to get through it you have to have a clear vision of the world you're trying to build, and then you figure out the rest along the way. You have to have that clear vision so you can lead others to that place. And that vision has to be bold, because if it's not, you'll get steamrolled by somebody else with greater ambition than you. It needs to be a bold vision, and it needs to be an inspiring vision, so people will follow you. Leaders need followers. And the way you get there is to embrace AI as your engine of innovation. So you put all of that together. We all need to show up with a different identity, a different sense of self as leaders. For the longest time, you got to be a leader because you'd been around the block: you know where the bodies are buried, you have a lot of experience and a lot of knowledge. You are a sage leader. People come to you with questions, and you use your experience to answer those questions and guide things forward. We're now moving into an era, as you pointed out, of great uncertainty, where we can no longer be these sages, because the knowledge we have from the past doesn't necessarily apply to the future.
So we have to shift our identities as leaders from sages to what I call philosopher-explorers. This is something I talk about in the book, maybe chapter nine: showing up differently as leaders, taking a philosophical, thoughtful approach to what we're trying to achieve. We bring a bold vision, and then we admit to the people following us: hey, I don't have all the answers, but we're going to figure them all out together, and we're going to do that using possibility thinking. How could we? That's a different way for many leaders to show up in the world, and it takes some humility to say: I don't know the answers, but I'm in a position to lead us all to figure them out together, to be inclusive, and to lead everybody into what we hope will be an amazing future. So that's the best advice I can leave people with: show up differently and lead people to do something amazing. This is a once-in-a-lifetime, probably once-in-500-or-a-thousand-years opportunity to have a technology that can shift the course of human history. And you're on the front lines, you have a front-row seat, and you get to participate. That is the most exciting thing I can think of.
SPEAKER_00:Beautiful, Steve. I so admire your clear and bold vision. You brought so much inspiration into this conversation. Thank you so much for sharing your insights and your wisdom with us today. I truly appreciate it. It's been so much fun.
SPEAKER_02:This was fun, Emi. Thank you. The questions really got me thinking; sometimes things came out of my mouth I didn't know were in my brain. It was a really fun conversation, and I enjoyed it immensely. Thanks.
SPEAKER_00:Thank you so much. Thank you for joining us on Digital Transformation & AI for Humans. I am Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel, and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset, and leading with empathy and insight. Subscribe and stay tuned for more episodes, where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you and you are a visionary leader, business owner, or investor ready to shape what's next, consider joining the AI Game Changers Club. You will find more information in the description. Until next time, keep nurturing your mind, fostering your connections, and leading with heart.