Digital Transformation & AI for Humans
Welcome to 'Digital Transformation & AI for Humans' with Emi.
In this podcast, we delve into how technology intersects with leadership, innovation, and most importantly, the human spirit.
Each episode features visionary leaders from different countries who understand that at the heart of success is the human touch - nurturing a winning mindset, fostering emotional intelligence, soft skills, and building resilient teams.
Subscribe and stay tuned for more episodes.
Visit https://digitaltransformation4humans.com/ for more information.
If you’re a leader, business owner or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI - I’d love to invite you to learn more about AI Game Changers - a global elite hub for visionary trailblazers and changemakers shaping the future: http://aigamechangers.io/
Digital Transformation & AI for Humans
S1|Ep92 AI, Governance, and Systemic Risk - Leadership and Value Creation When Certainty Is Gone
Let's talk about Leadership and Value Creation When Certainty Is Gone, and explore AI, Governance, and Systemic Risk together with my amazing guest Michael Herkommer from Zurich, Switzerland.
Michael is a co-founder, CIO and CAIO at Riskovate, a strategist, executive advisor, and systems thinker with decades of experience operating at the intersection of technology, leadership, and institutional complexity.
Michael has worked across strategy, digital transformation, and innovation, including working with and advising global organizations such as IBM, SAP, Accenture, and public sector institutions at European level. He has operated both as a builder and a critical advisor, supporting organizations through large-scale transformation, governance challenges, and moments of structural pressure.
What distinguishes Michael’s work is his systems-level perspective. He doesn’t look at AI, automation, or digital transformation in isolation, but through the lens of decision-making, accountability, governance, and long-term value creation.
This conversation is not about tools or hype. It’s a boardroom-level dialogue about leadership, risk, and responsibility in the age of AI.
Michael is a part of the Diamond Executive Circle of the AI Game Changers Club - an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy - and serves as its Chapter Chair in Zurich, Switzerland.
🔑 Key topics discussed:
- Systemic risk in the AI era - where uncertainty compounds across technology, regulation, and geopolitics
- AI agents and autonomy - how risk shifts when systems move from tools to decision-makers
- Governance beyond compliance
- Regulatory fragmentation
- Value creation vs legitimacy erosion - building AI-driven growth without losing trust
- Innovation failure modes - when acceleration quietly increases systemic risk
- The human layer - leadership judgment, culture, and ethics as the true leverage point
- Unlearning outdated beliefs - what leaders must let go of to navigate permanent uncertainty
🔗 https://www.linkedin.com/in/herkommer/
About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories across corporations and individuals, from driving digital growth, managing resources and leading teams in large companies to empowering leaders to unlock their inner power and succeed in this era of transformation.
AI GAME CHANGERS CLUB: http://aigamechangers.io/
Apply to become a member: http://aigamechangers.club/
Podcast: https://dt4h.io/podcast
📚 AI Leadership Compass: Unlocking Business Growth & Innovation https://www.amazon.com/dp/B0DNBJ92RP
📆 Book a free Strategy Call with Emi
🔗 Connect with Emi on LinkedIn
🌏 https://digitaltransformation4humans.com/
📧 Transformation for Leaders
Hello and welcome to Digital Transformation and AI for Humans with your host Emi. In this podcast, we delve into how technology intersects with leadership, innovation, and most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch: nurturing a winning mindset, fostering emotional intelligence, and building resilient teams. I invite you to talk about leadership and value creation when certainty is gone. Let's explore AI, governance, and systemic risk together with my amazing guest Michael Herkommer from Zurich, Switzerland. Michael is a co-founder, CIO, and Chief Artificial Intelligence Officer at Riskovate, a strategist, executive advisor, and systems thinker with decades of experience operating at the intersection of technology, leadership, and institutional complexity. Michael has worked across strategy, digital transformation, and innovation, including working with and advising global organizations such as IBM, SAP, Accenture, and public sector institutions at the European level. He has operated both as a builder and a critical advisor, supporting organizations through large-scale transformation, governance challenges, and moments of structural pressure. What distinguishes Michael's work is his systems-level perspective. He doesn't look at AI, automation, or digital transformation in isolation, but through the lens of decision-making, accountability, governance, and long-term value creation. His background spans industry, consulting, and cross-border environments, giving him a grounded understanding of how strategy actually plays out when certainty is gone. This conversation is not about tools or hype. It is a boardroom-level dialogue about leadership, risk, and responsibility in the age of AI.
I'm honored to have Michael as a part of the Diamond Executive Circle of the AI Game Changers Club, an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy. Welcome, Michael. I'm so happy to have you here in this studio today.
SPEAKER_00:Thank you, Emi. That was quite a wonderful introduction. I was sitting here thinking, who is this guy? Well, thank you so much for having me here.
SPEAKER_01:You're very welcome. Let's start the conversation and transform not just our technologies, but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description. And don't forget to subscribe for more powerful episodes. If you are a leader, business owner, or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI, I would love to invite you to learn more about AI Game Changers, a global elite club for visionary trailblazers and changemakers shaping the future. You can apply at aigamechangers.club. Michael, to start with, I would love to learn more about you: your journey, your passions. Could you please share all this with us?
SPEAKER_00:Yeah, it's a mixture of backgrounds that makes me the person I am today. I spent my first years in the border city of Constance, between Germany and Switzerland, with a South German father and a Swedish mother, so I had a lot of different perspectives on life and on how things play out. My biggest passion, the goal I started with in life, was to become a professional chef and, of course, go to the Swiss gastronomic school. And my father, being German, said, "You're not going to be able to do that, but I will give you a challenge you can't succeed with." I still remember it. I went back to my room and said, "Yes, I'll take it." I was not very old. I got accepted to work during the summer months at a remote location in the Swiss mountains as a young apprentice, and I did that for four years, four summers. Then, when I finally could get into the school, my parents said, "No, being a chef is not a future thing, you should become an engineer." My father was an architect. So that's where it all started with passion. I don't really regret it. Instead, I cook every day, and I think what I do today in helping customers, clients, and peers is to bring that visionary thinking of being part of a system, like you are in a professional kitchen. Everything has a place, everybody needs to collaborate, and you need to communicate. And a kitchen is dangerous: you have sharp knives, hot pans, and pressure. I like to see that a lot of these things started in that part of my life. Adding to it, you need to be curious. My curiosity, combined with a little too much energy in my younger days, brought me into the mindset of asking, why not? Or, how does this actually work?
So, what makes me the person I am today, and what brings value, as I hope my clients see, is that I don't only take a technology perspective. I very quickly shift and look into how this would affect the leadership position, how it would actually help our culture, or where we can find issues in it. Playing around with all these variables is very rewarding, and it's very fun. And I'll share one more thing from the tech side. When I grew up - as you can see, I'm a little bit gray-haired - you couldn't buy a computer; they didn't exist, it was early days. So I built my own, and of course it was not very easy. I had to make my own circuit boards, and I actually made them using my mother's sunlamp. I started doing these things and really experimenting. And I'm very happy that I started that way, because it helps to understand how things fit together, even if technology today is completely different. So that's a little bit about me. And I'm a true European: I've been to most of the cities and lived in many of them, with a lot of different languages and cultures. I'm happy to see that Europe is such a diverse group of people who still share a common goal of humanity, helping and exploring each other. So it's really nice to be here with you, Emi, and taking part.
SPEAKER_01:Thank you so much for sharing your story, Michael. Such a fantastic experience. And I could feel that there are many things we share through our life journeys, so I can totally relate to some parts of it, and to your comparison with the kitchen, where you are creating something delicious and fantastic while there are so many risks and dangers to take into consideration. I absolutely love it. Michael, you've worked across industry and business at moments when systems were already under strain. Which experience or achievement in your career best prepared you for today's AI-driven uncertainty, and why?
SPEAKER_00:Yeah, that's kind of interesting, because I always went into the hot parts of things. When people said this can't be done, or this is a problem, my response was: interesting, I need to learn more about it. And it ended up with me trying to solve it, and solving it. That was the younger me: I was looking for a solution. As I matured, I realized there isn't always a solution; you need to look for a common goal instead. So what helps me today, and what molded me into really understanding this, is to quickly gather everybody and find out why we are doing this. Really lowering the expectations and asking: why are we here? Why do we go to this workplace, and why are we doing it? I worked with nonprofit organizations, where you can't say it's because of the money, and that taught me a lot about why people go to a workplace and do things: because they think they bring something valuable, because it feels good at heart. If you can find that - I call it a North Star - if you can find it as common ground, then everything else falls into place very quickly, because you can drive everybody towards a common goal. And that's where most of my work starts: finding that North Star. Once you have it, navigation becomes so much easier. Take the nonprofit organization: I still remember one that said, "We don't have money." And I said, okay, so the first thing on the list is to get money - now it's done. What's second on the list? And they couldn't answer, because they were so blocked on "we don't have money." Once we solved the money thing, which was quick and easy, they really had to struggle with what steps two and three were. So getting that common denominator, the North Star, is important when you start.
SPEAKER_01:I couldn't agree more. And that is an interesting story you just shared, because there are situations where an organization is so focused on only one goal that once that goal is achieved, there is no next step yet, and it takes time to develop a long-term strategy. So it's definitely a great reminder for everybody watching or listening to this interview that it is important to define your why, your North Star - that's exactly what I often refer to as well - and then to have a broader picture and perspective for your strategic moves. Looking at the latest developments - AI agents, large-scale automation, regulatory fragmentation, and geopolitical tension - what systemic risk do you believe leaders are still underestimating right now?
SPEAKER_00:Yeah, that's a good question. I was thinking about it when you mentioned it just now. I think the biggest challenge is that all leaders are educated in an old-school model, which means you work in silos and then you have reports. And that's too slow, because if you work in a silo and then report, someone else needs to look into all these reports and do something smart with them. It doesn't move things forward. So I think the biggest risk today is that if you don't address that system, AI is going to accelerate those silos even more. I had a conversation earlier with a large global company, and they confirmed exactly that. The silos are working; they have created something which is supposed to unify them, but the unification doesn't fit into the hierarchical system, so it doesn't get any power to continue. So even if you're adding technology like agents, or whatever the latest model might be, it's very hard to get it into companies and get the real value out of it. You might get it in a small silo, but you're not getting that overarching effect you were looking for.
SPEAKER_01:This is a very interesting point, and I have seen it through my own experience: this gap and this problem are just getting bigger and deeper, and fixing them requires time and definitely a new mindset from leaders. We need to approach this situation from many different perspectives at once in order to find the right solution and apply it to the reality of our businesses. Michael, as AI capabilities evolve faster than regulation, what does effective governance look like in practice today, beyond frameworks and compliance checklists?
SPEAKER_00:Ah, that's an even better question, because governance is something people are afraid of. If you talk to tech people, it's something that blocks both innovation and operational capabilities. I couldn't disagree more. The company I founded together with my co-founder - she's Chief Risk Officer - we actually got to know each other because of that particular issue: if you come from the tech side, you would not want to work with governance, and vice versa. We did it a bit differently. I said, this is going to go extremely fast, so we need to maintain extreme control over all the risks, and we can't do this twice a year; we need to do it almost daily. Because once we understand what's going on, we can make educated decisions on what to do and what not to do. And something else happens: you can work with what we defined as a swan model. I don't know if you've heard about it. Nassim Nicholas Taleb did this with black swans - something that might happen but you couldn't have foreseen. Then there are gray swans, and also green ones for the environment. And we created something we call pink swans: you can see it shimmering there, and if you look at it quickly enough, you might be able to turn it into an opportunity instead. So, going back to the question about governance: if you use it for guidance, as a directional indication, then you understand that this is probably a risk I can turn into an advantage, while this is a risk I should steer far away from. That helps if you take the metaphor of navigating, which I like: are you on open water? Do you have any risks where you're navigating? Are you going towards your North Star, or do you need to make a small deviation? That gives us a systemic understanding.
And I would like to explain what I mean when we talk about systems, because a lot of the time "system" means, for tech people, only an IT system. When I talk about a system, I mean everybody and everything involved in the machinery getting things done. It's about the people, the tools, and the purpose. If you can see that as a well-oiled machine, you can do whatever you want. That means being able to see where we're heading, and then using governance to understand: am I moving in the right direction, or in the wrong direction? These things are not binary. If it were binary, you would say yes, good or bad. But if you just measure the relationship, you can see: I'm moving towards my North Star, I'm making a deviation, but it's still in the right direction. And this is where I can see that governance is going to be a key factor for the ones who actually bring value into organizations faster and more stably. It starts slowly, the systemic shift begins, and then it just starts accelerating. AI is great at accelerating things, but remember, it also accelerates the bad things. That's why governance is needed.
SPEAKER_01:That's so true. Governance is needed, and everything is accelerating so incredibly fast that many leaders are not ready for this pace. Michael, in an environment of permanent uncertainty, how can leaders create real business value with AI without eroding trust, resilience, or long-term legitimacy?
SPEAKER_00:Well, that's a favorite of mine. I actually had the opportunity to meet the American Richard Chambers - he led the Institute of Internal Auditors and is a legend in looking at risk. I met him this spring, and he had just released a book on the permacrisis. It's kind of amazing: he's over 70 years old and still sharp as a knife. The discussions we had were along the lines of: you can't ask anymore, when is the crisis going to pass? When is it over? There's always going to be a new one. As soon as you realize there's going to be a constant shift of crises, one after another, that's the new normal. You shouldn't try to steer for calm water. You should just say: I love sailing, I'm going here, it's going to be rough, there will be storms, sometimes sunshine - that's also nice. You just need to take it all in, all 360 degrees of what's going on. Once you do that mentally, the whole thing shifts. When I talk with leaders about that, they realize: oh, we've been thinking about this absolutely wrong. We've been waiting for it to pass. If this is the new normal, we just need to look at indicators differently. And that's where I go back to my favorite word: observability. You need to be able to get signals on what's going on. If you can do that, you can learn how to interpret them. In the beginning you don't know why you are getting a signal, but never mind - just get the signals in there, and then you will learn. And this learning, and the feedback from learning, will make you feel that a crisis is not that bad, actually. You're just going to sense it and say: yeah, it's fine. This is the new normal.
SPEAKER_01:The new normal. This is exactly the hardest part for so many of us, because not everybody is used to being outside of their comfort zone, in the struggle, all the time. It requires many new qualities, and we also need to level up some of our other skills in order to succeed in this new environment. And when I think about innovation, there are so many cases where AI is applied to innovation, but not really as it could and should be. So, where do you see innovation heading today? And what distinguishes responsible, value-creating innovation from acceleration that quietly increases systemic risk?
SPEAKER_00:Yeah, I think innovation has extremely good potential today. But if you're an innovator, you should start by slowing down a little and using your whole skill set to look at the problem and really define it, rather than falling in love with the latest AI model or tool, pushing buttons, and being amazed at how fast it is or what it can do. From where I'm sitting, I can see a lot of innovators who are also stressed by the speed of things, and then they forget the true calling of being an innovator. Being an innovator doesn't mean making things 20% faster, more efficient, or better. That's something else. Being an innovator, in my book, means you're disrupting. You're really thinking: what is this problem? How can I change it? Why not? Why not do this? And once you do that, you will find there's a lot of support in AI tools that can quickly make or break your hypothesis. And that's another thing I love to talk about: when you start something, please define a hypothesis. The reason I love that word is that it sounds nice; it doesn't sound intimidating. It makes conversations easier, because if someone confronts you and says, "Is this decided? Is this a decision?", you can just say, "No, it's a hypothesis. We believe it, we think so, we don't know." And the power of saying "we don't know" opens up something completely different, because people lower their combative stance. "You don't know?" "No, we're going to figure it out. Either it will work or it will not. It doesn't really matter; if it doesn't work, we have learned something and we'll bring it to the next one. And anyhow, you are going to be part of a team."
And that last sentence, "you are going to be part of it", is where you bring in anybody who has a strong feeling: subject matter experts, or just someone who is very passionate about something. That's where you can find the true innovation, when you bring someone on board with that understanding. But then you need someone in the innovation space to open up these thoughts. That's why it's going to be so interesting to see what happens in innovation over the next couple of years.
SPEAKER_01:I'm curious myself, because there are so many incredible opportunities around innovation. And I love your definition of innovation, because I see that the terms often get mixed up: something that is just an upgrade or an optimization also gets called innovation, which it isn't exactly. We need to level up as well and find other ways of creating things and processes we didn't have before, and that requires a lot of creativity, knowledge, and experience. That's why AI can be of real help there, and I see it as a fantastic combo where humans and AI can come together and open up new doors and completely new opportunities. And we have fantastic examples of that as well. Can you think of a few examples where AI really opened up something completely different and enabled a breakthrough in innovation?
SPEAKER_00:I think the biggest one is the speed at which you can validate or invalidate your thoughts. Also, I'm a very visual kind of guy, so I like to see things visualized. Normally that would take a long time, but with AI you can get both models and visual imagery of things, which triggers more things in the human brain and prompts more questions: why, why not? So that's where I see the biggest advantage: the visualization and the sheer speed of validating or invalidating your hypothesis.
SPEAKER_01:Totally. Great example. As AI systems become more autonomous, where does the human layer - leadership, judgment, culture, ethics, and decision-making - become the greatest point of strength or failure? What do you think?
SPEAKER_00:That's a good one. I think the hardest thing to unlearn is that we're still thinking about AI as a system: because it's delivered through IT, it must be a machine. And if it's a machine, we have learned, since the machine was born, that it has true or false - it's binary. Now, for the first time, we are getting a machine which doesn't have a true or false; it's not deterministic. But we fall into that trap all the time when we talk about the machine. So if it's going to be autonomous, it will look like it's thinking and doing things, but we don't really know if it's true or false. I think we should stop looking for true and false and look instead at a more adaptive, relational layer, seeing that this might go in a certain direction. Since it's predictive, that makes much more sense. Let me take another example, from law. I was reading an article about the sovereignty of data in Europe. The article clearly says that an American company could have to send European data to the US government. Now, that is legally correct. If you look at the law in a very narrow way, you could say it's true; from a binary point of view, it's true. But if you start looking into what it would actually take to send that data, a very large area of complexity opens up. You need a lot of decisions, you need to go through different instances, you need to discuss things legally, commercially, diplomatically. And you can ask one easy question: if you're afraid of your data leaving Europe for the US, the first thing is, what kind of data do you have? Why would it be of any interest to the US government? When you start asking these questions, you realize: it could happen in a binary sense, but the risk of it happening almost vanishes.
So I think, rather than being scared by the binary thinking that it legally could happen, it's much easier to step back and ask why it would happen, and look at the different layers of it. That's where you get the human perspective: we're still relying on human judgment. Imagine we were using AI to look into law and pass judgment based on 50 years of historical sentencing data. That wouldn't make sense today. So we still need people to understand, look into the different cases, and make judgment calls. AI is always trained on historical data. This is where we really need to put the human together with AI a lot more, with the human behind the steering wheel, not the other way around.
SPEAKER_01:Exactly. You made me think as well about the problem of synthetic data flowing back into AI systems, which creates problems on a completely different level. So, what do you think about this situation, and how could we avoid it or optimize the process?
SPEAKER_00:I think with synthetic data, you're trying to fix something that, if you take a step back and start thinking as an innovator, you don't really need. The reason you're creating synthetic data is that the law says you can't use real data for training or testing. But I would say the right question is: what are you trying to achieve? Don't go for "I have a lot of data, I want to play with it and see what value I can create." You need to start from "I have a mission; my North Star says I want to do this, which helps customers." I was recently following a case from the Swedish authorities: a law firm wanted to go through thousands of old cases to look for patterns, train an AI on them, and use that for future reference. I think the report that came out was very balanced, because, exactly as I'm saying here, even if you're going to synthesize the data, you still need permission: the clients in those cases were never informed that their case would be used for AI training. That would run up against GDPR, and I think it should. If, on the other hand, the law firm said, "We want to serve our customers better and make our lawyers even more capable," then it would be easier, for every case, for the lawyer to enter into the AI system not the personal details but just the learning experience: how did we reach the judgment? What was our recommendation? What were the different parameters we were playing with? That means the law firm increases its knowledge base, based on its own knowledge, together with and enhanced by AI. In that case we are not using any synthetic data, because we are not mining actual user data; we are using the results from the different cases. So that's my take on it. And I've been involved in projects forever trying to do master data, synthesized data, all these things.
So far, I haven't seen anyone really succeed. Some are getting very close, but it's a moving target. And I don't think AI is going to solve it - but it might; I may be corrected on that one.
SPEAKER_01:This is an interesting point and an interesting experience. I'm also thinking about all the data we have already generated across the internet - data that AI also uses and learns from. At a certain point we are creating a vicious circle, and it will be interesting to see how we step out of it. The cases you mentioned, where we can depersonalize data and still use real-life examples to build a knowledge base - avoiding generated content while still raising the level of our systems - are something we should really move closer to. And hopefully that time is not far away. What is one belief leaders urgently need to unlearn in order to navigate this new reality? There are so many hidden pitfalls, and today we discussed so many exciting opportunities, but also many risks and limitations. So, what do you think is one thing we should unlearn in order to thrive in the coming years?
SPEAKER_00:I think we touched on it a bit, and the biggest one is to drop the thinking of it as an IT project. In fact, drop it being a project at all, where you have a start and a stop point and say, "I just need to do this as a project." This is something that fundamentally changes your whole organism. If you take the approach that the terrain is always going to be moving - and you can't control the terrain, it's always moving - that means you need a strong belief system: your North Star again. If you have that, it always stays, so you can control how you act, interact, and react depending on how the terrain changes. Whether there's rain, sunshine, thunderstorms, or whatever it is, you can have a process for managing it. You can have indicators saying it looks like a storm is brewing, or it looks like dry weather is coming. So you can deal with it in a different way. You can't do that with an IT-project mindset. The biggest thing to unlearn is to drop that and move to guiding instead of controlling. Since you accept that you can't control the terrain, you need to guide instead, asking: what are my early indicators, and how can I deal with them? You should think more like a captain. You can be the captain of a boat, or the captain of a soccer team: you're not really playing, but you're observing and helping your system and your players read different indicators and signals to be successful, whatever the terrain looks like. And that, I think, is one of the most important things for leadership.
SPEAKER_01:I'd like to dive a little deeper from here; I love this. So, for leaders navigating the business and innovation landscape right now, what is one grounded piece of advice you would offer as certainty continues to disappear?
SPEAKER_00:Slow down and observe your system. The power of slowing things down is underrated. As a regular worker in a team, it's very hard to slow things down, because you get all the pressure - you need things done yesterday. But as a leader, you have that luxury, and I think it's more than necessary now to exercise it: you can actually slow things down and allow the team to move on, because it doesn't really matter if they move on for a few days or weeks and end up not exactly in the perfect place. You will learn and observe so many things that will help them catch up quicker once you can see it. So, as a leader, my advice is to step away a little, find a good vantage point, observe, ask more open-minded questions, and try to normalize that failing fast is good. Maybe I don't like the term "fail fast", but failure is accepted. Create that sense of a safe environment to try things and fail. And as a leader, when you see that failure, bring it back to: what did we learn? What more can we do with these insights? This is something we know that our competitors don't. We invested in this failure; it is an investment that brings us forward. And the only way to see that clearly is to slow things down from a leadership perspective. That's at least how I see it.
SPEAKER_01:Brilliant. I couldn't agree more. Thank you so much, Michael, for being here today and sharing your experience and your insights. It's been a deep pleasure to have this conversation, because these times require new levels of leadership and new levels of innovation - exactly what we've been unpacking and unfolding in today's conversation. I truly appreciate you. Thank you.
SPEAKER_00:Thank you for having me, Emi. Always a pleasure.
SPEAKER_01:Thank you for joining us on Digital Transformation and AI for Humans. I am Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel, and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset, and leading with empathy and insight. Subscribe and stay tuned for more episodes where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you and you are a visionary leader, business owner, or investor ready to shape what's next, consider joining the AI Game Changers Club; you will find more information in the description. Until next time, keep nurturing your mind, fostering your connections, and leading with heart.