Digital Transformation & AI for Humans

S1|Ep65 Envisioning the Future of Humans in the AI Era: Will Humans Become the New Neanderthals?

Dr. Alex Heublein Season 1 Episode 65

In today’s episode, we are going to envision the future of humans in the AI era. Are we evolving, or being left behind? Will humans become the new Neanderthals? Let’s find out with my brilliant guest, Dr. Alex Heublein, from Atlanta, USA.

Alex is the Chief Innovation Officer at Netsurit, bringing extensive leadership experience from senior executive roles at Oracle, IBM, HP, Red Hat, and more. In addition to his corporate career, he serves as an Adjunct Professor of Business & IT at Clayton State University. As a human-centric visionary, Alex is deeply engaged in advancing how Artificial Intelligence can drive meaningful and sustainable business growth.

Netsurit has earned international recognition for its ability to stabilize, innovate, and optimize complex IT environments. With a strong belief that shared purpose and core values form the foundation of great partnerships, Netsurit is committed to Supporting the Dreams of the Doers — both within the organization and across its global client base.

🔑 Key topics we discuss:

  • AI is accelerating, but are humans evolving fast enough to keep up?
  • We are either empowering humans with AI or quietly replacing them - which future are you building?
  • Leading companies are redefining what only humans can do
  • Without reinvention, we risk becoming the Neanderthals of the digital age
  • The teams that will thrive aren’t the most technical - but the most adaptable and aligned
  • In a world run by algorithms, human intuition, empathy, and creativity are your greatest edge
  • Ethics and emotional intelligence aren’t soft skills - they are survival codes in the AI era
  • The future belongs to leaders bold enough to center humanity in the heart of innovation

🔗 Connect with Dr. Alex Heublein on LinkedIn: https://www.linkedin.com/in/alexheublein/
🔗 Learn more about Netsurit on LinkedIn: https://www.linkedin.com/company/netsurit/
🔗 Explore Netsurit: https://netsurit.com/

Support the show


About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories across corporations and individuals, from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.

📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation 🧭 The Definitive Guide for Leaders & Business Owners to Adapt & Thrive in the Age of AI & Digital Transformation: https://www.amazon.com/dp/B0DNBJ92RP

📆 Book a free Strategy Call with Emi

🔗 Connect with Emi Olausson Fourounjieva on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders

🔔 Subscribe and stay tuned for more episodes

Speaker 1:

Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we delve into how technology intersects with leadership, innovation and, most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch: nurturing a winning mindset, fostering emotional intelligence and building resilient teams. In today's episode, we are going to envision the future of humans in the AI era. Are we evolving or being left behind? Will humans become the new Neanderthals? Let's find out with my fantastic guest from Atlanta, USA, Dr. Alex Heublein.

Speaker 1:

Alex is the Chief Innovation Officer at Netsurit, bringing extensive leadership experience from senior executive roles at Oracle, IBM, HP, Red Hat and others. In addition to his corporate career, he is an adjunct professor of business and IT at Clayton State University. As a human-centric visionary, Alex is deeply engaged in advancing how artificial intelligence can drive meaningful and sustainable business growth. Netsurit has earned international recognition for its ability to stabilize, innovate and optimize complex IT environments. With a strong belief that shared purpose and core values form the foundation of great partnerships, Netsurit is committed to making a real difference, supporting the dreams of the doers, both within the organization and across its global client base. I came across a powerful quote on Netsurit's page and I couldn't resist sharing it with you: "Surround yourself with the dreamers and the doers, the believers and thinkers, but most of all, surround yourself with those who see the greatness within you, even when you don't see it yourself."

Speaker 1:

I love this phrase.

Speaker 2:

I do too. It's a great one.

Speaker 1:

I totally agree. Welcome, Alex, great to have you here. How are you?

Speaker 2:

Good, good, thanks for having me back. This is going to be fun, I think.

Speaker 1:

Definitely. Let's start the conversation and transform not just our technologies but our ways of thinking and living. Interested in connecting or collaborating? You'll find more information in the description below. Subscribe and stay tuned for more episodes. I would also love to invite you to get your copy of AI Leadership Compass: Unlocking Business Growth and Innovation, the definitive guide for leaders and business owners to adapt and thrive in the age of AI and digital transformation. You'll find the Amazon link in the description below. Alex, to start with, could you share with us your story, your journey, your life path?

Speaker 2:

Yeah, absolutely. So I've had kind of an interesting journey. I've always been into IT. I think I got my first computer when I was nine or ten years old, and this is a long time ago, right? These things couldn't do very much back then, but even as a kid I thought, wow, this is the neatest thing, the coolest thing I've ever seen in my life. And so I've had this lifelong journey with it ever since.

Speaker 2:

I spent the first 20 or 25 years of my career as a software engineer. I wanted to build things; I wanted to take this technology and do some really interesting things with it. So I built a lot of software and led development and architecture teams on a lot of big projects, a lot of very cutting-edge stuff, and loved it.

Speaker 2:

And I got to a point, maybe in the middle of my career, where I decided that what really excites me about technology isn't so much the technology itself; it's what it can do. It's all about creating new things and thinking differently. So I made a bit of a pivot. I moved over more towards the business side, the people side of the equation, the human side of it. I've spent the last 15 years or so managing businesses and building teams of brilliant people to scale up that ability to build things. So it's been quite a journey, and I feel very fortunate that I've been able to do a lot of different things in my life.

Speaker 1:

It sounds truly amazing. You have had such an exciting journey. I've been in IT my whole life myself, and I love leadership, so it resonates a lot with me, and probably that's one of the reasons why we met in this big world through this podcast. I'm so looking forward to today's conversation. As a C-level leader, you have seen the evolution of technology firsthand. How do you view the accelerating gap between human capability and AI capability today?

Speaker 2:

Yeah, well, it's changing every day, isn't it? If you go back even a year and a half, 18 months, we had some pretty amazing AI capabilities, but they were still at that childlike level. They could do some things, and they could deal with problems and a level of ambiguity that traditional systems couldn't deal with. I have an eight-year-old son, and I call him my little AI experiment, because it's been fun.

Speaker 2:

Having been involved with AI for almost 30 years, it's really interesting to watch a little kid. I'd never had a child before, and I'd never spent much time around children since I was a child myself. Watching him grow up, watching him learn and evolve and grow intellectually, you see these massive parallels between the way children learn how to think and do things and the way we're teaching AI systems to do things today. It's been fascinating and enlightening, not just having the little kid running around, but seeing how his mind gets better and better, how it starts thinking about more and more complex things. Now he's moving into this era of being able to think abstractly, this abstract, symbolic reasoning. And I think AI is on a similar trajectory right now. Eighteen months ago we had large language models that could reason about as well as a six-year-old child, and now we have models that, in some respects, can reason as well as PhD students. That's happened in 18 months. So you say to yourself, wow, how far can this go? Will that trend continue, and will it continue to a point where we have artificial superintelligence? You hear a lot of people talk about this stuff. I view it maybe a little more realistically, partly because I at least have a pretty good idea of how these systems work, having been doing this for a long time. I think the gap between what humans can do and what AIs can do is shrinking very, very rapidly. There are things that, honestly, I didn't think we'd get to at this point; I would have thought they were another 10 or 15 or maybe 20 years off. So I think that gap is narrowing considerably.

Speaker 2:

We've got AIs now that can reason very logically; they can think through problems. One of the big things we've seen over the last six months is the introduction of chain-of-thought reasoning. Before chain-of-thought reasoning, we would ask large language models a question or give them a prompt, and they would just spit out whatever came to their heads, like talking to a stream-of-consciousness kind of person: whatever came to mind initially is what they would give you. Now we've taught them to actually think about what they're saying. It's like that old saying: think before you speak. Now they're actually reasoning through it, going back and asking themselves, how should I break down this problem? How should I think about it? It's added a tremendous amount of structure to what they can do, and that has been a huge revolution. The funny thing is we haven't really seen it yet in most businesses. This notion of chain-of-thought reasoning, these intelligent agents, has really only come to the forefront in the last three or four months, and it's only now arriving in the business world. So I think this gap is closing.
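To make the contrast Alex is drawing concrete, here is a minimal sketch of the two prompting styles, assuming a generic text-in/text-out model endpoint. The `ask_model` helper is a hypothetical stand-in for whatever LLM client you actually use, not a real library call.

```python
# A sketch of "think before you speak" prompting. ask_model() is a
# hypothetical placeholder, not a real API; wire it to your own client.

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call: send a prompt, return the model's text."""
    raise NotImplementedError("wire this up to your LLM client of choice")

question = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"

# Pre-chain-of-thought style: the model answers with whatever comes to
# mind first, like the stream-of-consciousness person described above.
direct_prompt = question

# Chain-of-thought style: ask the model to break the problem down and
# reason through the steps before committing to a final answer.
cot_prompt = (
    "Think step by step. Restate the problem, break it into sub-steps, "
    "solve each sub-step, and only then state the final answer.\n\n" + question
)

if __name__ == "__main__":
    print(ask_model(cot_prompt))  # raises until ask_model is wired up
```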

Speaker 2:

My personal belief is that we won't get to true human-level reasoning anytime soon, but we'll get closer and closer. I think there are still a lot of things that we can do that artificial intelligence can't, and that will be the case for quite some time. So I view the gap as definitely narrowing, but I'm not in the camp that thinks these things are going to take over the world next year.

Speaker 2:

And there are plenty of those people out there, plenty of people saying we're doomed. I don't really view it that way. They're not that smart, and I don't think they're going to get that smart anytime soon. But that doesn't make them any less useful. If all progress in AI stopped right now, we would spend the next 10 years trying to figure out how to use this technology effectively. So right now I don't view the limitation of AI as a capability limitation. The limitation is human imagination: how to use these things effectively, in a way that enhances what we do. We're still working out all the cool things you can do with this stuff, and I suspect that even if it stopped progressing today, we'd spend the next 10 years working that out. So it's an exciting time, for sure.

Speaker 1:

It is an exciting time. And as you mentioned, there is the AI doomsday side; many people are seeing it that way. What do you think about the singularity? Do you think it is possible?

Speaker 2:

Well, the singularity idea rests on this sort of extrapolation, and what you find out about every technology revolution is that the growth, linear or exponential, eventually peters out; it eventually plateaus. It has happened in every technology revolution we have ever seen in the history of humanity: you reach a certain point and then you start to level off. To give you an example: when I was three years old, and this was a long time ago, my parents could have taken me on a supersonic airplane that crossed the Atlantic Ocean in three and a half hours. They could have bought plane tickets, gotten on a plane in New York and flown to London in three and a half hours. And this was over 50 years ago. Today no one can buy a ticket like that, because there are no supersonic commercial aircraft on the market.

Speaker 2:

And if you go back and read the press from the time, everybody thought everyone was going to be flying around at thousands of miles an hour in a few years, and we'd be able to get from city to city in a matter of minutes or hours. That didn't happen at all; in fact, we went backwards. So when I look at this exponentially-growing-technology idea, I don't buy it, and the reason I don't buy it is that it has never actually panned out that way. Nothing in the history of humanity, in none of the technology revolutions we've ever had, has exponential growth kept going forever. So I don't buy the singularity thing. Is it possible? Maybe. I'm not smart enough to tell you. But what I am smart enough to tell you is that it has never happened before, and that makes the odds of it happening now probably far lower than some people think.

Speaker 1:

I agree; it sounds improbable, but it might still happen, and only time will show us. I think AI is something very different: it impacts us as humans at such an incredible pace, and we are changing as human beings while the world around us is changing very fast. So it will be interesting to see what happens in just a few years, because when I ask my guests how they see the future in five to ten years, they avoid answering that question, because it's really difficult to extrapolate.

Speaker 2:

It's impossible. I mean, people sometimes say to me, well, here's my five-year plan for the business. I'm like, I don't care what your five-year plan for your business is; there's a 100% chance that plan is wrong, at least to some extent. It's just a matter of how wrong it is. Go back 100 years and say, here's my five-year plan for my business, and you probably could have counted on it being pretty accurate. But today? No one has any idea what's going to happen five years from now. I think that's a fool's errand at this point.

Speaker 1:

I totally agree. That's how it is. What indicators are you seeing that suggest we're either empowering humans or, conversely, making them obsolete in the process of AI integration?

Speaker 2:

Well, I think both are happening right now. I read an article pretty recently, or maybe it was a podcast, and somebody was saying: look, if you think about humanity, if you think about our species, we spent a couple of hundred thousand years differentiating ourselves mostly through our bodies. We were hunter-gatherers or farmers; we relied on our physical bodies to provide for us. Then, at some point in the 20th century, that balance shifted. We depended less on our bodies and our ability to do physical labor, and the emphasis got put on our minds, on our analytic reasoning capabilities, and we started to value those more and more. You can see it in the numbers: something like 60% of the people in the world were farmers 100 years ago, and only about 2% are now. The number of jobs in farming has gone way down, and yet farming was the most common profession in the world 100 or 200 years ago. Now, I don't know about you, Emi, but I've never been a farmer and I don't want to be a farmer. That's not my career path; that's hard work. I'm used to sitting in front of my computer here in my nice climate-controlled office, and I do not want to be out there trying to farm by hand.

Speaker 2:

So we saw that shift in the 20th century, from deriving most of our economic value as a species from our bodies to deriving economic value and progress from our minds, and in particular from our analytic reasoning capabilities: we could invent things and create things with our minds, and it was largely about using our brains to drive more economic value than we could drive with our bodies. What I think we're seeing right now is another shift, away from analytic capability being what drives progress. I think we're about to move into an era where the creative part of our mind is what drives progress. It's about the ideas now. It's not just about crunching the numbers and doing the analytics; that's all great, but the machines can do that now, or certainly will be able to in the next five years. So a lot of the skills that involve analytic reasoning, the ability to think through moderately complicated problems while dealing with some ambiguity, are not going to be the high value-add anymore. The high value-add is going to be creativity: ideas for new inventions, creating entirely new things and new ways of thinking. That's where I think the economic power is going to shift.

Speaker 2:

When that happens, you're definitely going to make some people obsolete. Or rather, I don't think it's so much people becoming obsolete; it's certain roles and certain tasks that we just no longer have to do, because the machines can do them better than we can. Nobody sits around with abacuses and slide rules anymore, because we invented calculators back in the 1950s. People said, wow, this is way better than using an abacus or a slide rule, and those inventions went away; you can only see them in museums now.

Speaker 2:

So I think it's less about whether computers are going to replace humans and more about the tasks we do and the roles we play. Some of those will definitely get replaced, and some of those jobs will get eliminated. But at the same time, that frees us up as humans to create new things, to have ideas, to think in very different ways. Overall, I think this could be very powerful for us as a species. But there are definitely some dangers to it; I'm not going to lie. I don't think we're going to end up in the Terminator situation, but nor do I think we're going to end up in a situation where people's lives aren't affected by this technology revolution. They almost certainly will be.

Speaker 2:

I think the good news about people is that we're very adaptable. Of all the species that we know of in this entire giant universe, we are far and away the most adaptable; there's not even a close second. And part of the reason we're so adaptable is that we can change the world around us. In most species, adaptability is a function of adapting to their environment and their circumstances. We can actually change the environment and the circumstances we live in, and I think that's the big difference between us and literally everything that has ever come before us. So I'm optimistic in that regard.

Speaker 1:

I love your positive approach, and I'm very curious: what are the dangers you are seeing, the worst ones?

Speaker 2:

Well, one of the dangers I'm seeing right now, and I think we talked about this last time, is that when disruptive technology hits the market, it usually isn't as good as what's out there; it's usually just a lot cheaper. If you think about humans, we can do almost anything. We have this vast repertoire of skills, we're very adaptable, and we can deal with entirely new circumstances very effectively. So I think what we're going to see is people and roles, particularly within businesses, governments and organizations, start to get replaced from the bottom up. What that means is you need fewer and fewer of the quote-unquote junior people, the people just starting their careers, because the machines will start to take over the tasks they can do. Take computer programming, for instance. The AIs can develop software pretty well; they can code pretty well. But they're still not software engineers, and they're still not software architects. They can't see the big picture. You can't just say "build me this app" and have it build you a giant ERP application; that's probably beyond their capabilities and will probably remain beyond them for a long, long time. That's a very difficult endeavor. But if I need a program that does a very definable thing, I don't write code anymore. I just have one of the large language models build me the code for it.
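As a concrete aside, the workflow Alex describes, specifying a small, well-defined program and letting a model draft it, might look roughly like the sketch below. It reuses the same hypothetical `ask_model` helper as the earlier example, and the spec text is purely illustrative; the point is that a human still reads and tests the result before adopting it.

```python
# A sketch of the "don't write it, ask for it" workflow, assuming a
# hypothetical ask_model(prompt) -> str helper as before.

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call: send a prompt, return the model's text."""
    raise NotImplementedError("wire this up to your LLM client of choice")

# An illustrative spec for a "very definable thing", not a prescribed format.
spec = """
Write a Python function slugify(title: str) -> str that lowercases the
title, replaces runs of non-alphanumeric characters with single hyphens,
and strips leading and trailing hyphens. Include a docstring and no I/O.
"""

def main() -> None:
    candidate_code = ask_model(spec)
    # The human stays in the loop: read the code and run tests before use.
    print(candidate_code)

if __name__ == "__main__":
    main()
```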

Speaker 2:

So the concern I have is that we have a lot of senior software engineers in the world who will retire and ultimately pass away. The danger is that we don't replenish those ranks; we just hope the AI gets better and better. We start to actually lose capabilities as a species, because the people who would have become the senior superstar software engineers are never even going to go into that profession now. They're going to do something else for a living.

Speaker 2:

So I think that's one of the big dangers I'm seeing right now: we're starting to hire fewer and fewer entry-level, junior people for these intellectual, knowledge-worker type tasks. If that continues and you fast-forward 30, 40, 50 years, you end up in a situation where no one knows how to do this anymore; the only ones that are really good at it are the AIs. So I think we have the potential to go backwards as a species if we don't replace those people, build those roles and skills from the ground up, and teach people how to do these things. It's just starting to happen, but if it continues this way, I could definitely see us in a position where we start to institutionally forget how to do things, and that's dangerous.

Speaker 1:

This is super valuable, and from this perspective I agree with you. We might create a gap which is going to impact humanity's development overall, just because it is very easy to believe that this knowledge won't be needed anymore and to choose something else. The more we think that way, the bigger the gap will be, and then it will be super difficult to get back to a situation where humans can compete or collaborate with AI solutions after being out of the market for many years.

Speaker 2:

And the thing is, it has happened before. Think about the space program: the US put a person on the moon before I was born. That blows me away when I think about it. Before I was born, we as a species figured out how to take someone, put them on the moon, and bring them back safely. And then at some point in the 1970s we stopped investing in the space program, and we forgot how to do that. As a species, we forgot how to put people on the moon.

Speaker 2:

You see that sort of thing happen, and it has happened throughout history; that's not the only example. You can go back thousands of years and find things that humanity forgot how to do. That's worrisome. What does keep me up at night a little is this: if we forget how to do these things and put all of our faith and capability in these machines, and something goes horribly wrong, we're left where we were hundreds of years ago, and we can actually regress as a civilization. It's not unprecedented; it has happened within the last few thousand years.

Speaker 1:

And another danger is our tendency to degrade, actually, as I see it. When you use those AI solutions for a few days, a few weeks, a few months, and then suddenly you are without internet or without your gadgets, it becomes much more difficult to get back into those thinking patterns, into everything that depends only on you as a human being, without being dependent on those digital solutions. It feels as if your brain is freezing, and you can't be the same powerful and impactful person you were while you had access to all those technologies. So that's another danger, because I see people sometimes acting as if it's enough to Google, or to use ChatGPT or any similar solution, and you don't actually need to know something on your own or be able to perform without those sources of information. But sometimes you really do need that knowledge independently.

Speaker 2:

Absolutely.

Speaker 1:

From your experience across major tech corporations, how are companies redefining the role of humans in future-forward AI strategies?

Speaker 2:

You know, I don't think there's a one-size-fits-all answer for that. It very much depends on the roles that people play, and those are definitely getting redefined. We're starting to see more and more organizations, and we're still in the very early days of this, looking at humans to fill the gap between what AI can do and what actually needs to be done. In most solutions you build with artificial intelligence, the AI can probably only effectively solve 80 to 90% of the problem. That last 10 or 20% is really, really hard, and we are really, really good at it as humans. So that's one way the role of people is being redefined: let the AI do 80 or 90% of the work, and the humans are there to do the hard part, that last 10 or 20%, where humans are still uniquely qualified. A second way is having people who can look at the results these things produce and make sure they actually make sense, almost like error checkers: is this thing spitting out nonsense? Is it hallucinating, or is this real? I think there will be more and more of that sort of human oversight, and I think that's a good thing.

Speaker 2:

I think we talked about this last time: I'm not a huge believer in fully autonomous systems driven by AI at this point. Maybe someday we'll get to where you can be truly 100% autonomous, but I don't think we're there yet for most situations. So that's another way. And then there's a third way I see it changing the role of people.

Speaker 2:

Think about farming again: we used to need 60% of the population to be farmers to grow food so we could all survive, and that has gone down to whatever it is today, about 2%, at least in the US, and in Europe probably only 2 or 3% of people are farmers. We got rid of most of the farmers, and that freed up our time to go off and do really interesting things. We could go put people on the moon, whereas before we didn't have the time to do that, nor the economic wherewithal. So again, I view it optimistically: if we can free people from the boring, mundane, routine, time-consuming tasks that don't add a lot of value, free them up to actually invent, to create, to have great ideas and go implement them, I think we'll all benefit from that. But it's definitely going to redefine a lot of roles as we go forward.
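A minimal sketch of the division of labor described a moment ago, letting the AI handle the routine 80 to 90% and routing the hard or doubtful remainder to a person, assuming a model that reports a confidence score. The `classify` helper, its stand-in logic, and the 0.9 threshold are illustrative assumptions, not a real API.

```python
# Human-in-the-loop routing: auto-handle high-confidence cases, escalate
# the rest. classify() and the threshold are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0 .. 1.0, as reported by some model

def classify(ticket: str) -> Prediction:
    """Hypothetical model call: returns a label and a confidence score."""
    # Stand-in logic so the sketch runs; a real system would call a model.
    return Prediction(label="billing", confidence=0.62 if "?" in ticket else 0.97)

def handle(ticket: str) -> str:
    pred = classify(ticket)
    if pred.confidence >= 0.9:
        return f"auto-routed to {pred.label}"   # the routine 80-90%
    return "escalated to a human reviewer"      # the hard last 10-20%

if __name__ == "__main__":
    print(handle("Please cancel my subscription."))
    print(handle("Why was I charged twice, and is this even my account?"))
```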

Speaker 1:

I think it sounds like we need to redefine the very definition of the human being, because it's going to transform so drastically. Today I was listening to one very interesting interview, and the slightly scary idea there was that we humans today are like a beta version of what we are supposed to become, and hence soon outdated. And I thought, gosh, if we are the beta version, it feels really creepy in a way.

Speaker 2:

Yeah, what does that mean? That is a little bit scary, isn't it?

Speaker 1:

Exactly. It is even very scary, I would say. Alex, do you believe there is a risk that, without reskilling or reinventing ourselves, humans may fall behind like the Neanderthals did? What are your thoughts on the potential outcomes of this AI-versus-humans expansion?

Speaker 2:

Oh yeah, there's certainly that risk. I can't rule it out; I can't say that we will not go the way of the Neanderthals. You've probably heard this old story, but there's this thing called the Fermi paradox, and it's pretty simple: we live in a universe that should have a lot of intelligent life forms in it, and yet we haven't heard from a single one of them. If there's life out there in the universe, why haven't we heard from anyone? Why can't we find it? One of the hypotheses is that eventually every species goes extinct for some reason, and we could be looking at that reason: maybe every intelligent civilization eventually develops AI, and the AI just puts everybody out of business, so they never evolve as a species after that.

Speaker 2:

Now, I'm not that pessimistic; I don't actually believe that's going to happen. But it's one extreme you see out there: we could go the way of the Neanderthals. I do think we're definitely going to have to reskill. If we don't reskill, reinvent ourselves and adapt, it's almost guaranteed we'll go in that direction. The other piece of this is how far artificial intelligence will go, and whether we can actually make systems that are cost-effective and can outperform humans. In some tasks and some roles today, the answer is a definitive yes; we already have machines that can do tasks that previously only humans could do, and do them far better than we can. The question is how generalizable that is, and I don't think we know at this point. But I don't think this is going to continue forever. I don't think we're going to see the rapid pace of change we're seeing today still happening 20 years from now.

Speaker 2:

Could it happen? Maybe. And I think one of the big dangers with AI, and a lot of people have pointed this out, is that the minute these AIs get smart enough to make themselves better, you have the potential for a runaway freight train, a runaway expansion in their capabilities. When they can make themselves smarter, they get smarter, which means they can make themselves even smarter, and that just keeps going. At least in theory, it happens very quickly; it could happen in a matter of months. We could go from AIs that aren't quite as smart as people to AIs that are way smarter than humans. So there certainly is that danger. I don't think we're anywhere close to it right now, but it's certainly something that should probably keep everybody up at night at least a little bit.

Speaker 1:

Totally. And actually I believe that we are not the only civilization in the universe; there are many, we are coexisting, and we're actually in closer touch than some believe. It is a very exciting topic, and probably a topic for one of my future episodes. Science hasn't discovered enough yet, but there are other sources of information as well, so it is very interesting to learn more about the universe and about ourselves as a species in it. And it ties closely into this conversation about AI capabilities and how artificial intelligence can enhance us as human beings or push us to the brink of something completely opposite. That outcome depends a lot on leaders, their choices, and the way they run their businesses. So how can leaders future-proof their teams, not just with technical skills, but with mindset, adaptability, and deeper human intelligence and connection? How can they make sure they are moving technologies and opportunities in the right direction, the human-friendly direction?

Speaker 2:

Yeah, that's a great question. My first thought is that I don't think you're going to future-proof your teams with technology skills; I don't think focusing on learned skills is the way to future-proof your team. One of the ways to future-proof yourself and your teams is to focus on the people who are creative, the people who have ideas. And what's really funny to me is that if you've looked for a job any time in the last hundred years and you read a job description, it's a list of skills and experiences: I want this much experience with this skill, this much experience with that skill. I think that's a very 20th-century way of looking at things today.

Speaker 2:

What we don't tend to value, at least on paper, are people who can come at the world from a very different perspective, from different angles, and be extremely adaptable. You will never find those things in a job description. I've never seen a job description that says "10 years of thinking differently" or "five years of outside-the-box thinking". Nobody puts that in job descriptions; very rarely will you see those things. But I think those are actually far and away the most valuable traits we'll want to hire for as we go forward.

Speaker 2:

Sharp minds, yes, that's important, but also people who can attack problems from a different angle, who can think differently, think outside of the box, have ideas and creative thoughts, and then take those and make them a reality. That's what we need to be hiring for. Those are the people who are going to succeed and prosper in the future, as opposed to people who have 30 years of SAP experience in the ERP market. That's gone, man.

Speaker 2:

Those skills are going to slowly erode and become irrelevant as machines get better and better at them. Where machines struggle today, and will continue to struggle for the foreseeable future, is having ideas, being creative, attacking problems in a unique way. That's what still makes us uniquely human, and that's the best way to future-proof your teams and your company: start hiring those people. Make that a criterion, or a set of criteria, for the people you hire, and be explicit about it. Don't just say it's nice to have someone with good ideas; put it in the job description. Make it a priority, and then I think you can future-proof your company and your teams as well.

Speaker 1:

And still, even today, this is brilliant advice and I very much support this idea. But in practice I still see leaders who avoid having free-thinking spirits around them, those who can truly think outside the box and push the mindset in a direction where they can navigate in a brand-new, much more powerful way, one that corresponds to the requirements of the world today. So it feels as if it's not such a straightforward process, even if it is absolutely required for the survival of any business, and for our own survival as human beings. But it is not as easy as it sounds.

Speaker 2:

No, it's not. And if you go to big companies, and I've worked for a lot of them, you rattled off some of the companies I've worked for, some of the biggest technology companies in the world, you would think they would value that stuff.

Speaker 2:

You would think those are the kinds of people they would be dying to hire, and I promise you, it is not so. In fact, those people, when they do get hired, are often labeled as the misfits and the troublemakers, the people who just aren't going to follow along with the process and say, okay, yep, got it, I'll do this and this and this. And the funny part is that that's how you make progress. Making progress, by definition, means challenging the status quo, challenging the way the world is today, so we can create a better future going forward. So people who not only can do that but are willing to do it and have the mindset for it will be the most valuable people in the world 20 years from now.

Speaker 1:

Totally. And it requires not only learning the new mindset and getting new skills, but unlearning, a lot of unlearning. So it's really interesting, and we're living in exciting times. We are the blessed generation who got access to all those new technologies, and for me it's like a candy store: all the opportunities, the unknown, the space where you can unfold in a completely different way as a human being. It is very powerful, but it requires a lot.

Speaker 1:

Indeed, indeed it does. So what are the critical human-centric capabilities and values we must protect, elevate or reawaken to avoid becoming irrelevant in an AI-dominated world?

Speaker 2:

Well, yeah, I think we just hit on a couple of those. Creativity, for one. I joke with people, and it's kind of a joke but kind of true, that there's a future in which there are only going to be two kinds of people in this world: laborers and people with ideas. So this notion of ideas, creativity, adaptability, et cetera, those are characteristics we need to value more. And again, sometimes those people get written off as troublemakers. It has happened to me in my life; I can't tell you the number of times I got written off as a troublemaker or a misfit for not following the rules, not following the process, not following the standard operating procedure. To this very day I still get yelled at sometimes by my boss: why didn't you do it this way? And I'm like, well, because I didn't think it was a good way of doing it. But I think we need that. We need those people, and we need those skills.

Speaker 2:

We need to have more of a focus, as we go forward, on this notion of teaching creativity. If you go into most school systems, and this is probably acutely true in the US, I think you all in Sweden do a much better job of this than we do, because we do a terrible job of it: we've beaten the creativity out of our children. We do it institutionally in the United States, and in a lot of countries; China, by the way, is just as guilty as the United States. You go to a lot of countries and it's all about conformity. It's all about learning this curriculum, learning exactly these facts and figures that we want you to learn, with the notion that you're going to become a productive human being. Well, guess what: those productive human beings they're trying to create with these rigid curricula and all those facts and figures to memorize, those people aren't going to have jobs. The machines can do that stuff. So what we need to be encouraging in education is, first, how to think about problems creatively.

Speaker 2:

Creative problem-solving is probably the single most important skill humans will need in the future. How do I not just solve a problem, but solve it creatively? How do I attack it in a different way? How do I come up with novel ways of thinking? The AIs are terrible at that stuff. You give them a problem they've never seen, nor anything like it, and they just fall apart; they have no idea what to do, because it's not in their training. So that ability to attack novel problems creatively is something we need to be fostering, and I think the way you foster it is not just to teach people science and math and whatever; you teach them art, you teach them music, you teach them to have that creative mindset, and then you teach them to apply that creative mindset to business problems and the other problems the world has.

Speaker 2:

I think that has to be a shift in the way we educate children as we go forward, or we're going to end up with a bunch of children who come out of school as basically little worker robots. And we've already got the worker robots; that's what we call the AIs. So I think we have to radically rethink education, all the way from the very first years through to university level, if we're going to adapt to the future. But that takes time. It takes time for a child to grow up, from their earliest educational experiences all the way through university; that's 20 years. So it takes time to build that species-level capability, and we're way behind the curve right now. I think it's a critical priority for every country and every school system to be thinking this way and teaching in a very, very different way than we have in the past.

Speaker 1:

This is a very serious problem, and I'm happy that you mentioned it, because I totally agree with you. I've had other interviews recently, more than one, discussing the importance of the STEM approach and all the questions connected to the future of education and knowledge and how it's going to develop. But I also feel that we need a broader knowledge base for children. You mentioned that Europe is perhaps a little stronger from the perspective of creativity or freedom in education, but it is difficult to weigh the good against the bad, because they have to go hand in hand. That creative approach means learning subjects that don't belong to certain programs; technical programs, for example, don't include, and aren't meant to include, anything that enables creativity and a broader approach. In my own past, fortunately, I was pushed to learn, even at university level, subjects that I don't think many IT engineers ever really needed in their lives.

Speaker 2:

Oh yeah, I did the same thing. I learned a bunch of stuff and thought, well, this is cool and all, but when am I ever going to use it? It seemed crazy to be spending my time on it. But I think you're making a really good point.

Speaker 2:

For the longest time, at least in the US, certainly the last 10 or 15 years, we've really emphasized the STEM curriculum: science, technology, engineering and math. I don't think there's anything wrong with that, but what we've done is de-emphasize the creative elements. We've taught people how to be good engineers and good scientists, and again, there's nothing wrong with that; you still need those skills, and they're still valuable. That's how I was educated. But the humanities, being educated in music and art and history, ironically enough, after 15 or 20 years of saying no, no, we've got to teach the STEM curriculum, that part we left behind, or at least pieces of it, the creative element, is now coming full circle. I think it's becoming more and more relevant and important as we go forward.

Speaker 1:

Totally, absolutely; I couldn't agree more. However, the question is: how can we, as leaders, help the educational system and humans overall to re-evaluate the importance of those missing subjects and pieces, to put them on top in addition, not instead? Because usually history goes one of two ways: you have to squeeze one thing out in order to push in something else. In this case we really need more balance, more harmony, and a deeper understanding and broader knowledge of how we can match those artificial intelligence capabilities and co-create and collaborate together. AI has such a wide, huge knowledge base, while we humans tend to be very narrow in our specialties, so it is about time to get more generalists who can understand and operate on a broader scale, able to connect the dots that usually sit outside the visibility area for most of us.

Speaker 2:

Yeah, and think about the way humans evolved; this goes back hundreds of thousands of years. There were the Neanderthals, for instance; we talked about that earlier. The Neanderthals were bigger than we were, they were stronger, they were faster, and their brains were every bit as big as ours. They had the same sort of cognitive horsepower we have. But an evolutionary leap happened at some point, where one of the hominid species developed the ability to dream, to envision a world that did not yet exist. That's actually what has made us so successful: our ability to dream, to create in our heads a world that does not yet exist. It's one of the only things that makes us uniquely human, and that cognitive leap is why we're here and the Neanderthals aren't. They were bigger, stronger, faster; they had every advantage. But we had the advantage that we could dream up things they couldn't, so we dreamed up inventions and all kinds of things that gave us a competitive advantage.

Speaker 2:

So this ability to dream is something we need to be fostering. And to answer your question about how to help improve the educational system: hire the dreamers. Schools will produce what people are hiring for; they tend not to produce people who aren't useful, even though the educational system in the US does that quite often. We produce educational results that aren't aligned with the market, and I think that will keep being true until the incentives change.

Speaker 2:

But that's one way we can do it: reward it. Go out and look for those people, look for the dreamers. That's one of the reasons that at Netsurit our motto is Supporting the Dreams of the Doers, because it's all about those dreams and the people who can go out and make them a reality. Those are the people we want to support and the people we want to hire, not just the doers; we want the dreamers as well. And I think you hit the nail on the head: you need that balance between people who can dream up new things and people who can actually go make them a reality, and having both in one person is ideal. So I think we have to reward that economically, and really almost force our school systems to think that way, encourage them to think that way, and produce people with that level of education as we go forward.

Speaker 1:

One hundred percent, and I hope that it will happen more and more. It would make me so happy to see it really happening around us. What role do ethics, emotional intelligence, and individual and collective purpose play in shaping a future where humans and AI thrive together?

Speaker 2:

That's a big one. Let me start with the end of that, around purpose and meaning in people's lives, because I think that one is really at the heart of the matter. Here's what I've observed about humans, including myself: we're built to solve problems. We do not do well in environments where we have no problems whatsoever, and I think that's a big challenge. If you take away all the problems from someone, it turns into a really difficult situation: they don't know what to do with themselves. And what they do, Emi, is create their own problems, because we have this deep-seated need as humans to solve problems. So one of my fears is that if we take all the problems away, do we still have meaning in life? Do we have purpose at that point, or have we lost that meaning and purpose? So one very important aspect of all this is making sure people are really thinking about what their purpose is, and that we keep giving people purpose. That scares me a little bit. In terms of the ethics piece, again, I think that's one of those skill sets that's going to become more and more relevant as we go forward.

Speaker 2:

When you think about the ethical implications of AI: AI only has the ethics we've taught it, and I'm not sure any of these systems have really internalized what ethics are. It's just more stuff to them. I don't think they have the notion that some things are just intrinsically more important than others. So that worries me a little: we're dealing with machines that don't have a moral sense. They appear to think like we do, but the reality is they don't; they think in a way that is very alien to the way we think.

Speaker 2:

So part of our role as humans is to make sure we have the ethical capacity to continue to manage these things and use them in a way that benefits all of us, as opposed to just benefiting a very small number of people in the world. Those are my thoughts on that. And I think emotional intelligence is going to be more and more valuable. The types of roles that will thrive in the future are not only the creative and idea-oriented roles, but the people-centric roles: the salespeople of the world, the counselors of the world, the things you'll probably never want a machine to do, because you have to create that human connection with people. I think those jobs are going to be safe, and those people will prosper significantly.

Speaker 2:

Those are some of the things I think we need to elevate in this AI-centric world we're about to move into.

Speaker 1:

I absolutely love your approach and your way of thinking around this; it contains so many golden nuggets. Everything you just mentioned is so critical and so important, something to keep in mind while we move forward and develop our future together. Alex, I absolutely love this conversation; this is one of my favorite episodes for sure, with its deep visionary and philosophical angle. But unfortunately it's time for our last question. What helpful advice would you give to forward-thinking leaders who want to ensure that humanity evolves towards a brighter future, with humans front and center in the age of AI?

Speaker 2:

Yeah, well, I think the thing to bear in mind is that we as humans need to focus on what we're good at, on the things that make us unique. You can't beat these machines; you can't beat the AIs. They're just going to get smarter and smarter, and they're going to do more and more things that only humans could do a few years ago. I do believe that. I don't think it will be exponential growth that just keeps going forever until we have superintelligence; I don't buy that at this point, or if it does happen, I think it will happen very far in the future. But the reality is that right now we need to think in a very different way about what we're good at, about the things that we, and only we as humans, are uniquely qualified to do, where we have capabilities that are very hard to replicate with machines or artificial intelligence. So that's one piece: focus on what makes us unique, on the human characteristics that will be very difficult to duplicate mechanistically with AI and machine learning and other technologies. The second piece of advice is that we have to be careful with this stuff. We're charging into the future at full speed; there are no roadblocks other than the fact that we can't buy enough AI chips to do this faster and faster. That's literally the only limiting factor right now. So we just have to be careful with it.

Speaker 2:

We need some checkpoints along the way, where we can say: if we hit this checkpoint, we need to go do these other things, we seriously need to think about this, because it could evolve into a bigger problem. I don't think we have those circuit breakers that would force us, as humanity, to re-evaluate our position. I'm all for progress; I'm a full-speed-ahead kind of person, and I love this. But I do think we need some checkpoints, at least, so that if we get to a certain point, we all take a step back and say, okay, what does this mean, and at least think about it before we continue. So we'll see what happens. I don't have a lot of faith that that will happen, but it's what we probably should do as humanity.

Speaker 1:

Thank you, Alex. I so appreciate you and today's conversation. It's been such a pleasure having you here and discussing all these important, critical questions and the way forward for us as leaders and for humanity overall. Thank you for being here.

Speaker 2:

Thanks for having me. It's been a pleasure being here.

Speaker 1:

Thank you for joining us on Digital Transformation and AI for Humans. I am Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset and leading with empathy and insight. Subscribe and stay tuned for more episodes, where we uncover the latest trends in digital business and explore the human side of technology and leadership. Until next time, keep nurturing your mind, fostering your connections and leading with heart.
