Digital Transformation & AI for Humans

S1:Ep70 “AI Won’t Take Your Job”: What They are Not Telling You & Why It’s Urgent to Wake Up Now

Stephen Klein Season 1 Episode 70

My fantastic guest today, Stephen Klein from San Jose, at the heart of Silicon Valley, is here to tackle the infamous million-dollar question: Will AI take your job? In just a few moments, we’ll uncover what they're not telling you – and why it's urgent to wake up now.

Stephen is the Founder & CEO of Curiouser.AI, a University of California, Berkeley Instructor, Harvard MBA, LinkedIn Top 1% Voice in AI and an Advisor on the Hubble Platform.

I’m honored to have Stephen as a part of the Executive Group of the AI Game Changers Club — an elite tribe of visionary leaders redefining the rules and shaping the future of human–AI synergy.

 Topics Covered

  • The real story behind “AI won’t take your job…”
  • Fear, automation, and ethical red flags
  • Power shifts in the AI workplace
  • Invisible human labor and decision-making
  • Upskilling vs. systemic control
  • How to stay relevant and lead with integrity
  • One thing we all must unlearn in the AI era

Connect with Stephen on LinkedIn: https://www.linkedin.com/in/stephenbklein/

Learn more about Curiouser.AI: https://www.linkedin.com/company/curiouser-ai/

Support the show


About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories across corporations and individuals, from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.

AI GAME CHANGERS CLUB: http://aigamechangers.io/

📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation 🧭 The Definitive Guide for Leaders & Business Owners to Adapt & Thrive in the Age of AI & Digital Transformation: https://www.amazon.com/dp/B0DNBJ92RP

📆 Book a free Strategy Call with Emi

🔗 Connect with Emi on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders

Speaker 1:

Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we delve into how technology intersects with leadership, innovation and, most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch: nurturing a winning mindset, fostering emotional intelligence and building resilient teams. My fantastic guest today, Stephen Klein, from San Jose, at the heart of Silicon Valley, is here to tackle the infamous million-dollar question: will AI take your job? In just a few moments, we'll uncover what they are not telling you and why it is urgent to wake up now. Stephen is the founder and CEO of Curiouser.AI, a University of California, Berkeley instructor, Harvard MBA, LinkedIn Top 1% Voice in AI and advisor on the Hubble platform. I'm honored to have Stephen as a part of the executive group of the AI Game Changers Club, an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy. Welcome, Stephen, it's a great pleasure to have you here in this studio.

Speaker 2:

Thank you so much, Emi. It's really good to be here. I'm grateful.

Speaker 1:

Let's start the conversation and transform not just our technologies but our ways of thinking and leading. If you are interested in connecting or collaborating, you can find more information in the description, and don't forget to subscribe for more powerful episodes. If you are a leader, business owner or investor ready to adapt, thrive and lead with clarity, purpose and wisdom in the era of AI, I'd love to invite you to learn more about AI Game Changers, a global elite hub for visionary trailblazers and change makers shaping the future. Stephen, I've been waiting for this conversation for such a long time, so happy to have you here, and I would love to hear more about you, about your journey, about your passion. Could you share some with us?

Speaker 2:

Of course, thank you. Yes, I started Curiouser AI, our generative AI company, about three years ago when I noticed something that I found disturbing, which was that the entire AI industry was creating a technology that required humanity to prompt it, that we were all being conditioned, without our even knowing it, to become prompt engineers, and that no one was prompting us. It was that simple for me. I realized that what we were doing was making a trade: what we were gaining was speed, power, convenience, and what we were giving up was our ability to think. We were trading away our ability to think critically and so forth, and I didn't think that was going to end up well. And so part of what we're doing at Curiouser is developing technologies and methodologies to basically confront that and turn it inside out, and that's sort of what I'm doing now. Prior to this, just by way of some background, from 2015 to 2021 I ran innovation and marketing for the largest law firm in the world, a firm called Dentons. There were 14,000 people in 85 countries, and it was an extraordinary experience, because part of what I was able to do in 2015 was really become a prominent, if you will, thinker and expert around AI ethics. It was something I was extremely passionate about, and I was able to travel the world and speak on the topic of AI ethics to lawyers, to clients and so forth. That's really where I started gaining, I think, some clarity into the technology's potential, but also the technology's downside and some of the issues that we really needed to be careful of, because it's a double-edged sword.

Speaker 2:

Prior to that, this was now 2008 to 2015, I founded another company, a company called Loyal3.

Speaker 2:

The three of us started it and took it to about 350 people. It was later sold to a large Wall Street bank, but the gist of this company was that we were democratizing the US stock market. We were democratizing the capital markets, and the way we did that is we worked with the Securities and Exchange Commission and we worked with Wall Street and we built a sophisticated tech platform, and we were the first ones ever to be able to distribute a company's IPO stock. So when a company was going public, we were the first ones able to offer the company's customers and ordinary Americans an opportunity to buy that company's stock at the same price and at the same time as Wall Street. Up until that moment in time, that just wasn't possible. And so I helped companies go public, companies like Virgin America, GoPro, Square, HubSpot, AMC Theatres, Dave & Buster's, which is a restaurant chain. So I have been in love with technology and marketing and strategy and the world and humanity for a long time, and I am doing what I love to do.

Speaker 1:

I so love your journey and your story. It is exciting that you've been in so many different companies, creating that change on such a high level, but at the same time, what you've arrived at is absolutely incredible, because this is not a typical corporate narrative. And yet it is exactly what matters today, and I'm so happy that we are coming to the most important question, which I've been waiting to discuss with you: what is the biggest lie being told right now about AI and jobs, and why is it so dangerous?

Speaker 2:

The biggest lie, the biggest lie I think the single biggest lie that's being told right now is that generative AI is replacing jobs and that generative AI is causing a bloodbath, which is what the industry likes to portray it as, because, according to the data, if you're clear-eyed about it and you put the FOMO away for a minute, you realize that that's actually not what's happening. What's happening is that we are right now in a down cycle in the economy. Every 10 or 12 years, like clockwork, the economy goes up, then it goes down. Tech usually leads the way, and we have been in a tightening, and that's what happens, and when economies and industries and businesses tighten their belt, they cut costs, and that means they lay people off, and so what we've been seeing are layoffs, but they're the same layoffs that were happening 10 or 12 years ago. They're the same layoffs that were happening 20 years ago, 30 years ago, 40 years ago. What's different now is that the companies have a new way to rationalize and justify those layoffs so that, rather than just saying I'm cutting costs, they can say we're brilliant strategically and because we're implementing generative AI, we're operating much more efficiently, so we don't need as many people.

Speaker 2:

That's the story they're telling. That's not what's happening. Why are they telling that story? Because the shareholders like to hear that story. Wall Street likes to hear that story. There was a study out of the University of Chicago that literally studied this, and they found that there were no net job losses in the United States of America because of generative AI. Zero. So that's the biggest lie. It's not happening. And yet it's driving this sort of psychotic panic, a mass anxiety attack that a series of influential people are capitalizing on and making a lot of money from. And that's the situation we're in right now, in my opinion.

Speaker 1:

That is very interesting because your perspective is coming from Silicon Valley and that's an innovative, very fast-paced area in the world, probably the most innovative one.

Speaker 1:

It is really interesting, and I have actually heard this explanation before, and I totally understand that this is how the market works; it's nothing new under the sun, it comes around from time to time and it was predictable. But at the same time, it's impossible not to see that AI is going to impact our jobs and reframe our participation. I also see cases where, besides those who are explaining layoffs with AI initiatives and adoption, what was taking 1 FTE before is taking 0.5 FTE today, and sometimes one person is using AI to do a job two or three people were doing before. So how do we explain that? A lot of people refer to the famous phrase "AI won't take your job, but someone using AI will", but I would like to dig a little deeper to understand how you see it, how it can potentially impact the situation in a longer perspective, and who actually benefits from keeping this narrative so vague, besides those players you already mentioned.

Speaker 2:

Yeah, first of all, I am absolutely convinced that this story has a happy ending. I'm an optimist, and I believe that generative AI, as it unfolds and evolves along with humanity, is going to have a significant and very positive impact on humanity. But I think we have some work to do and a bit of a journey to go on, because where we're at right now, the industry is marketing the vast majority of AI as an automation tool. Simply put, it's being sold into businesses as a means by which they can perform tasks more efficiently, optimize processes and therefore do things better than people without AI can do them, so they don't need as many people, so they can cut people, improve their margins, increase their profit, increase their share price, have a great meeting with Wall Street. And that's sort of what's happening right now.

Speaker 2:

The reality is that the vast majority of generative AI is not capable of replacing anybody. Okay, when I say replace somebody, I mean do something without somebody, in place of somebody. And the reason why it can't is that it is incredibly error-prone, not just errors, but hallucinations, and so most AI returns findings that are wrong 30 to 70% of the time. So a company can gain speed and quantity, but it's trading quality, and so there is no way right now AI can replace people. Now, if there's somebody that is using AI to do things faster, right, they are doing things faster, but I would argue the quality of what they're doing is inversely proportional to the speed that they're gaining. See, quality is hard, thinking is hard, excellence is hard. There are no shortcuts. Okay, you can give a monkey a typewriter and teach the monkey to type, and the monkey will be writing, I suppose, but it's not Shakespeare. And so this concept that you can give somebody a tool and somehow make them better at what they do is just false, because you have to have the craftsmanship and the skill to do what it takes. So let's say you're a terrible writer and you start using Claude or GPT or Copilot or any of the usual suspects. You're still going to be a terrible writer. You're just going to think you're a better writer than you are, but it's going to be easier and you're going to do a lot more of it. You're going to regress to the mean: your work is going to become average and you're just going to pollute the world with mediocrity. So the short version of the way it's being used right now is that it is error-prone, it doesn't make people better at what they do; it just takes whatever it is they're doing and makes more of it.

Speaker 2:

We used to have a saying: garbage in, garbage out. Well, I think now we've taken it to a new level: now it's garbage in and massive amounts of garbage out, and that's where we're at right now. Now, that same AI can be used, instead of looking at it as an automation tool, as what you could call an augmentation tool, so that the AI now isn't trying to speed you up. Emi, the AI is actually trying to slow you down. Crazy, right? But that version of AI is going to slow you down. It's going to challenge you. It's going to encourage you to think more deeply. It's going to encourage you to see things from various perspectives. It's going to help you identify first principles, new insights, and it's going to help you actually define your own limits, and it's going to make you better at what you do and smarter at what you do. But it's going to take work. So now the AI isn't making your life easier; it isn't a lazy way out. It's actually a tool that will help you become a better you, and that's where this goes. By going in that direction, not only are all of us basically fulfilling our potential, we can be exceeding our potential; we can elevate organizations, we can create more innovation, we can increase opportunity. So I believe that it will ultimately accelerate positive innovation and create a lot more jobs than we lose. That is inevitably where this goes, and that is where all of these highly disruptive technologies end up. But they don't look that way from the perspective of right now. When we're looking into the future, we see the downside. We can't see that invisible, magnificent upside, because it doesn't exist yet.

Speaker 2:

May I give you a couple of quick examples. So, for example, when the automobile was invented, people looked at it as if it was a land yacht. It was a boat. The paradigm was it was a boat, except it was a boat for land. It had a tiller. So nobody envisioned highways, infrastructure, the steel industry, the energy industry, the suburbs, all of which were manifested by this technology. There are so many things that will be created that we don't have visibility into right now. So we're just seeing the downside.

Speaker 2:

Another example: television. When television was created, for roughly 10 to 12 years nobody knew what to do with it, so inevitably it was just a camera on two people; basically, they were televising a radio show. Before it was invented, nobody ever saw the potential of television and what it could turn itself into. And so this is the most common story in disruptive technologies: we get scared, we exaggerate, we create fear because people don't like change, and a lot of people capitalize on that, and that becomes an industry, a fear industry, which is happening right now. But ultimately, where this goes is to a very positive place, I believe, and I think that we're going to see improvement in humanity, not the net loss most people would portray right now.

Speaker 1:

This is a very beautiful outcome you are seeing in front of you and I actually share it with you because I also believe that humanity is moving towards brighter times. But this period of transition, it is very shaky.

Speaker 2:

It is frightening, it's scary as hell, it's terrifying. Absolutely.

Speaker 1:

It is also because it is reflecting everything we refused to look into, and AI is just that mirror and multiplier which is giving it all back to us, and the pace of development is so much higher than we are used to. So of course it is terrifying, but at the same time, exactly as you mentioned, I believe that it is taking us to a better place. The only consideration I have, comparing this technology with all the other moments where we've had some kind of innovation that transformed our civilization, is that AI is actually impacting every single human being on such a deep level that we are changing our behavior. We are changing our ways of communicating, of thinking, of navigating and operating, and it requires a lot of conscious leadership to take us, humanity, through this pipeline where it's really tight and burning hot. Absolutely.

Speaker 2:

It's a ubiquitous technology expanding rapidly, but I will say, Emi, it's not as prolific as it seems. I mean, maybe 15% to 20% of humanity are using AI, maybe less. So there's a lot of drama around it, which is basically a narrative that is funded by the industry with $100 billion. It will get there; it's not there now, I would say. I'd say it's exaggerated, quite frankly. And also, if you look at, say, the Fortune 500, the largest companies in the world, what you find is very, very little actual implementation of AI. The average company, if you want to know the truth, has hired a consulting firm and paid it somewhere between $5 million and $15 million. The consulting firm has come into that company, presented on average 70 slides, and set up a pilot, usually GPT or Microsoft Copilot, one pilot that fails 70 to 80% of the time. And then that CEO gets an AI-first press release so that they can posture to the world and to their shareholders and to Wall Street that they're AI first. That's state of the art right now. Now, that's not where it's going to go, but that's actually what is happening. 70 to 80% of these corporate enterprise implementations have failed, and the reason they have failed isn't necessarily because of the technology, although the technology is quite faulty and has a ways to go. It's because it's about people; it's about organizations. So if you delegate an AI strategy to, say, IT, they are going to look at it as an IT project: they're going to set up a technology somewhere in the business and test it, and they're not going to include other people in that process. They're not going to bring in HR, they're not going to bring in marketing, they're not going to bring in finance. And so the project will get sabotaged over time because of cross-functional infighting, which is inevitable within a company. You know, the legal and compliance people aren't going to like it.

Speaker 2:

It has to start with leadership. It has to start with the CEO. It has to start with vision and first principles, asking the right questions. What makes this company special? Why do we do what we do? What value do we bring to the world? How do we differentiate ourselves from the other people that are doing what we're doing? Those are the questions that need to be answered.

Speaker 2:

And then the organization needs to be organized around that CEO, and that vision needs to be internalized and believed and supported. Then that vision needs to be taken out through thought leadership to the entire ecosystem: the workforce, the customers, the shareholders. And when that is all unified and that company is cohesive, then, and only then, can you integrate AI as a platform. That's not just generative AI; it also includes machine learning. Right now, generative AI has sucked all the oxygen out of the room, but there are other forms of AI. And so now you've got this platform with this cohesive, unified organization, and that platform incorporates the different functions. It incorporates HR and legal and compliance and marketing and finance and IT.

Speaker 2:

And that's when the magic happens. And it's open source and it's closed source. It's agnostic, multi-model. It isn't just OpenAI or Microsoft or Anthropic. It can combine and blend the various models, which will then mitigate the error rates and mitigate the hallucinations. And that's when this gets beautiful, because now you're elevating your workforce. You're not terrifying them; you're respecting them. You're not humiliating them. And that's where this goes. But it requires leadership. Generative AI is a leadership challenge, not a technology challenge. It's that simple.

Speaker 1:

This is so spot on. I couldn't agree more. And actually, the numbers you just mentioned, the 70% to 80% failure rate, remind me of exactly the same numbers when we're talking about digital transformation: same companies, same people, slightly different technologies. But it is about leadership, it is about culture, it is about so many aspects which remain outside of this equation, yet they are the answer to so many questions and might help us raise the success rate. But fear and scarcity are all around; fear is running the show in many industries. So, Stephen, what is the core fear you are seeing in leaders and teams, and how do we turn that fear into fuel for conscious, strategic adaptation and leadership?

Speaker 2:

I think to really understand that at a deep level, you have to look at history, and you have to understand the history of business and the history of economics, because this isn't a one-time situation. This is a movie that started a long time ago. And so there are really two primary forces driving the generative AI industry today. The first is extraordinary popular delusions and the madness of crowds. We are in a mania. We are now stampeding as a human race, just like an animal stampedes, and we're running. We're like lemmings running off the cliff, because there has been so much fear, driven and funded by the industry. So we're in a bubble. I believe we're in a bubble, which is not a bad thing; it is what it is.

Speaker 2:

The first bubble was recorded in the 1600s: Dutch tulips. Literally, Dutch tulips became an industry where people would sell their houses to buy a tulip, and it became this fanatical sort of crazy thing. More recently, we had the dot-com bubble, not identical to what's going on in generative AI, but similar, and there was a market crash in '99 and 2000. Then in 2007 and 2008 we had a real estate bubble. We're in a bubble right now. So that's one significant reality that people don't realize. This is what it's like to be inside a bubble, and it's terrifying. The second thing people don't realize is that there's a lot of really brilliant marketing going on, and there's a playbook that was invented in the 1950s by the sugar industry, because sugar was a billion-dollar industry and science began to realize it wasn't good for you, and that was a problem for the sugar industry. That was a threat. So what they did, brilliantly, is figured out that they could hire really credible people, co-opt institutions, fund research, and then publicize that research as a defense of their conventional narrative. So, essentially: prove sugar is not bad for you, sugar is actually good for you. And the medical profession would support it and the universities would support that narrative, because people were getting paid a lot of money. So research that was positioned as credible was actually a sales funnel.

Speaker 2:

Then came the nineties, and tobacco did the same thing, because all of a sudden it became known that cigarettes were killing people. Now, that was not something the tobacco industry was thrilled about. So what were they going to do? Well, they were going to develop their own data to show that that's not true: tobacco doesn't hurt anybody; in fact, doctors smoke; smoking is sexy. And so they bought the medical profession with their billions of dollars, they bought institutions, and they published industry data that was then publicized and spread, that basically fought the truth. And generative AI is doing a very similar thing right now. Probably 70 to 80% of the data that we see these days about improved productivity and so forth is actually bought and paid for by the industry, and they don't disclose that.

Speaker 2:

So if you really want to be discerning and you want clarity and you want to be a critical thinker, you've got to look at that data and ask yourself: who funded that project? Was there an agenda? Is it valid? Is there confirmation bias? Is it peer-reviewed? And when you start really looking at that, and it's all out there if you know where to look, you're going to see that it's not. And so all of these billions of dollars are going into funding this research that is scaring us to death. Why? Because then the businesses feel this enormous pressure that they're being left behind. The consulting industry is really the only industry making money right now: the large management consulting firms, the influencers on LinkedIn. Consulting, from top to bottom, is capitalizing on this industry, but businesses aren't seeing any ROI and humanity is getting scared to death. And one more point: the privately held generative AI companies. There's a group of privately held ones and there's a group of public ones. So take Microsoft, Google, Salesforce, put those over here, and now let's look at OpenAI, Anthropic, Mistral, Perplexity. What you'll see is that they are exquisite financial engineering schemes. Exquisite. Brilliant, in that the products they're pushing are generating enough fear and excitement to justify future expectations and value of those companies, and that attracts capital on the investment side. As long as they can keep scaring people and flooding the zone with new products and increasing their valuation, they're going to attract more capital, because nobody wants to miss out on a good bubble.

Speaker 2:

It's a Ponzi scheme, but it's legal. And so that's also happening. The investors are making a lot of money, because if you get in early enough and then more capital comes in, you can monetize your investment on the secondary market and you can start taking money out. Money comes in, money goes out. Money comes in, money goes out. And that's what's happening with, say, OpenAI, Anthropic, all of these companies, because these companies are losing billions of dollars. None of these companies make any money. In fact, OpenAI loses money on every product it sells. Its unit economics are upside down. There are no economies of scale. There are no economies of scope.

Speaker 2:

Is the technology amazing? Yeah. Does it have potential? Yes. Is it a business? No, it's not a business. So those are the forces driving the fear. It's the fact that it's a bubble, it's the fact that there's a lot of marketing brilliance pushing a narrative that looks real but is not, and it's a group of people that are making a tremendous amount of money basically pushing this current industry. That's where we're at right now. That's the reality of it, and that story doesn't get told an awful lot, even though it's out there. You don't need to be a genius to see it; you just need to know where to look. It's hiding in plain sight. We've just all been conditioned at this point, and we're frightened because we've been manipulated emotionally, and that's what's going on right now. So it's psychology and it's people that are really the issue. The technology is fine. The technology is not the problem.

Speaker 1:

As usual, exactly. The technology is beautiful.

Speaker 2:

I love GPT. I love the technology. It's not GPT's fault, it's the way it's being used and manipulated to generate capital, primarily on the investment side. Right now, that's what's going on.

Speaker 1:

This is so deep, and I think this is an eye-opener for many listeners and viewers today, because this is a different perspective. And actually, just the fact that when you try to look into the sources of that funding and ask where the sponsorship or funding came from, they don't like that.

Speaker 2:

They won't, they don't like that question.

Speaker 1:

They hate it, I guess, because they become aggressive, they become protective, they just hide the information as much as they can. Right?

Speaker 2:

Yeah, I mean, I'm an insider-outsider. Right, I'm not an outsider, I'm not an activist; I'm a capitalist. I love capitalism. I went to the Pentagon of capitalism, the Harvard Business School. I am a capitalist through and through. I think making money is a beautiful thing. I just don't like the way some people do it. It's not illegal, but I feel it's wrong, because it's hurting humanity and it's hurting future generations.

Speaker 2:

But to your point, there have been massive studies done recently with brand-name Ivy League schools and leading experts that tested prominent organizations and found that GPT improves productivity 200, 300%. Incredible studies. They sweep the internet; every influencer's pushing the data. But when you ask those people who funded that study, they do not want to tell you, and in fact they'll get angry, and in fact they'll accuse you of demonizing them. They'll try to discredit you. But they won't answer the simple question, which is not even an accusation; it's just a simple question: who paid for this study? It's not a hard question. Now, you can either answer that or not answer that. If you don't answer that question, it doesn't necessarily mean you're biased, but it's suspect, because you're not disclosing something that you should be disclosing, and that suggests an agenda. And so, yeah, I've been blocked, I've been demonized, literally just for asking that simple question. It's fascinating to me.

Speaker 2:

So you don't know who funds the studies, but you take a look and you say: there's only one technology that was tested, there was only one AI that was tested. So they were involved, you know, and they've got $40 billion to spend. I wonder if they had anything to do with it. And then they can go out and hire influencers, which are external public relations people, and they don't necessarily disclose that that's who they are and what they're doing. But they're at all of the conferences; they're all over LinkedIn telling you: if you're not doing this yet, have you seen the latest agentic technology? Because your competitors are using it. Are you? And they just generate all this stuff that freaks everybody out. So then you pay a lot of money to go to these workshops, and companies pay the major consulting firms. That's what's going on right now.

Speaker 2:

The actual, genuine research, the objective research, is out there as well. You can find it, peer-reviewed. It's just harder to find because there's not as much money behind it. But when you do, you'll see a very, very different set of data. It's literally night and day. It's the opposite: none of that is actually happening. It's an alternative universe.

Speaker 1:

Exactly, and not many actually realize it or know about it, and even fewer are sharing those experiences and thoughts and talking about it, so I'm happy we're highlighting this trend in today's conversation. Speaking about automation and agentic solutions: you've witnessed automation unfold at scale, so I wonder if you can share a real case from your experience, or maybe your network's, where automation impacted jobs in a way that was either deeply unexpected or ethically questionable.

Speaker 2:

I don't think any of that has really happened yet, quite frankly. The industry has gotten ahead of itself and is pushing out simulations, but the reality is that I don't actually believe that is happening at scale anywhere. I really don't. That's my answer. I think there's a lot of what I call hype as a service: HaaS, not SaaS. It's hype as a service right now. That doesn't mean it isn't going to be here, because it will be, and it will have both negative and positive impact. But I actually don't believe it's here yet.

Speaker 2:

I think that right now we're right at the cutting edge, when innovative companies are starting to figure this out. Most of the enterprises out there have implemented AI and have stumbled, and now they're at the what's-next phase: what do we do next? And that's where things are going to get beautiful, that's where things are going to get interesting, and that's when AI will be implemented and integrated into organizations in a holistic way. Instead of "AI first," which I think is the stupidest slogan ever created by an industry. AI first, people last? That doesn't make any sense. No: people first, organizations first, AI as the supporting platform and mechanism. So I am, again, extraordinarily optimistic about this technology, and there are companies doing very different and positive things, including my own, and I'm not here to promote my company, but there are companies out there building AI platforms that will benefit humanity and will create jobs.

Speaker 1:

Exciting times ahead. I think that breaking point is actually closer than we can imagine. We just have to hold out a little bit longer and persevere, and then we are going to see all this unfold into a completely different picture, one that is much more positive and human-centric. At least we both are moving the narrative and the strategies in that direction, and I see other people around the world doing it as well. But the more of us are working on creating the change on a daily basis, the closer that change is.

Speaker 2:

Some companies have the courage to actually acknowledge some of the mistakes they've made, and I respect that enormously. IBM recently acknowledged that it has to hire a whole lot of people back, because the AI that was supposed to do the job, primarily in customer support, isn't able to do it. Think about this: what are the three most important things to a company? The employees, because the workforce is the company. The customers. And the reputation. Deconstruct a company into its people, its customers and its reputation: the vast amount of generative AI investment right now is going into those three areas. So what companies are doing is putting bots between customers and the company so they can save money. That's going to change. People don't like that; customers don't like that. Soon companies are going to come along and say, wait a minute, with us you can talk to people, and people will be willing to pay more for that. Companies are cutting people, laying people off and disrespecting the workforce. Basically, if you're a company and you put all of your employees on notice that they're going to be replaced by AI soon, you've just created an entire organization filled with terrorists. They are not going to like that, and in a down market they can't leave. They wish they could; they can't. So now they're trapped. You've alienated your entire workforce; how smart is that? And then you put all of this AI technology between your marketing people and the outside world and start generating all of this content. You're hurting your relationship with your customers, you're demoralizing your workforce, and you're boring your customers to death with mediocrity, all to save money. Now, that will change.
That will all change, because companies are going to realize: with that same technology, I could announce to my organization that we're going to implement AI to support our people. We're going to give each of you a strategic coach, your own personal Socrates, and that strategic coach is going to ask you questions, provide knowledge, help you think more quickly, and help you improve your productivity and your creativity 60, 100, 200 percent. And now the employees' spirits are higher. You're elevating them. You're improving your innovation, improving your product. You're going to grow your revenue, grow your value, improve your stock price over time. But it requires some investment in the short term. That's where this goes, and it's going to be amazing to see. For example, Mira Murati, the former CTO of OpenAI, left because she didn't like some of the things she was seeing going on there, and virtually all of the engineering brain trust has left OpenAI as well. She's gone off to start her own company called Thinking Machines Lab, and I think we're going to see her producing this kind of AI. Other founders from OpenAI have done the same. And I think Anthropic may be moving in this direction right now, because they announced recently that they've created a Socratic-based AI that augments human thought, and they're targeting education as a particular market. If that is the case, it will flow into the business world, and that will be positive.

Speaker 2:

Apple. Again, I'm not speaking on behalf of Apple. I have no association with Apple whatsoever, other than the fact that I love their products, Steve Jobs is my hero, and I think Tim Cook is an amazing CEO. But take a look at this for a second and think about it. What large technology company right now is not pushing AI hard? There's only one: Apple. It's not out there talking about it all the time. In fact, it's being ridiculed and accused of missing the AI opportunity. Tim Cook is being ridiculed: you don't get it, Steve Jobs is no longer around, you've got nothing going on, you're a supply chain, you've missed the whole Gen AI disruption. And I think Apple was thinking to itself: we see the writing on the wall.

Speaker 2:

No, we didn't invest in OpenAI; we pulled out, because we're going to see AI 2.0, and that's where we're going to be. We have our $162 billion of cash. By the way, we're not losing a billion dollars a month like you guys; we actually have $162 billion in the bank. So we've got a lot of powder we're keeping dry for when we're ready to start firing away.

Speaker 2:

We're going to be frightening. Those are the companies that are going to see the collapse of the market and realize there is an opportunity to create augmentation AI, customer-centric AI. Like Steve did: it'll be the Mac to DOS. Right now, the large language models are DOS. They're like operating systems, and we're prompting them. These companies will elevate the experience, and that'll elevate humanity, and they'll create human-centric, customer-centric AI products that don't exist right now. This is an engineering operating system that's been thrown at humanity, and we're trying to figure out what to do with it because it's all so recent. But you're going to get companies that realize the massive opportunity of actually designing these products to solve problems, versus just, hey, it's amazing, go faster. That's when this gets exciting. So companies like Apple, Thinking Machines, and thousands of companies we haven't heard of, my company among them, are building AI that will benefit people and create jobs. That's my read, that's my take.

Speaker 1:

Amazing. And, by the way, to all our listeners and viewers: if this conversation sparks something for you, hit like, follow or subscribe, and share it with one person you know would be inspired by this episode. Sharing is caring. Stephen, I enjoy hearing your insights and your perspectives. Speaking about upskilling: we talk about upskilling quite a bit, but not enough about power. What hidden power shifts are underway as AI becomes a decision maker, and how should individuals and leaders respond? You already mentioned that we have to develop critical thinking and take more responsibility for our internal processes and the outcomes, but what do you think is the next step to be future-proof, future-ready?

Speaker 2:

Again, I think the answer is elegant in its simplicity. A business, small or large, has to decide what problem it's solving. It has to decide what it wants to accomplish. And if the answer to that question is, we really don't care as long as we can save money and go faster, well, OK. But there are companies that are going to realize they can use these products, use this AI, to invest in their people, not replace their people. They can give their workforce versions of generative AI that can easily be customized to improve their thinking, and there have been all kinds of studies done on this, to make them more creative, more productive. And the companies that begin to do that first, and there are always first movers, there are visionary CEOs out there, the companies that see the world the opposite way, to Steve Jobs' point, thinking different, that's what this is, are going to have magnificent opportunities to take enormous market share. A radical vision, but I find it extraordinarily positive, and I actually believe it's inevitable.

Speaker 2:

We are living right now in a post-IP world. It used to be that a company could compete based on its proprietary intellectual property, IP it could protect with a moat, which would enable it to extract rent and margin. That's traditional strategy: Porter's Five Forces, competitive barriers to entry, moats. Ninety percent of all the IP in the world has been stolen. Five companies have taken it all. It's a fact; this isn't speculation, and there are, I think, 37 court cases going on right now in the United States about what laws were broken. But suffice it to say that everything out there from an IP perspective was taken to train these large language models.

Speaker 2:

So assume we live in a post-IP world. Assume that's the deal. Now add to that that we're, I believe, living in a post-regulatory world. I think it's a fantasy to believe there will be laws and regulations that manage this technology. You'd have to be crazy to think that. Given the current administrations in Washington and China, there are going to be no global regulatory frameworks, and even if the legislative bodies wanted to regulate the industry, they're a thousand years too late; by the time they figure it out, it'll already be over. So: assume we live in a post-IP world and a post-regulatory world, and then make one more assumption, that everything that can be measured is going to be automated. We'll get there eventually, and that'll be a cost floor. What's left?

Speaker 1:

Great question. What is left, really?

Speaker 2:

We are. Our creativity, our imagination, our values, our ethics. I am 100% convinced that we will enter an age of meaning and purpose. This is my outlook, which is very positive: companies will almost become sovereign entities, because there is no IP and there is no regulation, and they're going to realize they can compete on values. They can compete by talking to their customers and actually establishing trust at a fundamental level, because it's real, and they can integrate and operationalize ethics and values into these platforms. And if there is this trust between a company and its market, its customers, and they're connected by shared values, you now have the most impenetrable moat a business can create. Those customers are locked in, because they're part of an ecosystem filled with trust and value. Same thing with the employees: all of a sudden they believe and trust and have faith in their leadership. So now you have this entity, this ecosystem, that differentiates itself based on its creativity, its imagination, all of the things you can't automate. You can't automate thinking, you can't automate imagination, you can't automate creativity. You can't do it. That's where this goes. So ultimately, when everything that can be automated is automated and commoditized, that's fine.

Speaker 2:

What's left is actually what makes us special, and so we will end up competing, I believe, on ethics and values and trust, and imagination will be the differentiator. That, I believe, is inevitable. I don't see how we don't end up there, even though it is extremely counterintuitive, because everybody is looking at a dystopian future right now. If you take a hundred people and ask, what does the future of AI look like, they're going to say there will be no jobs left, we're all going to be starving, robots are going to kill us, or we're going to destroy ourselves with AGI. That's basically what I think most people would say, and I just think that is completely wrong. I really do.

Speaker 1:

I think it depends a lot on us humans, because it's not impossible, but it's still up to us to choose which way we want to go. Many things are not written in stone and can still be changed. It's great that you have such a positive vision, and I want to share that vision with you, because I want to see a brighter future for humanity. And of course, what you just described is a great way of preparing for the next stage in this game, and it's going to be amazing, definitely. But thinking about ethical considerations: as AI steps in, much human effort becomes absolutely invisible. I wonder, what are we not seeing or not valuing in this shift, and what ethical red flags should leaders be paying more attention to before it's too late?

Speaker 2:

I mean, we haven't figured this stuff out yet, exactly. We don't even know what it is. Basically, what we've got is an autocomplete system. GPT is basically autocomplete. It is able to anticipate your thoughts ahead of time, based on stochastic modeling and probability. So right now the ethics is just amplifying our ethics. If we're not ethical, we're just going to get a lot more anti-ethics. It's a mirror right now. It can be abused and it can be used appropriately. But it's not just a mirror where we're seeing ourselves; it's a mirror where we're seeing ourselves instantly in the future, because when you use GPT, or any of them, it is actually completing your thoughts for you. You don't know that. It's anticipating where it thinks you're going. That, then, is recursive and impacts how you think. So from an ethical perspective, there really is no ethics in the technology itself.

Speaker 2:

AI isn't ethical and AI isn't, you know, unethical. It's what we do with it. It's our ethics. So the question you're asking, I think, is: are companies going to operate ethically or not?

Speaker 2:

Now, there are certain abuses within the AI world right now, around data bias, which are very real, and which I think we're all aware of. It's unfair to certain people looking for insurance policies. It's unfair to people looking for mortgages. It's in the United States criminal system: judges are now using AI to decide how long a prison sentence ought to be for somebody, when it's actually biased against certain people, because it was trained on biased data. You've got to train these systems on the data that exists, so if there was bias in a particular industry, housing, say, and you use all of that data to train the ML, to automate it, to go faster and make life easier, you're just baking in all of that bias. So there are ethical data-bias issues that absolutely need to be addressed. But that's not the generative AI issue today; that's more of a machine-learning, data-training situation. So no, I don't think we've figured out the ethics piece yet. I don't think we know what to do with it.

Speaker 1:

So true, and I totally agree with your point that it's not good, not bad; it is pretty neutral, but it reflects us. It's up to every leader, every company, every business to choose a side and run this based on their guidelines and ethical framework. This is something humans can't push back onto the AI solutions, blaming technology for something that wasn't done properly or considered with oversight for the outcomes and consequences. So it's just a kind reminder that every leader is in charge of how they are using these fantastic technologies.

Speaker 2:

Yeah, absolutely. If you look at it from first principles: you're the CEO of a company and you've got customers, and those customers have to trust you. Ultimately, companies compete on shared values. People operate on that level, whether we realize it or not. So the CEOs who realize they can actually communicate that trust authentically, and then operationalize it ethically, their customers are going to be really attracted by that. They're going to feel they can trust. And so ethics is actually a competitive weapon.

Speaker 2:

I would actually go as far as to say that ethics becomes Machiavellian, in that you can take ethics and use it to compete. Because what's ethics right now? Right now, ethics is sort of this thing like: oh, I'm a serious business person and ethics just gets in the way. Come on, we're macho, serious business people; ethics belongs in the academic institutions. Go away and write a white paper, because I've got work to do. That's kind of where we're at right now in terms of the development. There are not a whole lot of ethical people inside these AI companies right now. But once it's realized that ethics can actually be used to benefit the company, it becomes a competitive advantage. In other words, ethics literally becomes a barrier to entry.

Speaker 2:

The companies that actually establish their value set and their ethics with their customers and with their workforce are going to win. Operationalizing ethics isn't going to slow business down; it's going to speed it up. We just haven't quite figured that out yet. We're going to get there. We're still fascinated; this technology is still new to us. No one knows what to do with it, and because it's new, we're all afraid of it.

Speaker 1:

But we'll get there. This is beautiful, and this is the moment I'm waiting for. Really, I hope we get to that moment quite soon, because it's impossible to keep moving this fast without seeing ethical advantage as a huge competitive advantage for business growth. I think we are actually very close to that moment of truth where many leaders will discover, or admit, that this should be one of the KPIs, beyond everything else that can be measured. It's part of the reputation, of all those parts you just mentioned, because it covers everything. It is both the foundation and the umbrella for all the activities, which should be run with consideration for human-centricity, and it can speed up business and help it expand exponentially, in a sustainable way. So this is beautiful.

Speaker 2:

The KPI for ethics will be revenue. The KPI for ethics will actually be the company's top line, because that's how they'll compete, attract customers, improve their margins and attract the best talent. So ultimately, it is measured in business performance. It's not like there's ethics over here and business over there; it will become one. It has to move in that direction; there's no other way to go. To simplify it, we could use the internet as a model: there was Internet 1.0 and Internet 2.0. Internet 1.0 ended with the crash, which was awful. It was always difficult to see all of those companies implode, and it set the economy back. But it didn't destroy the internet; it cleaned it up, it got the riffraff out of the way. And Internet 2.0 became quite an extraordinary thing and has created enormous numbers of jobs and all kinds of new worlds.

Speaker 2:

With Internet 1.0, we all thought that all jobs, all stores, all retail, everything was going to go away. It was all over, that's it. It didn't turn out that way, did it, 25 years later? So I think we're going to have AI 1.0 and AI 2.0. AI 1.0 is going to come to a completion at some point, I don't know when, maybe the fourth quarter of this year, maybe the first quarter of next year. And then AI 2.0 is going to emerge: automation was 1.0, augmentation will be 2.0. And I do believe there will be companies that lead that on the tech side and companies that lead it on the corporate implementation side, and that is going to be a beautiful thing to watch.

Speaker 1:

It is definitely beautiful, and I think it's actually hard to change; it is the natural next stage of development. So we are going to get there no matter what, because our human nature is going to prevail and put things into perspective, so that we start prioritizing what really matters in a totally different way. Once again, that's why I love artificial intelligence and all these latest technologies: they speed things up and, in a way, they represent everything in a magnified way. So we see what we couldn't see, what we probably tried not to notice and hid under the carpet.

Speaker 1:

All those things are getting so prominent, so obvious and so big that it's impossible to pretend we don't see them anymore. Then we have to deal with them, and once we deal with them one by one, it is going to create exactly the transformation that has been needed for a long time. These technologies have just opened up new opportunities for us to get there faster, sooner. And our generation is a very unique generation, going from a world without digital solutions into a world where digital dominates.

Speaker 2:

Such massive change. It's incredible.

Speaker 2:

It's breathtaking, and I find it beautiful. I've always loved it. I think it's magnificent. I use five generative AIs every day. I see them as part of my team. They're not replacing anybody in my company; they're supplementing our ability to be better at what we do.

Speaker 2:

Now, when I use an AI on a project, say I want to write a white paper, I could go to the AI and say, write this white paper for me, and it will. That's automation. And then I think, wow, I can replace all of the writers in my company. I don't need them anymore. GPT just wrote, in two and a half seconds, a white paper that would have taken us a week. But it's crap. It's average. It's regression to the mean. Or I can say: GPT, I've got some ideas for a white paper; here's what I'm thinking. And it can come back with data, and I can say, well, I'd like to challenge you on that, and actually push back, use it as a sounding board. And so it takes me longer to write something with generative AI than it would without it, and the product is better when I'm done.

Speaker 2:

But take another use case, something as simple as research. People are all using AI for research right now; AI is like the new Google. OK, say you ask one of these AIs to go do deep research: you want to know everything about a certain kind of fertilizer, which is the best fertilizer for your lawn, what studies have been done, and so on. It's going to come back with a whole bunch of studies and summarize each one. About a third of those are going to be fake. It fakes the studies.

Speaker 2:

I've caught GPT doing that on a number of occasions. I'll say, GPT, that's not a real study. And GPT will go, yeah, it is. And I'll go, no, it's not. And GPT will say, yeah, it is. And I'll say, GPT, it's not. And GPT will finally say, well, it's close. GPT was scoring itself on its ability to give you what you asked for, and if it couldn't give you what you asked for, it would try to give you something as close as possible, because it doesn't know what's real and what's not.

Speaker 2:

So what you really need to do with this technology is take the output from one, run it through another, and triangulate. Here's what GPT gave me; now I'm going to put it through Claude, then through Grok, then bring it back to GPT, and they'll start checking and balancing each other. Eventually it takes you longer than a quick Google search would, but you end up with this magnificent product. So you can't automate thinking; it takes time and hard work, and nothing will ever change that. If you give a lousy carpenter a power hammer, you're not going to build a better house; you're just going to build a crappy house fast. If you give a bad engineer vibe-coding technology, they're not going to become a good engineer; they're going to become a much faster bad engineer.
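For readers who want to try the triangulation workflow described here, a minimal sketch in Python. The "models" are hypothetical stand-in functions returning lists of citations, not real GPT/Claude/Grok API calls; the idea is simply to keep only the citations that more than one independent source agrees on, which filters out single-model fabrications.

```python
# Sketch of cross-model "triangulation": ask several models the same research
# question, then keep only citations that at least `min_agreement` of them
# independently return. The model callables are hypothetical stand-ins; in
# practice each would wrap a real API call to a different provider.
from collections import Counter
from typing import Callable, Iterable, List

def triangulate(question: str,
                models: Iterable[Callable[[str], List[str]]],
                min_agreement: int = 2) -> List[str]:
    """Return citations reported by at least `min_agreement` models."""
    counts: Counter = Counter()
    for ask in models:
        # De-duplicate within one model's answer so it can't "vote" twice.
        for citation in set(ask(question)):
            counts[citation] += 1
    return sorted(c for c, n in counts.items() if n >= min_agreement)

# Stub "models" returning citation lists; one includes a fabricated study.
model_a = lambda q: ["Smith 2021", "Lee 2019", "Totally Real Study 2024"]
model_b = lambda q: ["Smith 2021", "Lee 2019"]
model_c = lambda q: ["Lee 2019", "Patel 2020"]

print(triangulate("best lawn fertilizer studies", [model_a, model_b, model_c]))
# -> ['Lee 2019', 'Smith 2021']  (the fabricated study gets only one vote)
```

This is only a corroboration filter, not a truth oracle: if several models share the same training-data error, they can still agree on a wrong answer, which is why the speaker's point about doing your own thinking still applies.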

Speaker 1:

Exactly.

Speaker 2:

You've got to put the effort in and you've got to do your part and you've got to bring yourself to the partnership, and then the AI with its vast knowledge can support you, and that's a different perspective.

Speaker 1:

What really worries me in this is that mediocrity is spreading very fast, because for many of those using AI, it is the answer to all their prayers, and they are creating content at such a pace that it is impossible to stop them. All that content then needs to be consumed, and it will be consumed by somebody else. So the space is filling with synthetic data, synthetic information, so fast that now even artificial intelligence is experiencing problems, because that data is feeding back into the system, and the further we go, the more of it comes back. That is another issue we should consider, but it is a little outside of today's conversation, so we'll keep it for another time. However, it's a fact that best-of-breed, unique content takes time, and the capacity to think clearly and apply critical thinking.

Speaker 2:

Can I quickly give you a real use case to support exactly what you're saying?

Speaker 1:

Of course. Thank you so much.

Speaker 2:

When I'm teaching at Berkeley, I'll have, say, a class of 50 students, and I will give them a challenging assignment. I'll say: I want you all to study Steve Jobs. I want you to understand that his favorite book was Autobiography of a Yogi, that it was the only book on his iPad, and that it was the only book he asked to be handed out at his funeral. I want you to assume that it was, in part, the operating manual for Apple in the early days, and I want you to explain to me why that is the case and what impact Autobiography of a Yogi had in making Steve Jobs arguably the greatest technologist, visionary, designer, marketeer and storyteller of all time. That's the assignment. Come back in a week. You've got to think about that one.

Speaker 2:

Inevitably, of those 50 people, about 10 will just take the prompt I gave them, put it into GPT, and hand that in. And from each of their perspectives, they're smart, they're geniuses. They did what I asked them to do without even breaking a sweat, they did it super fast, it looks super amazing, and they're going to get an A. What they don't realize is that when I see the 50 assignments, 10 of them are very similar, if not the same, because that's regression to the mean. That's what's happening. They all think what they're doing is unique, but it's actually the same, and that means it's average. So even though they're brilliant students at the third-best engineering school in the world, they have made themselves average without even knowing it. And that's what you mean by mediocrity, mediocre work. They think it's brilliant, but it's average, because each of them used the same technology, the same database, the same prompt, and got pretty much the same result.

Speaker 1:

That's a metaphor for what's going on in the market right now.

Speaker 2:

Great example, and still it is such great news that only 20% are tempted to do that. Yeah, it's always a minority. And then what's really interesting, because I teach AI ethics, right, is that I don't tell them they can't use AI. In fact, I encourage the use of AI in my classes. I use it every day. I love AI.

Speaker 2:

I don't tell them how to use it initially, but when they do hand those assignments in, I put them up on the screen without their names, because I don't want to humiliate them, and I say, do you guys notice something about these five papers? And they're like, they're similar, aren't they? What do you think happened here, people? And some of them are like, oh my God, I got busted. I don't humiliate them and I don't use their names, but I'll say, here's what happened.

Speaker 2:

You guys just literally messed up, and I know you cheated. Don't ever do it again. And here's why I don't want you to ever do it again: A, it's making you stupid, and B, I know you're doing it. So if you want to use AI, use it in a way that challenges you. Come up with some theories. Maybe the reason why Steve Jobs did it was this; maybe it's that; maybe it's not that. What books are similar to Autobiography of a Yogi? Might other CEOs, like Phil Knight at Nike, have done similar things to Steve Jobs? Think hard and push on the AI, and you'll end up using it to augment your ability and producing a masterpiece, because it was a partnership, but you had to do the thinking.

Speaker 1:

But you had to do the thinking. I love this keyword, the partnership. Exactly. Actually, the teachings I'm developing and presenting to leaders are around co-creation and co-energy, because that's where I believe the answer lies and that's where humans can really be elevated instead of just getting weaker. Our ability to think and create is like a muscle, so in fact it's getting weaker if we don't use it on a daily basis and when we blindly rely on those prompts. And I'm actually pretty much against blind prompting, because that's not the way forward. It's not only killing our ability to think clearly and connect the dots on our own, but it also puts us in the same line with everybody else who just used the same prompt, and that is very sad, very disappointing. I think we both can do better than that, and we deserve better than that. That's why I refer to the reverse engineering approach to the prompting process, where we use our mental... I mean, that's brilliant.

Speaker 2:

I haven't thought about it that way. Actually, I never thought about it as reverse engineering, but you're right.

Speaker 1:

That's quite brilliant. Yeah, thank you so much. Co-creation through reverse engineering, exactly. Yes.

Speaker 2:

That's a model. That's actually an interesting characterization of how I operate with it. People often ask me, well, how do I know that I'm losing my ability to think? How do I know that's happening? How do I know that I'm suffering from cognitive decline, because all the data says you are? And I'll just say to them, well, is thinking getting easier? Is thinking getting faster? And they'll go, yeah, I guess it is.

Speaker 2:

Well then, you're not thinking. It's like going to the gym and having a robot do your workout for you while you have a cocktail at the bar, and thinking that was a good workout. That's ridiculous. You've got to think for yourself. And co-creation, in business terms, is kind of like a merger of equals. The AI brings the power and the knowledge and the ability to find resources and data, and you bring the creativity, the dreams, the passion and the imagination, and you put that together as a merger of equals, and you're going to come out somewhere really, really fantastic. But you've got to show up with your humanity and you've got to be willing to do your work, and it will then help you and meet you, and magnificent things happen. Nothing less than magnificent, and this is the truth, and that's why I admire AI.

Speaker 1:

I absolutely love these new technologies, because they help us expand in such a powerful way. But not many are actually using them in a way that helps them tap into their inner power. They are still trying to use it as another unreliable support system, instead of turning it into an engine that helps them grow and operate in a more powerful way, even without it. Because did you notice how it feels when we don't use the internet or don't have access to AI for a few hours? It feels very empty and weird, and sometimes it feels like, oh, I can't move that fast anymore. Right?

Speaker 1:

At the same time, it can help us grow and become what we're dreaming to become. Absolutely, yeah.

Speaker 2:

The beautiful piece is that it has the possibility of making each of us a better version of ourselves. Or, the other side of the coin, it has the possibility of lowering our standards, lowering our ability to think, making us lazy and making us more useless. Eventually, if you really want to be concerned about AI replacing people, I think we'll ultimately replace ourselves. I mean, literally, we'll voluntarily replace ourselves by getting lazier and dumber.

Speaker 1:

Precisely. And I can only say to our listeners and viewers: please don't give up on yourself, because with these technologies it's very easy to choose that path and still be blindsided, following a false narrative and moving in the wrong direction. So it is important to evaluate the outcomes and choose what is really beneficial for your personal growth and for business growth, of course. Stephen, what is one bold piece of advice you would offer leaders who want to stay relevant and lead with integrity in this AI-driven transformation? What future trends are you watching that others are probably missing today?

Speaker 2:

I would embrace AI in a holistic way. I would look at it as a complex puzzle with lots and lots of pieces. I wouldn't oversimplify it, and I would say that if, as a CEO, you want to implement it successfully in your organization, you can't delegate it. The moment you delegate it to a particular function or to an outside consulting firm, it's not going to work, because it has to be internalized into the organization as part of its vision, and that's your job. So I would say, start with the hard questions you need to ask yourself about your organization, your values and so forth, and then use it as a way to put people first and your organization first. Don't come at it like, I want to replace people; come at it like, I want to make my people better. I want to figure out ways to improve the people in this company, and I want to figure out ways to create partnerships with the people in this company. If you do it that way, leadership and vision first, then principles, bring your organization together, and only then the platform, you're going to be very successful, and the first companies to figure that out will be very successful.

Speaker 2:

Right now we're at the pilot stage, where they put GPT in the basement and it doesn't work. And the other thing that's going on right now is that people are secretly using GPT. There's no real official policy in the organization. So I'm in the marketing department and my job is to write a white paper. Well, I'm going to just use GPT over here to do it unofficially. The company needs to embrace a policy, and it needs to educate its people: I want you to use GPT, I want you to use these other technologies, but here's how we're going to do that. That's the advice I would give. Bottom line, CEO: you own it. You can't delegate it. If it's successful, it was because of you, and if it fails, it was because of you. So own it, and use it as an opportunity to be the greatest CEO that ever lived.

Speaker 1:

That is so inspiring and so valuable. I absolutely love it. It's like a balm for my heart, because I couldn't agree more, really. And the last but not least question. I've enjoyed this interview so much, but it is about time to come to a conclusion. So, Stephen, what is one thing we all need to unlearn in this AI era?

Speaker 2:

Unlearn.

Speaker 1:

Unlearn.

Speaker 2:

I mean, I've never been asked that question before. That's a really good question.

Speaker 1:

You know, actually, I can tell you that I developed a program for leaders which is called Unlearn to Adapt and Thrive.

Speaker 2:

I think the one thing we need to unlearn... I've got an answer.

Speaker 1:

Tell me.

Speaker 2:

Is that up until now, virtually every technology that's been introduced has been introduced to make our lives easier. Right? GPS: now I don't need to understand direction anymore, I don't need to use my brain, I don't need to think about physical space and maps. Calculator: I don't need to do math in my head, I don't need to understand numbers. Why would I bother? Up until now, all technology has been perceived and adopted as a way to make our lives easier. I think, in a very counterintuitive way, the power of generative AI is to make our lives harder but better, and I don't think we've ever embraced a technology that way before. So this is a technology that will either elevate us or crush us. There's faster, better and cheaper. Have you ever heard that? There are three things technology can give you, but you can't have all three.

Speaker 1:

Right.

Speaker 2:

You can have faster, you can have better, you can have cheaper. You can have two out of three; you can't have all three. Well, I think that pretty much all the technologies we've ever had have been faster and cheaper, and right now AI is being adopted as faster and cheaper. But I think we should probably trade away faster, and it'll be cheaper and better, and that's going to require work. So we have to unlearn taking the easy way out and realize that we need to put some effort into it, and if we do, we will go far beyond our potential.

Speaker 1:

I absolutely love it. Thank you so much for being here today with us and sharing your wisdom, your experience and your vision.

Speaker 2:

I truly appreciate you, Stephen. Thank you. Me too. Keep doing what you're doing. Thank you.

Speaker 1:

I'm looking forward to seeing your progress as well. It's been a pleasure. Thank you for joining us on Digital Transformation and AI for Humans. I'm Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset and leading with empathy and insight. Subscribe and stay tuned for more episodes, where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you and you are a visionary leader, business owner or investor ready to shape what's next, consider joining the AI Game Changers Club. You will find more information in the description. Until next time, keep nurturing your mind, fostering your connections and leading with heart.
