The Promise and Peril of AI | Howard Getson
Transcript of Howard Getson interview by Alan Olsen, Host of the American Dreams Show “The Promise and Peril of AI”:
Alan Olsen: I’m visiting here today with Howard Getson. Howard, welcome!
Howard Getson: Hi, Alan, how are you?
Alan Olsen: Good. So Howard, you've been a guest on the show before, and I wanted to get a little update. You know, we've had a tremendous amount of change in the world of AI technology. Let's spend a few minutes talking about how a person manages change in this world, and what impact you see happening right now as AI unfolds: the impact it has on individual jobs and displacement.
Howard Getson: Yeah, what's really interesting is how much of the discussion about AI focuses on the promise, but then there's the peril; it's like two sides of the same coin. I actually don't use the word AI in any of our marketing, because so many people have dystopian, scary thoughts about it. One of the things I believe is that AI, or what I call amplified intelligence, the ability to make better decisions, take smarter actions, and continually improve performance, is the compass heading for the world for the next 25 years. It's like electricity or the internet: it's going to literally change everything, but probably not in the way you anticipate. I go back to the mid-to-late 90s, when Steve Case was sending out AOL discs in every magazine and I was the CEO of a tech company. I knew the internet was going to be big; I absolutely, positively knew what was coming was inevitable. Except I couldn't have predicted almost any of what has happened. Who would have thought CompUSA would go out of business, or that movie theaters, cable TV, and record stores would be displaced, or that you could buy a car online and people would choose to do it, or that Tesla would now use the internet to update cars and add new features? It's just crazy how what was a capability becomes a product, becomes a platform, and it changes everything.

So AI, or amplified intelligence, is going to take away a lot of the discretion that humans used to have, except it's not a zero-sum game where it gains discretion and we lose it. We're actually freed to do something more. Think about the agrarian economy, when everybody was a farmer. You woke up when the sun rose, you went to bed after the sun set, and you worked. Then electricity and machines, the Industrial Revolution, changed things, and it's not like everybody is out of work. It's Maslow's hierarchy of needs: as people have food and shelter, they can focus on things like affiliation and learning. And when they have food, shelter, affiliation, and learning, they can focus on higher-level things like love, and then self-actualization. This is actually going to free people to truly focus on their purpose.

I know you're in Strategic Coach; Dan Sullivan has this great tool called the Lifetime Extender. You try to imagine the last couple of years of your life, and you think about who you're going to spend time with, what you'll focus on, what you want your legacy to be. Once you've got that done and you think, all right, that was pretty good, I feel good about it, then the trick is: all right, there's a major technological innovation, and you get to live a whole lot longer. How much is that extra time worth, and now what are you going to spend it on? AI is not only going to give you more time to live, because it's going to dramatically change healthcare and the ability to fix things that are broken within you. It's also going to give you a lot more time and a lot more agency to determine what really matters to you and how you spend your time in pursuit of those things.
Alan Olsen: You know, it's interesting, and then the pandemic comes along, and COVID, and I think it challenges foundations. In other words, a lot of people were thinking, well, my foundation is that I want to grow my company 30%, or I want to make a boatload of money, and all of a sudden something comes along and says, well, maybe not. How does a person navigate change in the face of adversity? You covered it a little bit before, but I'd like to circle back.
Howard Getson: Well, you know, sometimes necessity is the mother of invention. So many times in my life, or as I look at other people, the thing that looks like the worst thing that ever happened to you is actually one of the greatest gifts; there's a hidden silver lining. And I think the pandemic is going to be looked at like that for generations, because it created new possibilities. It's not that the possibilities weren't there before; they weren't there for you, because you didn't consider them. Some of it is simply the reliance on remote technologies like Zoom. If I think about how many Zoom meetings I had in the beginning, I had to develop a new muscle; it's almost like figuring out how to do a video or a podcast. At its heart, it's really tactical empathy: how do you stop focusing on what you're going to say and actually pay attention to who's on the other side, so you still get those micro cues, the facial expressions, and can respond organically, rather than two people who sound like they're talking to each other but are really having two conversations on different levels, talking at each other. The same is true for our remote teams. So much of my company's culture was based on strategic stumbling, the ability to bump into people in the hallway, check their energy signature, their facial expression, and say, what's going on? Or, hey, why don't you and I go to lunch? Remote work has actually freed me to do that with a lot more people, but it's smaller, more targeted, more intentional; you actually have to become more intentional.

And I think that ties back to AI as well. AI is not currently good enough to be a general intelligence; I can't press an AI button and say, I'm going to AI my company. On the other hand, it's now so good that it can do almost anything specific. So if I look at a company, one of the first things I do is imagine that there's a beginning, a middle, and an end to every process. What process do we want to make better? What is the first thing we do? Like any good recipe, it's not just the ingredients, it's the order and the intensity. Part of what AI can do is break these functional components apart and, for each component, produce an answer. An answer, not the answer, is actually the beginning of a much better conversation.

AI is a lot like what in spiritual disciplines you'd call mindfulness: it's the perspective of all perspectives. As I look at a functional component, we could solve it arithmetically, statistically, using game theory, behavioral economics, or common sense. Then you have a different layer of AI looking at those different solutions and asking, but which one really accomplished what you want? And this becomes a lot like a person, because how do you determine what's best? Was it the thing that was easiest? The thing that had the highest return, that rang the bell the loudest? The thing that had the best risk-adjusted return? Ultimately, most people come down to three common ways of looking at it: efficiency, effectiveness, or certainty, meaning getting something done with less effort, in less time, or with more certainty. Once you figure that out, AI is great at saying, this is the one that has the highest real-time expectancy score.
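To make that concrete, here is a minimal Python sketch of the pattern Howard describes: several techniques each propose an answer for one functional component, and a meta-layer blends efficiency, effectiveness, and certainty into a single score before choosing. The class, the weights, and the "expectancy score" formula below are illustrative assumptions, not Capitalogix's actual method.

```python
# Hypothetical sketch of a "perspective of all perspectives" meta-layer:
# each technique proposes an answer, and a second layer scores the
# candidates on efficiency, effectiveness, and certainty to pick a winner.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # which technique produced this answer
    answer: float      # the proposed answer for this component
    effort: float      # cost to compute (lower is better)
    payoff: float      # estimated benefit, normalized 0..1
    certainty: float   # confidence in the answer, 0..1

def expectancy_score(c: Candidate, weights=(0.3, 0.4, 0.3)) -> float:
    """Blend efficiency, effectiveness, and certainty into one score."""
    w_eff, w_pay, w_cert = weights
    efficiency = 1.0 / (1.0 + c.effort)  # cheaper answers score higher
    return w_eff * efficiency + w_pay * c.payoff + w_cert * c.certainty

def choose_best(candidates: list[Candidate]) -> Candidate:
    """The meta-layer: compare every perspective and keep the winner."""
    return max(candidates, key=expectancy_score)

# One functional component, solved several different ways:
candidates = [
    Candidate("arithmetic",  42.0, effort=0.1, payoff=0.60, certainty=0.9),
    Candidate("statistical", 40.5, effort=0.5, payoff=0.70, certainty=0.7),
    Candidate("game_theory", 44.0, effort=0.9, payoff=0.90, certainty=0.5),
    Candidate("behavioral",  41.0, effort=0.4, payoff=0.65, certainty=0.6),
]
print(choose_best(candidates).name)
```

Swap in whatever scoring you like; the point of the sketch is that deciding what counts as "best" is its own modeling layer, separate from producing the candidate answers.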
But the cool thing is, as you start to use AI, you realize that it changes. Even though I said this is the best way right now, it's not necessarily the best way as things change. It's really almost a parable for life, because so many times you're optimizing based on what you thought was right, but that was for who you were. There's a saying that a man can't bathe in the Ganges twice, because the man is never the same and neither is the river. So as you reevaluate what you want from a different perspective, as you become more enlightened, or more fulfilled, or wiser, you realize you might want different things. That's really where AI shines: being dynamic and adaptive, turning things on and off situationally.

It actually makes me realize some of the promise and the peril of AI. If I showed you our very best AI technique, it looks like magic, it looks like cheating; it's so good that people would say, oh my gosh. And yet, if I compared it to a human, it would be profoundly autistic. I say that because it's not empathetic; it doesn't recognize what's happening around it, next to it, above it, below it. It only does what it does. So I believe one of the next steps of evolution here is to make it more humane. By teaching it to communicate, coordinate, and collaborate, you're going to have to start to build in tactical empathy, forgiving, forgetting, the concept of altruism. A more evolved, older intelligence should spend some of its time training the younger intelligence, which may have more skill but less experience. And you're going to find that things get better.

Evolutionarily, almost every successful species has a leader. The same is true for companies and for religious groups, and obviously it's going to have to be true in AI as well: there's going to have to be a governing principle, there are going to have to be things that are more true. And as we start to find things that we believe are more, and I'm using air quotes here, "true," how do we save them so that every new system doesn't have to rediscover them? It's incredibly inefficient to have every new system relearn everything in the world. There has to be some way to share some of what we know, so that each system is focusing on the edge of new discovery, the manifest destiny of what's possible.

Anyway, it's really interesting, and it means that, unlike most businesses, there's tons of green space; almost everywhere you look, there's opportunity. The question is, what do you intend that opportunity to be fuel for? What is it going to feed? There are some countries, and I'll just use China as an example, that have a totally different moral or ethical set of standards, and they'll do things that we would think are wrong. They can train AI by having hundreds of cameras look at a prisoner 24 hours a day, 365 days a year, so that the AI starts to say: the prisoner is getting antsy, the prisoner is about to urinate, the prisoner is getting sleepy. We would find this a horrible invasion of privacy, even of a prisoner. But as they train models that way, they're going to use them differently than we would, too. The same is true for some of the bioethics, where they might use CRISPR for things that we would never consider using CRISPR for, because in their culture, in their set of rules, it's just not wrong.
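A hedged follow-on sketch of that "dynamic and adaptive, turning things on and off situationally" idea from above: re-score the techniques on every new window of data, so the winner is allowed to change as conditions change. The three toy forecasters and the one-step error measure are, again, illustrative assumptions only.

```python
# Hypothetical sketch: three toy "techniques" forecast the next value of a
# series, and at every step we re-pick whichever technique predicted the
# most recent value best, so the winner changes as conditions change.
import statistics

def persistence(window):   # naive forecast: tomorrow looks like today
    return window[-1]

def window_mean(window):   # mean-reversion style forecast
    return statistics.mean(window)

def linear_trend(window):  # extrapolate the average step
    steps = [b - a for a, b in zip(window, window[1:])]
    return window[-1] + statistics.mean(steps)

TECHNIQUES = {"persistence": persistence, "mean": window_mean, "trend": linear_trend}

def rolling_winner(series, window=5):
    """Yield (step, technique) pairs, re-selecting the best technique at
    each step based on its error predicting the previous point."""
    for t in range(window + 1, len(series)):
        errors = {
            name: abs(f(series[t - 1 - window : t - 1]) - series[t - 1])
            for name, f in TECHNIQUES.items()
        }
        yield t, min(errors, key=errors.get)

# A series that trends up, flattens, then falls: watch the winner switch.
series = [0, 1, 2, 3, 4, 5, 5, 5, 5, 5, 4, 3, 2, 1, 0]
for t, winner in rolling_winner(series):
    print(t, winner)
```

In a real system the scoring would be a proper backtest over many points rather than one, but the shape is the same: selection is re-run continuously instead of being decided once.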
And as any of these exponential technologies grows, you're going to have some people who want to use it to spread peace and love, and others who are going to look at how to monetize it in the quickest and easiest way. I think the promise and the peril have way more to do with human nature than with the technology itself.
Alan Olsen: This has been terrific, the wisdom that you have. How does a person reach out to you and say, hey, Howard, I'd like to connect about AI and everything going on in this world?
Howard Getson: Yeah, so we have a website, www.capitalogix.com, and there's a blog there, blog.capitalogix.com. We've invented a number of frameworks that we're sharing to help people understand how AI is going to get adopted. In, like, 30 seconds: you don't have to predict technology, you have to predict human nature. It starts with a capability. As you get a new capability, the question is basically, does it help me do what I already do, just better? If it doesn't, nobody cares. But if it does, stage two is getting a little bit greedy: you ask not, does it help me do what I already do, but what could I do, or what should I do?

So stage one is a capability. Stage two is a prototype: you're treating each capability like a Lego, and as you start to stack or combine multiple Legos, it's the 80/20 rule, except it's really the 20/80 rule, where 20% of the capability is going to get you 80% of the benefit. Each one of these stages doesn't fulfill a desire, it actually fuels the next desire; if I get a capability, I'm not satisfied, it makes me want more. So if you can imagine the capability that's coming, you think, what would they want next? You think about what would be evidence of success, and what constraint is preventing it.

So it goes from capability to prototype; next is product, and after that is platform. The distinction is this: stage one is about your use. Stage two is you and your team, people you know doing things that you know. Stage three is a product, which is people you don't know doing something that you do. But stage four, where a lot of the money comes, is a platform, and that's where people you don't know start to use it for things you never anticipated. The platform has to be simple enough and valuable enough that people want to multiply on it, build on it, and start to do cool things.

We've got a bunch of free content that helps people think like that. But I really enjoy American Dreams. I'm so inspired by what you do, not only for your community but educationally, and it was an honor to be here. Thank you so much.
Alan Olsen: Thank you, Howard.
This transcript was generated by software and may not accurately reflect exactly what was said.
Alan Olsen is the Host of the American Dreams Show and the Managing Partner of GROCO.com. GROCO is a premier family office and tax advisory firm located in the San Francisco Bay Area, serving clients all over the world.
Alan L. Olsen, CPA, Wikipedia Bio
A special thanks to our sponsor, GROCO.com
Howard Getson is the founder and CEO of Capitalogix, an AI trading systems company that essentially provides its users a “hedge-fund-in-a-box.” Prior to forming Capitalogix, Howard had an active corporate legal practice from 1987 to 1993.
Howard has a passion for helping people define purpose in their lives and presented “The Time Value of a Life Worth Living” at TEDxPlano 2014: https://www.youtube.com/watch?v=MrRl0iKu88E
Howard attended Duke University, where he earned his undergraduate degree in Philosophy and Psychology. He also obtained both an MBA and a JD from Northwestern University.
Alan is managing partner at Greenstein, Rogoff, Olsen & Co., LLP (GROCO) and is a respected leader in his field. He is also the host of the American Dreams radio show. Alan’s CPA firm resides in the San Francisco Bay Area and serves some of the most influential venture capitalists in the world. GROCO’s core competency is advising high-net-worth individual clients on tax and financial strategies. Alan is a current member of the Stanford Institute for Economic Policy Research (SIEPR), whose goal is to improve long-term economic policy. Alan has more than 25 years of experience in public accounting and develops innovative financial strategies for business enterprises. He also serves on President Kim Clark’s BYU-Idaho Advancement Council. (President Clark led programs at the Harvard Business School for 30 years prior to joining BYU-Idaho.) As a specialist in income tax, Alan frequently lectures and writes articles about tax issues for professional organizations and community groups. He also teaches accounting as a member of the adjunct faculty at Ohlone College.