Innovation At The Intersection of AI and Blockchain

I was invited as a guest speaker on The Birdhouse, a weekly conversation hosted by Blubird. We discussed the innovation happening at the intersection of Artificial Intelligence and Blockchain, with an eye towards the Internet of Things, the machine economy, and farther-out applications in space exploration.

Listen to the recording, or read the edited transcript below.

Mike Romei: I want to welcome our special guest, David. David is from Beyond Enterprises. Beyond Enterprises offers strategic and technical leadership, advisory and support capabilities to projects in all stages of blockchain and new technology implementation and development.

David, welcome to The Birdhouse. Can you give us a brief overview of what Beyond Enterprises is, what you do as a managing advisor, and what initially drew you into the crypto, blockchain, and Web3 space?

David: Thank you for having me. I've been in AI since the mid-1980s. We're talking about the previous millennium. I started very early being interested in how technology changes society and the way we live, how it enables us to address the challenges that we see around us. 

For a lot of people who were already tuned to looking at new solutions emerging, it was relatively easy to recognize Bitcoin when it was born as something that could matter. In the 90s, I was also part of the cypherpunk movement that promoted advanced encryption when it was a very restricted military technology, almost impossible to adopt at a broad level in the US. So for me and many others, Bitcoin, blockchain, and everything that came after was impossible to resist.

It was especially fascinating to see how there has been since then almost a rhythmic dance of talent, ebbing and flowing, moving between blockchain and AI, as people get very rightly excited about what is going on in both fields.

At Beyond, we love helping projects address their challenges, whether that means deepening the development of their roadmap as they implement their technology, analyzing their business model, including tokenomics and how the principles of their white papers are reflected in smart contracts, or building and supporting communities as their project is adopted more broadly worldwide.

We've been doing it for over 10 years with clients in many different vertical fields adopting blockchain technologies and tokenization. We like to measure our impact yearly by looking at the combined market cap of the clients we've worked with. Last time, this was a little more than $4 billion, just to give an idea of the kinds of activities that we do.

Mike: Besides both being technologies that have received a lot of hype, from a fundamental and technological standpoint, what's the relationship between AI and blockchain right now?

David: There's a lot of sound and important philosophical grounding in both of these technologies. I think it's a bit disingenuous to believe that the people passionate about solving certain challenges with a given technology are neutral about what they're doing - that they go with an open mind without any conception of their position before and after and what the impact of what they're doing could be.

If we realize that, then it's easier to recognize how fundamental the passion is driving founders of disruptive world-changing projects. In blockchain, we have the desire to build a world that is transparent, accountable, that empowers people who are otherwise excluded from the benefits of advanced financial solutions. 

AI is similarly driven by the desire to deliver the benefits of advanced intelligence everywhere. There are a lot of AI solutions today that are centralized, but more and more, either because of open source or new types of architectures, advanced AI capabilities can also be implemented and delivered in a decentralized manner.

Mike: Can you expand on how AI can be delivered in a decentralized manner just a little bit more?

David: Let me give you a couple of simple examples. When you use ChatGPT, you're using a leading AI model - very capable, in many ways better than many humans. I'm happy to admit that it's better than me in so many ways, especially in how vast its abilities are, covering all kinds of different areas. But it's also eminently centralized. Whether it's running on a given data center or not, whether you're using it in a browser or in the app, you depend on a single central location. If your account is suspended or closed, there's very little recourse from the point of view of a single user or consumer of those services.

On the other hand, if you're using more and more interesting and smart functionality on your phone, that's increasingly delivered by the phone itself without going out to the cloud, without relying on the centralized data centers of Google, Apple, or anyone else. 

Take speech-to-text: when you have the courtesy of actually using the microphone and dictating a message that the other person can read much more rapidly, rather than leaving a long rambling voice recording on WhatsApp, that conversion from your speech to the text of the message happens on the phone itself. So millions and billions of people are already experiencing these decentralized AI applications.

Intersecting this with blockchain means we can improve security, couple AI with smart contracts, and provide all kinds of additional features that neither of these two technologies can make available independently.

Mike: One of the things I've been joking about, but I'm rather serious - I've been in technology for over two decades now - is that AI really needs to stand for accessibility: accessible intelligence rather than artificial intelligence.

All these topics you've brought up - blockchain for control of data, finance, security by users; AI; and IoT with sensors and the ability to get information at the edge, at the data center, and everywhere in between - I'm curious about your thoughts on the importance of accessibility.

David: It used to be the case, and still is, that emerging technologies are the most expensive, the hardest to use, and the least functional at the beginning. Some of you may remember, or if you watch an old movie from the 90s, recognize those ridiculous bricks that people were holding against their head, frying their brain. At the time, those were mobile phones, and they would cost the equivalent of $10,000 or $15,000. They were a symbol of Young Urban Professional success, and very few people had them.

Today, the same is happening with other technologies. Another example is speech recognition and speech-to-text functionality. In the 90s, the first of those programs, like Dragon Dictate, needed hours of training to recognize a few hundred words, and you had to speak each word separately. Today, no training is needed. Everyone can speak very naturally, and this is world-changing.

At the beginning, what happened? Only those who really needed it - whether Wall Street traders for mobile phones, or people who were paralyzed, quadriplegics who could not use the computer otherwise - made the investment in money, time, and effort to use that technology. But today, it's available to anyone. Mobile phones are literally ubiquitous. Even the poorest person in the world sets aside money to buy a phone.

A current example of leading-edge technology being inaccessible, hard to use, and expensive at the beginning, but rapidly becoming inexpensive and accessible to everyone, is brain-computer interfaces. Neuralink has already given two people a brain implant and interface that enables them, and these are paralyzed people, to use a computer at a speed that beats you, me, and everyone else. They are world record holders in certain tests of clicking around the screen, chasing a given dot. They can do it with their brains faster than anyone else.

Of course, today, that process and procedure is risky and expensive. But whether it takes five or ten years, there will be millions of people with these interfaces. A lot of people who are skeptical or even horrified today will change their minds and recognize that that particular technology is something they definitely want.

Mike: It's interesting how much life is changing beyond just the screen. I'm not sure if you watched any of the Olympics, but in the opening ceremony, the person who had that exoskeleton machinery on so they could walk - it really is changing lives.

David: Yes, and we're seeing a lot of these technologies benefiting people. What we need is to find a way, on one hand, to really extend the limits of our adaptability because of how rapidly the new technologies are arriving and changing the world around us. It cannot take two generations to adapt to the new thing. It cannot even take one generation. We have to get our hands dirty, experiment, make mistakes, keep or regain the childlike curiosity and risk-taking that was natural to us when we were in our teens or twenties. 

At the same time, we need to paradoxically slow down, because the frenzy and frantic exploration that many of us feel, whether we're looking at innovations in blockchain or AI, is impossible to keep up with. I have friends who say, "Well, yes, I was happy to come and be your guest in this live stream or be on your podcast. But now I'm desperate because in the hour or two that we spent together, the field advanced to a degree that I don't know if I'll be able to catch up." And they're only half-joking.

Being able to step back and look at the big picture is equally important: understanding what the slower but important waves are, understanding your role in riding those waves, and realizing that you can hardly control what is happening. This is especially difficult for politicians and regulators, because that's what they're supposed to tell us they can do - that they know what needs to be done, that they can put various policies and regulations in place, that they can foresee the effects of unexplored technologies. Unfortunately, it's not true. They're not in a nice or comfortable place.

Mike: I agree. I think a lot of them are trying to gain that control back, so they're trying to put the genie back in the bottle. But ultimately, something that it seems you're focused on is not just propelling one emerging technology, but connecting the dots and integrating them so that they help reinforce each other.

David: They definitely do. In the title of this episode, we mentioned various things, and at first sight, they could appear disconnected. What do IoT, AI, blockchain, and the emerging machine economy have to do with each other? Let me give you an example. When self-driving cars soon become a day-to-day reality, wouldn't it be much better to give people the ability to organize in networks rather than having to rely on a single provider like Uber? That's where the ability of machines to communicate with each other reliably, and to transact with each other in micropayments that don't require paying the 2-3% to credit card processors, comes in. This is just a very simple example of how these different components are tied together.
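The machine-to-machine micropayments described above can be made concrete with a payment-channel pattern: the machines exchange many tiny signed IOUs off-chain and settle only the final balance on-chain, so no per-transaction processor fee is paid. This is a minimal illustrative sketch, not any particular protocol; the `Channel` class is hypothetical, and HMAC stands in for real digital signatures.

```python
# Sketch of a machine-to-machine micropayment channel (illustrative only).
import hmac
import hashlib

class Channel:
    def __init__(self, payer_key: bytes, deposit: int):
        self.key = payer_key          # payer's signing key (toy stand-in)
        self.deposit = deposit        # funds locked when the channel opens
        self.balance_to_payee = 0     # latest signed balance owed to the payee

    def sign(self, amount: int) -> bytes:
        return hmac.new(self.key, str(amount).encode(), hashlib.sha256).digest()

    def pay(self, amount: int) -> tuple[int, bytes]:
        """Send an updated IOU; only the newest one matters at settlement."""
        self.balance_to_payee += amount
        assert self.balance_to_payee <= self.deposit, "cannot exceed the deposit"
        return self.balance_to_payee, self.sign(self.balance_to_payee)

    def settle(self, claimed: int, sig: bytes) -> int:
        """On-chain settlement: verify the final IOU once, pay it out."""
        assert hmac.compare_digest(sig, self.sign(claimed))
        return claimed

# One robot pays another for propellant in 1-unit micro-increments.
ch = Channel(b"robot-A-secret", deposit=100)
for _ in range(42):
    balance, sig = ch.pay(1)
print(ch.settle(balance, sig))   # -> 42: one settlement for 42 micropayments
```

The point of the pattern is that thousands of sub-cent transfers cost only one settlement, which is what makes fee-free machine commerce plausible at all.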

Mike: I think it's really interesting, especially from the AI perspective. The perception that so many have been given through Hollywood or media is of the fearful nature, especially when these things are very new and unexplored. But the reality is that even the internet back in the early 90s was somewhat misunderstood by a lot of people and questioned - how will that ever work? Yet there were people who had been building those protocols and using that system for years, if not decades, prior to public consumption.

I think you unlocked a couple of things here in explaining how AI and blockchain interface with each other, complement and support each other. But I think one thing I'm hearing is that it also creates somewhat of a check and balance system where they can be controlled and utilized in a controlled environment for benefit, but still have some human level of interaction.

What are the potential risks or ethical concerns that you see from AI in a decentralized system?

David: Being fearful, even paranoid about risks, is natural. We are the descendants of those ancestors, Neanderthals or whoever they were, that freaked out when they heard something making noise in the shrubs or on the savannah. The worst that could happen is that they were wrong and there was no saber-tooth tiger. But those who were telling them, "Don't worry, you don't have to panic. Why are you always freaking out?" - they were eaten. That's why we're programmed to pay attention to risks much more than potential benefits when we see something new coming towards us. It's a very healthy, evolutionarily fit and adaptive behavior.

But there's also a smaller but more impactful group of ancestors - the wizards, the explorers, the risk-takers, those who go from valley to valley and taste every mushroom, hoping not to die but to discover whatever has the most powerful psychedelic effect or whatever else they wanted to achieve with those incredibly courageous tests. We inherited their daring to explore in order to find what actually works. Maybe it won't, maybe I will die, but I cannot stop trying.

Today we're seeing the clash of these two points of view, and each broader continental society or aggregation will find its own balance. For example, the US is very likely to go ahead and make as many mistakes as possible and then find out what doesn't work, either through the legal system or by cleaning up some, hopefully not too tragic, mess that it creates.

At the other extreme can be something like China, which has already published several hundred pages of AI regulations that put the people developing AI tools on a very clear path of what they're not allowed to do. 

In the middle, maybe, is Europe, trying to preserve an ever-diminishing technological base of startups and enterprises that operate on the leading edge, while unfortunately incorporating the so-called precautionary principle, which is almost impossible to fulfill: you have to prove that there will be no negative implication, that no harm will come from using the technology you're introducing to the market. It's an impossibly high barrier.

These are three examples of how the balance between healthy paranoia and curious risk-taking is being settled in different societies.

Mike: Corey, I want to know which sectors you feel have the most potential for disruption from AI and blockchain currently, as you see it from your perspective.

Corey Billington: I've got a very unpopular opinion. I go against the grain for the most part when it comes to AI and blockchain. That stems purely from our venture capital side. We've been pitched an absolute plethora of AI blockchain fused projects over the last two years, and I can honestly say not one of them has impressed me. Not one use case went beyond scientists, essentially, when it came to data models, the way they were structured, things like that.

As for what areas are going to get disrupted with AI and blockchain coupled together, I haven't found one yet that excites me. I know that's probably the worst thing people want to hear, but that's just what I've seen personally. Don't get me wrong, I'm a fan of all the advancements in the AI sector. I just haven't seen the requirement or need to bolt on blockchain to it yet. I don't know if we will see a lot of disruption in the blockchain space natively that AI will bring, apart from basic automation stuff, which is really nothing special. It's been going on for a very long time anyway. It's more pseudo-AI than anything else. 

So that's my unpopular take on it. I don't see it doing a whole lot at this point, but I'm happy to be proved wrong. David, if you've got any thoughts on what I've currently seen versus what you've seen, I'd really love to hear that take because I've been rather disappointed with what I've seen so far.

David: Rather than giving specific examples of this theme or that theme, this project or that project - even though I am an advisor to SingularityNET, which is an ecosystem of dozens of different AI projects, all decentralized and blockchain-based, in many different fields - I would like to offer two ways of answering the question for everyone who is listening.

One is to look at the founding team. Obviously the talent needs to be there: two complementary talents in a founding team saying, "Okay, we can make it happen. We have the skills, we have the vision." They identify some particular problem, start working on it, and acquire users, going through the traditional motions of MVPs, minimum viable products with a simple set of features, growing from 10 users to 100 users to 1,000 users and so on. That's one way of looking at it, and it can happen in so many different areas, especially now that a lot of barriers to entry for developing code are being lowered by fascinating new AI-driven development tools. Millions of new teams will enter the scene, and it will be fantastic to see how new talent, from India to South America, will enter and prove themselves.

Corey: I guess I'll just rephrase my answer very slightly as well. I believe that AI can bring value to the blockchain space, yes, but I don't believe that every AI project requires a utility and a utility token. They should be standalone. That was more so what I was trying to get at. Sorry, I just thought I'd clarify there.

David: No, I totally agree, and I hope I didn't give the impression that there is no role for centralized AI. Absolutely there is a perfect role for centralized AI. If you are a nation-state, you want your AI to be absolutely sovereign, self-sovereign, and centralized under your total control. It is guaranteed that anyone working on centralized AI will have captive customers among the 200 nation-states that, just as they must have an autonomous and sovereign electric grid, will have to have an autonomous, centralized, and sovereign AI.

Similarly, in finance, 500 million Europeans are served by the current financial institutions, by banks, credit cards, and whatever else. There aren't a lot of unbanked Europeans. But the promise of the new generation of DeFi and blockchain-based products empowering people to transact peer-to-peer among themselves is targeting the 2 billion people, or maybe more - let's say 4 billion people - who don't have reliable access to modern banking services.

So back to the other part of the answer, the biggest disruption actually could come in the areas that are most difficult to attack and are regulated at the highest level. I'll give one example, which is healthcare. Data flows in healthcare, issues of confidentiality, issues of risk - these are very complicated. IBM was fined a very large sum in the UK because they mishandled health data, and all the projects they were working on were frozen.

But today, with advanced cryptography and zero-knowledge proof techniques, you can both deliver patient data and apply advanced AI models to the data without actually handing over either the data or the models to the researcher or the hospital or whoever is running the system, even if it is open source. We have not yet become sufficiently accustomed to the power and magic of what these advanced encryption and zero-knowledge-proof approaches can actually provide.

Just as end-to-end encryption on our chats used to be very arcane and difficult to set up and is now in everyone's hands, ZKP (zero-knowledge proof) systems are going to become ubiquitous as well. And the promise of that kind of disruption, overcoming the limitations that today's regulations put around healthcare, is tremendous. So that is an example that I would offer.
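To make the "prove without revealing" idea tangible, here is a minimal Schnorr-style proof of knowledge in Python: the prover convinces the verifier that she knows a secret x with y = g^x mod p, without ever disclosing x. The modulus, generator, and single interactive round are toy assumptions for illustration; production ZKP systems use carefully chosen groups and non-interactive protocols.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge (illustrative only).
import secrets

p = 2**127 - 1          # a Mersenne prime used as a toy modulus (assumption)
q = p - 1               # order of the multiplicative group mod p
g = 3                   # generator, assumed adequate for this sketch

# Prover's secret and the public value everyone can see
x = secrets.randbelow(q)        # the secret only the prover knows
y = pow(g, x, p)                # public: y = g^x mod p

# 1. Commitment: prover picks a random nonce r and sends t = g^r
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier replies with a random challenge c
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x (mod q); s reveals nothing about x
#    because it is masked by the uniformly random nonce r
s = (r + c * x) % q

# 4. Verification: g^s must equal t * y^c (mod p),
#    since g^(r + c*x) = g^r * (g^x)^c
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing x")
```

The same algebraic trick, scaled up and made non-interactive, is what lets a hospital verify a computation over patient data without ever seeing the data itself.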

Corey: Definitely, that was a very well-developed answer that you put forward there. Thanks for explaining that. While we're on the topic of AI, I'm curious to know your view on more of the tinfoil hat side of AI. I'm not talking about the Skynet side, because I've already seen that we're going to have robots in our houses by the end of this year, apparently - AI-powered robots to do your dishes and laundry and whatnot. 

So not so much the Skynet side of the tinfoil hat, but more so the fact that there's a lot of people out there calling and saying that the singularity has already happened. It's happened behind closed doors, it's happened with the military because generally speaking, the military is around 20 years ahead of everybody else, and this is already being used to do various things. So what are your thoughts and opinions? This is purely speculative, I know, but I'm just curious what your thoughts are on that side.

David: The technological singularity was introduced by Vernor Vinge in a conference at NASA in 1993 as a concept, when he actually compared it with the physical singularity of black holes. The self-reinforcing nature of machine intelligence, rather than taking millions of years to develop like biological intelligence, can take a much shorter time, is almost explosive in nature, and creates a barrier beyond which it is very hard to explore what happens.

Since then, a lot of concepts around the technological singularity have evolved. Ray Kurzweil, who is one of the founders, together with Peter Diamandis, of Singularity University, where I'm an investor, an advisor, and a member of the faculty, published his book "The Singularity is Near" in 2005. Just a couple of months ago, he published the sequel, very appropriately entitled "The Singularity is Nearer." He looks very deeply, with a lot of data and research, at what is going on and how it is going to change the world, and when it is going to happen.

Today, we no longer believe that we will be unable to understand what is happening in a post-singularity world. Or to be more specific, those of us who are courageous enough to merge with the machines, at least to some degree, are going to be able to actively participate and understand what is going on. Those who give up this opportunity will have a harder time, for sure.

In that sense, for some communities, we are already definitely in a post-singularity world, because if you don't use phones, you don't use the internet, you don't use electricity, or whatever is the cutoff of a technology that for you is too advanced and you refuse to embrace it, well, the world is going to be hard for you to interact with, to create value in.

The example I always give is imagine that you hire someone and you select 100 candidates and find the perfect one, and you're about to sit down with them to sign the employment agreement when they very openly and without any embarrassment tell you that they cannot read or write. There will be no way for you to hire them. It will be just impossible. You will be sorry, and you will tell them, but they cannot participate in the world.

So what we can now ask ourselves is, when is it going to be the time when society will expect everyone to have a brain implant? When society will say, "You know what, you would be a fantastic candidate, but you just told me that you do not have a brain-computer interface. Sorry, I cannot hire you." And when it will be perfectly legal to do so, just like today you can say, "Sorry, you cannot read or write. I cannot hire you."

In terms of what advanced AI has or hasn't been able to already do in labs, I am not privy to knowledge that is not shared on AI Twitter, which I very happily consume all the time. But it is definitely the case that AI is scaling in its capabilities and keeps improving as more hardware, more data, and better algorithms are thrown at it. We haven't plateaued. We haven't reached a zero or negative ROI situation yet.

Whatever the name is going to be - whether it's Q-Star or Strawberry or Orion or GPT-Next, these are all names that have been thrown around, or whatever the next generations from Anthropic or Llama are going to be - they will likely be able to keep reasoning for a very long time in their own subjective time. Whether it takes minutes or seconds depends on the hardware, or maybe milliseconds. But the chain of reasoning will be really very long.

A direction that is going to be taken, hopefully very carefully, is to make them agentic. So we will be able to have them do things on our behalf, rather than us being the last and least reliable link in a chain of events. We will have the ability to tell the system, and we will trust the system pretty rapidly, to say in the morning, "Listen, respond to my emails and just send a couple my way so that I take a look at the most important ones."

Corey: I think you touched on something very important in that explanation, and I think it's something that is very - and I'm going to appeal to the younger audience here and quote a game that came out around 2005, early 2000s, called Deus Ex: Mankind Divided. It touches on a lot of those subjects of, you know, you get left behind if you don't go with the flow with implants, with transhumanism and all the likes. You're a subhuman, so to speak, because you're fully organic in that sense. And it really portrays the social divides that can come out of those scenarios.

It's probably one of the best-written titles to come out in those early 2000s. So if you're not familiar, it's well worth a little peek into it. You'll see that it's just mirrored exactly what you've just said. It's very interesting on that front, and I think it's something that everybody's on a subconscious level very aware of as well.

The fact that these are not necessarily something that they might want to get involved with, like the Neuralink or anything like that, because it's still very controversial. I personally would probably never get it because I just don't want to be at that level merged with a machine. That's just my personal view. It might become an antiquated view, but it's definitely one that I'll always hold, and I don't see that changing anytime soon.

But there's going to be a very big division between those who accept it and are all for it, and those who are more like myself who recognize certain dangers that are not worth the risk. Or even when it's not the dangers, it's just the self-sovereignty of our own body and not having to rely on outside or external forces just to be better. It's about honing your own skills.

So I can see both sides, the appeal to both sides. But yeah, I definitely think, like you said, there's going to be very much a divide and very much a lot of discrimination, I think, on both sides. "Oh, you're one of those cyborgs. No, you can't work for me." Or, "Oh, you're not a cyborg. You cannot work for us because you're not going to have high enough output." So I think there's going to be a rather turbulent, perhaps, time ahead as these things equalize. But very interesting nonetheless.

David: I agree. In the past, things were relatively simpler. When Columbus discovered the Americas - in quotation marks "discovered" - they were inhabited by approximately 50 million people. A hundred years later, 5 million. The Europeans committed one of the largest genocides in the history of humankind, killing via war or disease 95% of the native population. And no one much cared about it. It didn't make the news.

Hopefully, we will be able to behave differently when the economic output of native AI companies, economies, nations, and populations is potentially a hundredfold larger in terms of GDP growth than today. Rather than increasing one or two percent per year, the entire economy of a nation could increase one or two hundred percent per year, doubling every year or every six months. It will be very difficult to resist. It will really take over. Or the only way to resist will be to adopt the same kinds of approaches in other parts of the world as well.

However, there are already surveys that agree with you, where 50, 60, even 70% of the people are against advanced AI. So how we will find the right balance is going to be very challenging. We will have to be mutually much more tolerant than we have proven able to be in the past.

Corey: Absolutely, and I feel like there's going to be a lot of maybe knee-jerk reactions to some large catastrophic events that might occur as well, due to quote-unquote "unshackled AI" to quote more of a movie genre speak of an AI that is not bound by any laws or rules. I think that's going to be quite scary because there's going to be people doing it regardless. They'll do it in their basements, they'll do it in their garage. It's not out of reach for anybody at this point because soon, very soon, if not already, you're going to be able to get AI that self-teaches and can grow and learn itself.

So that is something that's going to be very hard. And I guess it's easy to see the possibility of a lot of the movies and a lot of the science fiction that's out there. And, you know, corporate giants using it for their gain and all of the crazy aspects of where it can be used for maybe not so ethical purposes, and there's no way to know that it's going on unless they get busted for it.

It could be very subtle. Maybe you might look at BlackRock, for example, and go, "Wow, they've just come out of nowhere, and all of a sudden they are one of the largest asset holders in pretty well every major company in the world." So that in itself raises eyebrows, but nothing's provable. It's all speculation at that point.

So yeah, I think we're in for some pretty wild rides when it comes to AI. Some of it I'm not really looking forward to, some of it I am. But we could probably fill up this entire talking panel on AI as a whole for hours and hours. I noticed that we've got some other topics in our title here, so I just wanted to touch on the space colonization side and the IoT side and how it all gels together. I'd be keen to hear your take or point of view on that.

David: Well, for me, really, space is a beautiful opportunity, as well as a metaphor for so many of our challenges. I'm really excited to see the new wave of initiatives that are not operating in the stale paradigm of government contracts and nation-states that have kept space for decades from becoming the thriving frontier that it deserves to be.

For the past 10-plus years, when I have spoken about blockchain or Bitcoin or cryptocurrencies at conferences where the audience is not specialists, I have often encountered skeptics saying, "Well, what is it really for? Why would I need it? We have so many payment options, so many financial solutions." And I tell them that I have friends who are working on swarms of smart robots to be built in space with the task of exploring the mineral deposits or the water deposits in the asteroid belt.

Imagine a few thousand or maybe a few million of these robots in the asteroid belt doing their thing, AI-driven, being able to plan what they need to do to achieve their goals. Obviously, they need to, for example, produce and then give each other propellant or transport raw materials. Or they need communication bandwidth to be reserved or prioritized as a given economic value.

So I would ask them: do you think these transactions will be paid and settled in banknotes, or via credit cards or paper checks, or will they make a wire transfer with a bank in the asteroid belt? No, none of those. They will use something that is native to their needs: completely machine-readable, completely verifiable, decentralized. It may or may not be called blockchain or Bitcoin, but it will resemble and inherit what we are working on today as we build these new systems.

There are very interesting coincidences, because I've been talking about this for a long time. The company that I was referring to, without necessarily naming them, was called Planetary Resources. Three years ago, Planetary Resources went bankrupt. Its assets were acquired out of bankruptcy by ConsenSys, which also hired the founder and CEO. To me, it was fantastic. It was exactly what I had been talking about.

So I called the CEO, because I know him, and had a conversation with him. "Hey, I'm so happy you found a new home. So what are the new plans?" And he said, "I have no idea. I don't know why they bought us. I don't know what they want to do." I haven't checked with him since. But it was interesting. It was such a counterpoint: at the same time, the realization of exactly what I had been talking about for years, and the complete and transparent admission that it wasn't necessarily very meaningful.

Mike: Would you like to share something about Beyond Enterprises?

David: I was happy to follow your questions and the remarks that led us to a more philosophical area sometimes. But obviously with Beyond, we are very down to earth. I will be happy to talk to anyone who wants to explore how we can help them analyze and address their challenges as they are bringing their projects to life in blockchain, AI, and emerging technologies in different ways. I'm very easy to find online and I'm happy to interact and answer questions. So I'm looking forward to both continuing these conversations and to engaging with projects that can benefit from the kind of strategic advice that we provide.

Mike: David, it's been a pleasure chatting with you today.
