
In this conversation, Stephan Livera and Kevin Cai discuss the ongoing Bitcoin spam debate, exploring the various camps within the Bitcoin community, the distinction between consensus and policy, and the implications of transaction filters and dust limits. They examine the role of Libre Relay in facilitating transactions and the economic dynamics of transaction fees and orphan rates in mining. The conversation highlights the complexities of Bitcoin's governance and the subjective nature of what constitutes spam on the network.
Kevin Cai discusses various aspects of Bitcoin, including the consolidation of UTXOs, the dynamics of mining and decentralization, and the implications of spam transactions on the network. He emphasizes the importance of understanding the motivations behind transactions and the cultural perspectives that shape opinions on what constitutes spam. The discussion also touches on the impact of BRC20 on Bitcoin’s fee market and the potential for future developments in the ecosystem.
The discussion then turns to the unsustainable nature of hype in the crypto space, particularly the BRC-20 token standard and its implications. Cai emphasizes the relative sustainability of inscriptions and explores the costs associated with running Bitcoin nodes, including hardware and connectivity. He argues against the feasibility of blocking inscriptions and highlights the need for a pragmatic approach to Bitcoin's growth and sustainability.
Kevin also discusses the complexities and challenges of Bitcoin development, particularly the implications of temporary fixes and the proposed Reduced Data Temporary Softfork (RDTS). He touches as well on the importance of programmability in Bitcoin, arguing that it is essential for its evolution and utility.
Takeaways:
🔸The Bitcoin spam debate involves different camps with varying perspectives.
🔸Consensus refers to the agreement needed for transactions to be valid, while policy is more subjective.
🔸Libre Relay aims to align consensus with policy, promoting censorship resistance.
🔸Filters can influence transaction behavior, but their effectiveness is debated.
🔸Dust limits are a contentious topic, with arguments for and against their implementation.
🔸Transaction fees are influenced by market dynamics and user behavior.
🔸Orphan rates can impact miners’ profitability and transaction acceptance.
🔸The Bitcoin network’s resilience is tied to its decentralized nature and redundancy.
🔸Subjective judgments about transactions can lead to disagreements within the community.
🔸The future of Bitcoin transaction policies will likely evolve based on economic incentives and user behavior.
🔸I have a high time preference.
🔸Stamps is absolutely spam.
🔸BRC20 is a shadow of its former self.
🔸Inscriptions are a more sustainable solution than BRC-20.
🔸Node running costs are predictable and manageable.
🔸Connectivity improvements will ease node operation.
🔸Transaction relay policies will adapt over time.
🔸The cost of running a node is not a significant barrier.
🔸Legal perspectives on data embedding are evolving.
🔸Blocking inscriptions is unlikely to succeed.
🔸Libre Relay offers a low-friction solution for transactions.
🔸The role of miners is driven by economic incentives.
🔸Temporary fixes may lead to wasted time and effort.
🔸Focus should be on features that benefit Bitcoin users.
🔸Bitcoin’s permissionless nature allows for innovation without approval.
🔸Banning certain practices may lead to workarounds.
🔸Complexity in protocols can introduce more problems than solutions.
🔸Auto-updates contradict Bitcoin’s ethos of user control.
🔸The RDTS poses risks to user transactions and programmability.
🔸Confiscatory risks arise from the RDTS’s limitations.
🔸Programmability is crucial for Bitcoin’s future applications.
🔸Arbitrary data embedding is inherent to communication systems.
Timestamps:
(00:00) – Intro
(01:49) – What are the different camps in this debate?
(04:55) – What is consensus and how is it different from policy?
(11:23) – Libre Relay and its role in Bitcoin
(15:53) – Are certain transactions strictly harmful?
(19:30) – Do Dust filters work?; Dust limits and their implications
(29:59) – Orphan rates & mining dynamics
(35:14) – What is Spy mining?
(38:14) – Can all the small miners gather to punish spam on Bitcoin?; Decentralizing mining
(43:40) – Do Bitcoin miners shape reality?
(47:18) – Sponsor
(48:13) – What constitutes spam in Bitcoin?
(56:09) – Cultural perspectives on Bitcoin and spam
(1:03:15) – Are miners short-term focused?; Bitcoin’s robust fee market
(1:11:39) – The unsustainable nature of hype
(1:19:50) – What are the hardware costs of running a node?; Connectivity & accessibility for Bitcoin nodes
(1:29:00) – Spam – incremental costs & time of transaction confirmation
(1:36:40) – Is it cost & time intensive for spammers to run Libre Relay?
(1:42:00) – What are the legal perspectives of data embedding in Bitcoin?
(1:49:47) – Is it feasible to block inscriptions?; Dilemma of temporary fixes
(2:01:50) – Kevin’s thoughts on RDTS (Reduced Data Temporary Softfork)
(2:21:53) – Is programmability important in Bitcoin?
(2:27:40) – Closing thoughts
Links:
Sponsor:
- CoinKite.com (code LIVERA)
Stephan Livera links:
- Follow me on X: @stephanlivera
- Subscribe to the podcast
- Subscribe to Substack
Transcript:
Stephan Livera (00:00)
Hi everyone and welcome back to the Stephan Livera Podcast. We’re going to be talking about the whole Bitcoin spam debate, and I think it’s very topical right now. And so I want to get someone on who I actually met recently. His name is Kevin Cai. He is the platform engineering lead at Lightning Labs. But I will just clarify this is mainly a, let’s say, personal opinions conversation as opposed to an official stance of Lightning Labs or anything like that. But
Kevin is also known on Twitter as Proof of Cash. I thought he had some really interesting perspectives to share and so I wanted to get him on. So first off, welcome to the show, Kevin.
Kevin Cai (00:40)
Thank you, Stephan. Thanks for having me. You know, I first met you over at Plan B Lugano, which is probably one of my favorite conferences that I’ve attended so far. As Stephan mentioned, you know, please don’t go bothering Elizabeth about my opinions. These are just, you know, opinions that I hold. And I mainly wanted to come onto Stephan’s podcast to basically give some of the perspectives that I’ve had. But ultimately, yeah, really excited to be here and to talk through some of these topics.
Stephan Livera (01:08)
Great. And so look, in this whole Bitcoin spam debate, there are, let’s say, different camps, right? And to some extent, there are kind of partial alliances between some of these camps, right? So as an example, there’s the Knots camp, right? They’re kind of pro-filter. They want to filter out the content spam. You’ve got Bitcoin Core, and, you know, they’re sort of, to some extent, beset on both sides by different views. There’s Libre Relay.
Right, which I think arguably, you’re kind of in the Libre Relay camp. Obviously, Peter Todd is in the Libre Relay camp, and I’m going to put you in that camp. And then let’s say there’s the Ordinals camp, right? That’s like the Casey Rodarmors of the world, the Taproot Wizards people, other people in the Ordinals ecosystem. And these are kind of different camps. And, you know, they don’t necessarily all agree on everything. And obviously, even inside of these camps, there are different views. But let’s just start there. So let’s put it that way. So
Kevin Cai (01:49)
Hahaha
Stephan Livera (02:06)
Would you consider yourself part of the Libre Relay camp?
Kevin Cai (02:10)
So that’s a great question. know, recently I was watching a interview that had Giacomo and Jimmy Song. And one of the questions that was asked to them was what implementations do you run? I realized that actually me and Giacomo are very similar in that regard. I run Libre Relay, I run Core, and I even run a Knots node. I’ll get into that a little bit later. But I would say that if I had to get put onto like a political compass, you know, an alignment of some kind, I would probably align myself most closely with what Libre Relay’s ethos is.
And the reason I have that sort of alignment is because to me, the most impressive part about Bitcoin was that not only did we attain this, you know, Byzantine fault tolerance, we also managed to create an unstoppable form of currency that governments, which were arguably, you know, the groups of people that held the most amount of power among all of us, would not be able to stop either, whether through sanctions or through just trying to control the protocol and change it. We’ve seen Bitcoin fend off these attacks time and time again. And I think that Libre Relay just makes sense in that it tries to align consensus with policy in a way where miners have the incentives aligned, as well as users who are just trying to make consensus-valid transactions. I think that ultimately when it comes down to categorizing what is spam and what is not spam, it becomes a subjective judgment
that we’re placing on top of a transaction. ⁓ Perhaps I would want to launch an on-chain service that’s not too dissimilar from SatSdice. I want people to try out Bitcoin and they don’t necessarily have to buy an entire Bitcoin to try out my service, but they can learn about how on-chain transactions work, maybe even make a few Sats in the process. It’s an exciting little gambling application that I’ve put together. To somebody else, their subjective view of that may be these transactions don’t need to happen at all.
Right. My sort of attempt to onboard more people onto Bitcoin by giving them a fun little application to play around with is viewed as wasteful of a common public resource. So for that reason, I think that it’s important to make the distinction between what is consensus and what is policy. Policy is a guideline. It’s a nudge that we use to try and push users to behave a certain way. Consensus is all or nothing, right? You either follow consensus, doing things in a consensus-valid way, or you’re consensus invalid and the network will reject your transaction no matter what it is. And so, you know, bringing all of these factors together, I think it’s really easy for people who are making these subjective sorts of judgments on transactions to say, well, you’re making transactions that I view as unnecessary, as spam. So you’re a bad actor. You’re only here to stir the pot. You’re only here to waste my node’s resources. And none of these transactions needed to happen. But I think what
they fail to take into account is if you replace the word spam with an OFAC non-compliant transaction, you could very well make the same sort of compliance-based argument, right? Why am I going out of my way to store your OFAC non-compliant sanctions violating transaction? And then if you continue down this logical path, the end result is why am I storing other people’s transactions at all? Why not just store my own transactions and prune away and use zero-knowledge proofs in order to
be able to ascertain that the chain state is correct without actually storing those transactions. And I think this really misses the point of Bitcoin. The entire point of Bitcoin is that by joining together all of our resources across this decentralized network, we can create a level of redundancy and reliability that no other system on earth begins to approximate, right? As soon as we start carving away pieces of that in order to benefit ourselves, we make the entire system weaker. And this is one of the reasons why Bitcoin is one of the few systems out there that is anti-fragile.
Even when you try to attack it, it resists. And the more it’s adopted, the better it resists those attacks.
Stephan Livera (06:06)
I see. And so, I mean, there’s a lot of things to kind of get into. We’ll try to get into those. And again, I’ll do my best to try to make this accessible for the layman who’s maybe not, like, deep into the technical weeds. So I guess the big obvious thing that people need to know is there’s a difference between policy and consensus. Policy means things that you set on your own individual node, and we can all set different defaults. You can set your settings more permissive and more liberal than I have or whatever. And consensus means if we want to stay on the same blockchain together.
Well, we kind of have to agree on certain things. So some of these arguments that get into, as an example, Luke Dashjr’s accusation to Peter Todd: oh, you’re a bad actor. Are you a bad actor spamming the chain? And as I read from what you’re saying, it’s sort of like you believe in the censorship resistance component of Bitcoin. Am I right to say then that you see Libre Relay as a manifestation of that censorship-resistance idea?
Kevin Cai (07:07)
Exactly, that’s that’s precisely correct. So I’ve always made this argument, ⁓ you know when talking with my friends who I may politically disagree with I Don’t agree with a word of what you just said, right? But I will defend your right to be able to say those words any day of the week Even though I think you’re completely wrong. It’s the same exact case, you know, I personally don’t think that inscriptions ⁓ Ordinals are particularly interesting. I think they’re kind of a waste of time and money
But I’m also not going to go out of my way and try to unilaterally decide that I want to stop this activity dead on chain. I think it’s perfectly fine if you want to run knots, if you want to run more restrictive filters. I just don’t think that it’s constructive to portray in such a way that, okay, well, if I run this filter, I’m meaningfully contributing to the reduction of this type of transaction. I think that is where we start veering off course a little bit.
going from I’m expressing my nodes preferences, I’m expressing what I want to relay, what I want to participate in and conflating that with I am meaningfully shaping the behavior of transactions on the network. I think that is a fairly thin line and many people cross that because they’re thinking about it in terms of a system in which 99 % of the network has adopted a particular filter, right? They’re thinking it in terms of like, you know, let’s say an oversimplification. There’s only 100 nodes of Bitcoin.
nodes running on the network, and if 99 of them are running filters, then how are you going to get a transaction that’s going to be refused by every single node on that network? How is a miner going to hear that transaction? Now, of course, you’ve got your standard explanations. Okay, look at Mara’s slipstream. You can directly submit a transaction to a miner. In the past, people have even emailed a hexadecimal representation string of a transaction to a miner, and then the miner just included it as part of their block template.
I would say let’s set those aside just for a moment to think about even beyond direct minor submission, what sort of network effects can arise, what sort of, you know, small differences in policy can amalgamate into a effective tolerant minority of nodes, right? And how basically you don’t have to actually bypass the public relay network. You just have to convince enough people that you can create a path from point A to point B, not dissimilar to lightning, right?
One of the biggest things that people get confused by with Lightning is, well, there’s only so many channels. How am I going to pay an arbitrary receiver? Well, that’s the beauty of Lightning. You can take a path and make these hops and you’ve only got to find that singular path from you as a source to your destination and then your payment’s made. You don’t have to have a direct connection between you as a sender and the receiver. The same is true for Relay. A lot of people make this oversimplification. They use things like epidemic theory.
They use things like percolation theory, which are mostly meant for, like, graphs that have random edge weights. And they try to apply it to Bitcoin, which is a network that has highly redundant connections. So if we’re talking in the classic Alice-Bob-Carol example, where Carol is a miner and Alice to Bob to Carol is one particular possible route: yes, if Alice cannot send to Bob, and thus cannot rely on Bob to pass that transaction to Carol, that specific path will not be usable. However, because that’s not the singular path that connects Alice to Carol, it ultimately doesn’t matter. So long as Alice can find a path somewhere along that public relay network as it exists in that graph to traverse it and get to Carol, then Carol can mine that transaction without a problem.
Stephan Livera (10:41)
Okay, and so this kind of gets into some of the, let’s say, criticism, or an argument that I believe I’ve seen Luke Dashjr make, where he tries to say, well, Libre Relay is actually just a direct-to-miner network. What’s your stance on that? Is it just a network to get transactions from the spammers to the miners?
Kevin Cai (11:00)
I can see why Luke would say that. Personally, I think one of the biggest misconceptions about Libre Relay is that a Libre Relay node only connects to other Libre Relay nodes. This is actually not the case. And I like to check my peers every morning when I wake up. It’s just part of my morning coffee routine. And ⁓ very commonly in my Libre Relay peers list are Core V29, Core V30. I’ve even had a Garbage Man node that’s tried to connect to my Libre Relay node, which I’ve promptly banned.
But you know, the beauty of Libre Relay is that it is a small set of patches, right? Whenever I run a piece of software, I try to do my best, you know, attempt at looking at what changes were actually made, trying to understand them before deploying them, right? I don’t really want to just be deploying this black box that I don’t understand how it works. And the beautiful thing about Libre Relay is how little it changes, right? You’re setting this user agent so you’re identifying yourself as a Libre Relay node.
And then you’re looking around on the DNS seed for Libre Relay nodes, which you will have a higher weight of preference to connect to, but by no means do you ignore other nodes on the network, right? I still connect to core nodes. I still connect to not nodes even. Those nodes may not facilitate the relay of my transaction, but I still connect to them as part of the peer to peer network. And so for that reason, I don’t believe that Luke’s assertion that Libre Relay is
through its preferential peering system, ignoring or bypassing the public relay network. I think it’s just a subset of the relay network. It’s no different from Knots in the sense that you are advertising that you’re running a particular version or a particular implementation. And if you want to preferentially connect through your Knots node to only other Knots nodes, you can write a 10-line bash script to do that. If you don’t know how to write scripts, you can go ask Google Gemini or ChatGPT to do it for you, right?
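For illustration, a rough Python equivalent of the kind of script Kevin describes, assuming a local bitcoin-cli and matching on a "Knots" substring in the advertised user agent; this only prunes current peers, so new non-Knots peers could still connect later:

```python
import json
import subprocess

def rpc(*args):
    """Call bitcoin-cli and parse its JSON reply."""
    out = subprocess.run(["bitcoin-cli", *args],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# Drop every currently connected peer whose user agent does not advertise Knots.
for peer in rpc("getpeerinfo"):
    if "Knots" not in peer.get("subver", ""):
        subprocess.run(["bitcoin-cli", "disconnectnode", "", str(peer["id"])],
                       check=True)
```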
And one of the things I find so perplexing about this is that a lot of people in the Bitcoin space are libertarians. And one of the common basic forms of ideology in libertarianism is freedom of association, right? That your individual liberty should entitle you to the ability to freely choose who you will associate with, who you will not associate with. When I apply that same lens of ideology to Bitcoin nodes, I don’t see the difference, right? What is the line I’m crossing where
Choosing who I should be able to connect my node to is somehow a violation of the protocol, right? I think that would be a much more, you know, valid argument if Bitcoin as part of its spec, as part of the code base that exists today, did some sort of randomization of what nodes it connected to and tried to maybe keep like a balance of the amount of implementations within your peer list. Like for example, in this hypothetical scenario, maybe it would try to equally divide up connections to core, to knots, to Libre Relay.
If that was the case, then yes, I would absolutely 100 % be on the same page with Luke here and say that yes, you’re circumventing the very intent of the peering protocol because in this hypothetical scenario, the peering protocol wants to balance which implementations you’re connected to. That it doesn’t do that tells me that this is left intentionally open, right? You should be allowed to choose basically the conditions of how you select and vet the nodes that you’re connecting to. And for me personally,
I don’t have any interest in connecting to nodes that I know are not going to carry a subset of my transactions throughout the network. They are, in my opinion, defective. They don’t achieve what a node is supposed to do, which is remain neutral and carry any consensus valid transaction throughout the network. Now, where I do draw the line are scripts that take a very long time to validate, for example. This actually goes back to the history of why is standard was even introduced into Bitcoin. It was introduced because people were
creating scripts that took up to a minute, sometimes longer, to validate, right? And so rather than moving quickly and making those types of transactions consensus invalid, Satoshi said, okay, I’m going to institute a standardness check. And if the transaction does not meet these, you know, relatively permissive qualities of being a standard transaction, then I will simply refuse to relay it, to minimize the amount of impact, to minimize the harm that’s being done to the network. Where I think that the line has become fuzzier over time
is transactions that are not strictly harmful, right? OpReturns are very cheap to validate. Even inscriptions with their envelope are cheap to validate. You’re not executing any script. The script interpreter is looking at that script saying, okay, there’s nothing for me to do here, right? OpReturn, cool. We’re not even gonna put that into the UTXO set. We’re gonna go ahead and just skip the rest of the script execution. Same thing with the inscriptions, right? We see this OP_IF envelope. This is unreachable code. We’re not going to do any actual execution.
Even BitMEX Research has done IBD studies where blocks that are consistently filled with inscriptions are actually faster to validate than blocks that are filled with monetary transactions, because in the monetary case, you’re actually executing scripts. You can make the argument that because Bitcoin is a monetary network, those validation requirements are therefore justified, et cetera. But my point here is that these types of transactions that we’re talking about now in 2025, the types of transactions that we’re debating banning through consensus, through a soft fork,
are not strictly harmful in the same way that the transactions that introduced the need for standardness checks in the first place were, right? You could make the argument that, you know, it’s bloating the amount of storage that you might need to have, that blocks could have been less full and you could have stored less data. My argument to that is, well, the rate at which the blockchain is growing is still linear. It’s very predictable. And storage prices drop along a power law every year. Storage is getting cheaper and cheaper as manufacturers improve their ability to
manufacture high-density storage. This goes for both hard drive storage as well as solid state. You know, we’re even seeing a cottage industry of node-in-a-box solutions that aren’t just Start9 or Umbrel, and they’re packaging very nice hardware. They’re packaging, like, nice Crucial SSDs in there. They’re as powerful as some of the nodes that I was building back in 2017, 2018, and you can just buy them as pre-made devices. So overall, I think it’s a very overblown set of risks
that are being used to justify this very subjective approach to determining what is a good transaction, what is a bad transaction, what is good for the network, what is harmful for the network. My bottom line here is as long as what you’re doing is not harmful in the sense of being expensive to validate, I don’t care what it is that you’re doing. You’re a paying customer on the Bitcoin network. You are funding the security that I benefit from. And if I’m a user that…
predominantly just stacks sats, and I hold UTXOs on my Coldcard, I hold UTXOs on my Jade wallet, and I don’t ever do transactions, in a sense I’m a free rider. I’m not contributing to the security costs. I’m not contributing to paying the miners, right? I just have these UTXOs sitting here. I’m benefiting from the store-of-value characteristics of Bitcoin, but I’m actually not pulling my weight in terms of making sure that Bitcoin remains secure.
And for those reasons, I think that it’s just incredibly silly to try and go after paying customers of Bitcoin, right? It would be as if you were running a corner store and because you don’t like the way somebody looks, you chase them out of your store. Even though your store could use all the revenue that it could get. Maybe it’s in a high-rent area. So yeah, it’s been very perplexing to me, just thinking about this from a more technical standpoint where I’ve divorced and decoupled the emotional aspect
Stephan Livera (18:24)
Okay.
Kevin Cai (18:30)
from the technical aspect.
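As a rough back-of-envelope on the storage-growth point Kevin makes above (the per-block figures are assumptions, not measurements):

```python
# Blocks arrive roughly every 10 minutes, so even permanently full blocks
# bound chain growth linearly. 4 MB is the weight-limit worst case;
# ~1.7 MB is an assumed typical average block size.
blocks_per_year = 365 * 24 * 6                      # ~52,560 blocks
worst_case_gb = blocks_per_year * 4e6 / 1e9
typical_gb = blocks_per_year * 1.7e6 / 1e9
print(f"worst case: ~{worst_case_gb:.0f} GB/yr, typical: ~{typical_gb:.0f} GB/yr")
# worst case: ~210 GB/yr, typical: ~89 GB/yr
```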
Stephan Livera (18:33)
Okay, so a few things. I mean, there’s lots of different directions we can go out of what you spoke about there. But I guess this question, like, okay, one point that I’ve seen discussed, and I believe Jimmy Song and Murch have sort of gone a little bit back and forth on this idea, which is about very small UTXOs, right? And so this is an area where, as an example, Murch from the Core camp, now again, he’s one person, Core is not a monolith, but his view is actually using policy to discourage certain forms of
DOS in a sense, a very, very small UTXO, like one-sat outputs and things. He sees that as, yeah, that’s a good thing to do. Whereas I guess if you go to the full other end of the extreme, if you’re full Libre Relay, it’s just like, yeah, if it’s consensus valid, go for it. Where do you stand on that? Would that be a problem? Like as an example, hypothetical: if somebody said, hey, Kevin, even though it’s not policy, I’m just gonna create many, many one-sat outputs, because I know your node has to store all of them, and it’s maybe computationally more costly for the network to deal with that.
Kevin Cai (19:44)
That’s a great point. you know, I try to view these things as less of a binary and more of a spectrum. So on the one end, you’ve got the most harmful type of, you know, DOS, which is a hard to validate transaction. And then maybe on the other end, you’ve got something that’s relatively benign, like an op return embed. I think that when it comes to dust limits, we’re looking at something that’s closer to the harmful side, but not necessarily enough that it would make me worry about it, right? I think that it’s definitely good to set a semblance of
a dust limit that makes sense, so that you’re not creating an output that costs more to spend than the value it holds. But on the other end, I think there’s a lot of misinformation out there regarding the UTXO set that drives a lot of this panic. One of the things I see on Twitter all the time is that the UTXO set needs to fit in memory for a node. This is not true. You can have a 100-gigabyte UTXO set and still run a Raspberry Pi with Bitcoin Core. It’ll do the job just fine.
Right. A lot of the reasons why people end up having such a slow node come down to misconfigurations of their hardware and not necessarily because of particular transaction history that has happened. That being said, I think if you put a gun to my head and you said, Kevin, you need to choose between making the current dust limit enshrined in consensus or making it just, you know, no policy at all, and it’s just a free-for-all and everyone is making one-sat UTXOs, I’d say I would lean a little bit closer
to making it a consensus rule. But again, it’s not something that jumps out to me as necessarily urgent, right? I think I would look at this from a more reactive perspective rather than a proactive perspective. I would like to see some evidence that people are actively abusing this and then move to enshrine this in consensus and change it. Until then, I don’t really think there’s much point in changing consensus in order to try and prevent a certain kind of bad behavior.
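For context, Bitcoin Core's default dust threshold is derived roughly as follows; this is a simplified sketch of the policy logic, assuming the default dustrelayfee of 3 sat/vB:

```python
DUST_RELAY_FEE = 3  # sat per vbyte, Core's default dustrelayfee

def dust_threshold(output_size, spend_size):
    # An output is "dust" if it holds less value than the combined cost of
    # creating it and later spending it at the dust relay fee rate.
    return (output_size + spend_size) * DUST_RELAY_FEE

print(dust_threshold(34, 148))                         # P2PKH:  546 sats
print(dust_threshold(31, 32 + 4 + 1 + 107 // 4 + 4))   # P2WPKH: 294 sats
```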
You know, the example I always use is when we look at the Bitcoin Core repository, there are files that haven’t been touched in years and years and years. Almost all of those files are consensus, right? You would have thought, you know, somebody at some point would have at least jumped in and made a linting change, fixed up some of the formatting there, maybe refactored a function or two, made it a little bit cleaner, maybe not even, you know, delving into the aspects of performance. However, consensus changes are so incredibly fraught with peril that, you know, we don’t even lint them. We don’t even try to fix the formatting of the file and have all of the code behave exactly the same, because there’s just no point in going to the consensus level unless there’s an urgent reason to change consensus.
Stephan Livera (22:21)
I see. Now, I’m not in the not camp as you know, but this is an argument I’ve seen from the not camp, which is, look, the dust filter is an example of a filter that works. And so what’s what would be your answer on that? Is it that it’s that there’s not an economic or social reason to go around that filter? Or like, how are you explaining that? Because I guess
This comes up in the argument about, and again, maybe it’s an oversimplification of filters work or filters don’t work. And maybe it’s more like, no, there’s actually a bit of a nuance you have to explain. How do you see that around the dust filter? Is it that the dust filter works?
Kevin Cai (22:59)
Yeah, so I think that the easiest way to relate this to a lot of people is not looking at the dust filter, because not much has changed there, right? And like you said, there’s not a lot of people who are trying to get around the dust limit. You know, we don’t see meta protocols being built with one-sat outputs, right? They’re going with dust outputs. And I think what ends up happening here is a lot of people conflate an equilibrium with the sort of actual barriers that exist. What do I mean by that? Right. Let’s look at something that is
pretty orthogonal to the dust limit, which would be sub-sat fees. So sub-sat fees were obviously not being carried and relayed by a majority of the nodes in the network for a very long time. One sat per vbyte was the minimum. So how did that shape the development of wallets? A lot of wallets chose an integer, so one would be the lowest value they could put in, including Sparrow wallet. But then we saw a small group of people starting to put sub-sat fee rate transactions on chain, and that actually caused more and more people to consider: okay, wait a second, this is now possible, miners are willing to mine this, this could be something interesting. And as they continued pulling on that thread, they realized that what we’re observing, and this is a graph that a lot of people in the Knots camp will post a lot, is, you know, “sub-sat summer actually proves that filters work, look at this huge spike at one sat per vbyte.” But I think all that screenshot, all that graph shows you, is that there is a cluster at one sat per vbyte, right? It shows that wallets are still defaulting to one sat per vbyte. It shows that there’s a latency, a lag in adoption of a new de facto standard. It shows that users perceive one sat per vbyte to be the minimum safe fee. What it does not show is that that is actually the minimum safe fee, right? It shows that there is a latency in which
knowledge becomes available to the public and then that knowledge gets widely disseminated. And if we follow the timeline for sub-sat summer, what we saw is that once this knowledge became disseminated enough, Sparrow wallet had a PR go up that would check to see what your minrelaytxfee was for your Core node if you were connecting to a Core node, or check it against Electrum if you had it connected to Electrum, and it would allow you to just drag that slider down. Suddenly, whereas the previous day you could not do sub-sat transactions, the next day you could.
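The Sparrow change being described amounts to asking the attached node for its actual relay floor instead of hardcoding 1 sat/vB. A sketch of the idea (not Sparrow's actual code), assuming a local bitcoin-cli:

```python
import json
import subprocess

# Ask the node what it will actually relay, rather than assuming 1 sat/vB.
info = json.loads(subprocess.run(
    ["bitcoin-cli", "getmempoolinfo"],
    capture_output=True, text=True, check=True).stdout)

btc_per_kvb = max(info["minrelaytxfee"], info["mempoolminfee"])
floor_sat_vb = btc_per_kvb * 1e8 / 1000  # BTC/kvB -> sat/vB
print(f"this node will relay down to ~{floor_sat_vb:.3f} sat/vB")
```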
And so, you know, I think that it’s a little bit confusing to people who are looking at this, because intuitively they might think if the filter doesn’t work, then we might see something that’s more akin to a uniform distribution, right? Where you’re looking at it and it’s like a bar graph and all of the bars are sort of equivalent. However, I think that really ignores that in reality, very, very few distributions are actually uniform. I would expect to see more of a bimodal distribution where you have two spikes in the graph. And what do you know? If you look at that graph, that’s exactly what you see. You see a spike that is sub-sat, and then you see a spike that is at one sat per vbyte. You know, these clusters exist because human incentives and behavior aren’t so easily attributable to a predictable model. Sometimes there’s latency in adoption. Sometimes there are wallets that haven’t updated. Sometimes there are exchanges that have hardcoded backends in terms of how they handle fee rates, right? There are all of these factors that go into how quickly something goes from existing to widely adopted and observable. Looking at observational data like this collapses all of those factors down to just the data that you’re looking at. It doesn’t contextualize it. It doesn’t tell you why a cluster exists at a particular fee rate. All it tells you is that this is the instantaneous snapshot in time.
I think what’s far more useful is to look at that data and plot it over time, right? Over the next 90 days, how much has have those distributions shifted? Are there more subset fee rate transactions that have occurred? I think the same would be true if we applied this to the dust limit. If Core V31 lowers the dust limit, you can now make one set outputs. I don’t believe that overnight you would start seeing one set outputs everywhere. I think that there would be a latency. I think that it could be months, if not years.
before we start to see meta protocols beginning to switch to that, because it takes time for knowledge to disseminate. It takes time for people’s mental models to recalibrate to reality. Even now, plenty of people out there are making their transactions at one sat per vbyte, despite knowing that you can do transactions at 0.5, that Mara will mine 0.5 sat per vbyte transactions. The reality is not instantaneously reflected in behavior. I think this is the key argument that people really just drop.
They expect that if there’s a change, there will be an instantaneous associated change in people’s behavior, but this is just not true, right? In reality, there are all sorts of latencies and externalities that actually come into play that affect how quickly changes are made. And because a lot of Bitcoiners are very risk-averse, even if they know something is possible, they may not be willing to hazard trying it out until more and more people have done it and they look around and see that this is a much more commonly done thing.
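A toy simulation of that adoption-lag argument, with invented numbers: the snapshot is bimodal at every point in time, so only the shift between snapshots, not any single snapshot, tells you whether the filter is holding.

```python
import random
from collections import Counter

def fee_snapshot(adoption, n=10_000):
    """Sample wallet fee choices when only `adoption` of users know sub-sat
    fees are minable; everyone else defaults to 1 sat/vB. Toy model."""
    return Counter(0.5 if random.random() < adoption else 1.0
                   for _ in range(n))

for day, adoption in [(0, 0.02), (90, 0.15), (365, 0.40)]:
    print(f"day {day}: {fee_snapshot(adoption)}")
# Two clusters at every point in time; the interesting signal is how the
# mix between them shifts from one snapshot to the next.
```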
Stephan Livera (28:18)
Yeah, and I guess the whole sub-sat summer thing arguably is more like a social phenomenon, right? I think it was Mononaut or Peter Todd or someone who kind of noticed this, and then I think they pointed it out, and then it sort of became a thing that mining pools started to drop their threshold, the minrelaytxfee I guess is what we’re talking about here. Now there is kind of a point back and forth there where people said some of these mining pools lowered it really low, and then theoretically or arguably, I think they were saying, the orphan rate went higher. So that was like a risk to them, so therefore they raised it back up, and there was like a sweet spot. So in that case, as you mentioned, Portland.HODL from Mara mentioned they set their limit at 0.5 sats instead of, you know, lower than that. Do you see that as, and maybe this ties into also the Knots camp argument of, more and more people running filters hurts the miners who mine spam, right? Because then they would have a higher orphan rate and therefore be less profitable or get less revenue. Can you explain that, at least from your perspective? Yeah.
Kevin Cai (29:15)
Yeah. Yeah, yeah.
For sure, yeah. So orphan rate is obviously a concern for any miner, right? You do not want to be working on a block and then have somebody have already announced a solution to that block, and now you’re not getting your block reward even though you’ve spent all of this electricity and time and effort. So yes, in regards to Mara originally taking 0.1 sat per vbyte transactions and then doing some math, recalculating their risk and then deciding that 0.5 was more appropriate:
I think that the way that they’re calculating that basically comes down to how much potential propagation delay could be introduced by filtering. So from the perspective of a filtering node, you’re looking at roughly six seconds of propagation delay, and that translates to approximately a 1% chance that the network would outcompete your particular pool, assuming that you’re running Knots in this example.
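That roughly 1% figure falls out of a standard Poisson approximation for block arrivals; a quick check, assuming a 600-second average block interval:

```python
from math import exp

# Block discovery is (approximately) a Poisson process averaging one block
# per 600 seconds, so the chance a competing block appears during a
# propagation delay of t seconds is 1 - exp(-t / 600).
def orphan_prob(delay_s, block_interval_s=600):
    return 1 - exp(-delay_s / block_interval_s)

print(f"{orphan_prob(6):.1%}")  # ~1.0% for the ~6 s filtering delay cited above
```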
I think that that is obviously a real risk if you’re running a highly centralized, highly scaled, you know, benefiting-from-economies-of-scale type operation. I think it’s probably not as big of a deal to you if you’re running a smaller mining operation, like what I’m doing in my basement with a few hundred terahashes per second of hashrate. And this actually ties in really well to the FIBRE network. So the FIBRE network was something that was launched by Matt Corallo a long time ago, and it basically used the same logic as compact block reconstruction, where
You’re relying on miners having the transactions that are going to go into a block being in their mempool already. And then the actual process of reconstructing the block is you pull all of those transactions from your mempool into the block. And then you also grab any transactions that were not part of that block. ⁓ That small slice of time, I think, is what you’re referring to in regards to the orphaning risk that’s introduced. And this is something that scales with the number of transactions that you did not have in your mempool. I think this is actually fixed by having better relay policy across the network, right?
which is one of the reasons why Core V30 changed this. If you look at, like, the guy who runs Fork Monitor, he had updated from V29 to V30 and he saw his compact block reconstruction rates significantly improving. And as a miner, this does not change. This mechanic and this sort of interaction with the network has always been the same. But one thing that people leave out in their factoring of
how miners operate is that miners do not necessarily just peer with the wider peer-to-peer network and rely on it to get transactions. Miners very often will peer over high-bandwidth, low-latency connections to other miners directly, using things like FIBRE, which also makes use of UDP instead of TCP to ensure that you don’t have to wait until an entire sequence of data is transmitted before you find out that one piece of data was corrupted or maybe missing, and then have to resend that data.
The point being, miners are already going to be peering with other miners in such a way that they’ll be able to get transactions very, very quickly. So they can reasonably minimize the amount of risk that comes from pulling in transactions that other nodes in the network did not have in their mempools, right? Because miners at the end of the day are not running just vanilla Core. They’re running Core with patches on top, sometimes patches to block template construction,
sometimes by peering with Libre Relay nodes and setting more, you know, permissive policy in regards to Relay so that they can hear about all of those transactions. Not one miner out there, to my knowledge, uses Libre Relay as their block template constructor. They simply peer with one Libre Relay node. And that’s enough to get those transactions into their own mempools, right? So ⁓ as long as you have the ability to hear those transactions from any part of the network, you don’t need to rely on the greater portion of the network in order to
be able to hear those transactions. Where I think the equation here changes a little bit is with ocean miners, with people who are mining to datum, for example. Those people are generally not running a modified version of core. They’re generally not peering with Libre Relay nodes. They’re just peering with the peer-to-peer network. In that situation, yes, by running stricter filters, you are putting yourself at higher risk for being orphaned. But this isn’t the same as centralized miners being hit with additional orphan risk, right? So…
TL;DR, I think Mara is operating out of an abundance of caution here. I predict that in the future, if Bitcoin’s price continues going up, that will drop from 0.5 sats per vbyte, maybe down to 0.4, right? At 0.1, which is truly very, very low, they’re gaining so little revenue from including these transactions that risking any bit of orphaning is just not worth it, right? But if Bitcoin’s price continues to go up, then…
it’s very possible, very likely even, that 0.25, 0.3, 0.4, somewhere around that neighborhood is still worth the potential risk of orphan blocks, which is, you know, again, low because miners can peer with one another over these high speed links. And so if Spider Pool is peered with Mara and Spider Pool has this transaction in their mempool, it’s very easy for Spider Pool to compact block reconstruct very quickly and mine on top of the tip, which might’ve been Mara’s latest block, right?
At the end of the day, it’s the people with hash who are running nodes that are determining what the orphan risk is, because miners predominantly care that they’re not mining on top of a stale tip. Now, to really drive this point home, I want to bring up spy mining. Spy mining is when you don’t actually wait to hear a block from a competing pool before mining on top of it. So let’s say that I am running my own pool, which is just me and a few friends, called Libre Pool. And I have
a fake miner, so to speak, that is connected to Mara’s pool. It’s going to be able to get the header and the hash of that block by being connected over Stratum, without being able to validate whether that block is actually valid. What does this give me? It gives me the advantage that if Mara mines a block right this second, I will be one of the first to begin mining the block that builds on top of it, because I didn’t have to wait to hear that block. I didn’t have to wait to validate that block.
Now you might say, wow, isn’t that incredibly degenerate? Right? There’s no way for you to know whether or not that block is valid. But it just goes to show that mining is such a competitive operation that some mining pools are even willing to take on the risk of building on top of an invalid block just so that they can be the first ones to kick off that race of finding the next block. And for those reasons, I think the incentives are clear to me. So long as the orphan risk doesn’t exceed a certain threshold,
miners will always be willing to open their back doors and let in a few of these sub-sat fee rate transactions. And we’ve seen that as a trend. The number of miners who are willing to accept such transactions has been increasing over time, not decreasing. And I don’t think that Mara raising the threshold from 0.1 to 0.5 disproves anything. I think, in fact, it proves that there’s staying power in sub-sat transactions, that there’s enough economic demand for these cheap transactions.
And I also think it’s no surprise if you look at the types of transactions that are using subset fee rates, right? A lot of them are people who are consolidating thousands of UTXOs that are each dust outputs, right? They’re not going to want to pay one set per V-byte. That’s going to be forfeiting a good portion of the inputs values, right? And so this is like a very unique tranche of transactions that doesn’t really try to compete with other transactions. When I’m making a one set per V-byte channel open,
I am making that channel open with an expectation that it’s going to be done in a timely manner. I have a high time preference. If I’m trying to consolidate thousands of UTXOs because I am a degenerate that gambled a ton of Bitcoin on inscriptions and I’m left with all of these dust outputs and I just want to consolidate them and get them back to a UTXO and store them on my hardware wallet, I don’t care how long that takes. I’m willing to wait weeks for a lower fee opportunity in the mempool if need be. I’ll wait for months even.
It’s not a huge priority for me to get that consolidation through, but it is a huge priority for me to not lose a ton of value in the process. And for that reason, think unless subset fee rates begin to crowd out these more standard ones at per VBite transactions in terms of both volume as well as block space, they’re always just going to be like a nice little cherry on top for the miners. ⁓
Stephan Livera (37:36)
Gotcha.
And what would you say to the general argument then, the Knots filter argument, which is: well, mining is centralized now. If it was more decentralized, then miners who mine spam would get penalized for doing so, compared to monetary-only mining, let’s say.
Kevin Cai (37:54)
Yeah, I think that’s true. I think that’s true. If we were to wave a magic wand today and we got all of the plebs across the entire country who are currently mining at home with their S-19s, with their S-9s, even their bid axes, and they all decided to either mine to a improved version of Datum, I’ll get to that in a second, or even solo mining with like a CK pool or public pool locally running instance, yes, you would have a significantly higher ability to punish spam on the network.
Now, why do I think that that’s not something that’s going to happen in the short term? The first reason is network asymmetry. So I think Matt Corallo wrote this, that the current probability of receiving a block in its entirety over a TCP connection without any retransmits being necessary is about 91%, which sounds pretty good, right? But again, going back to what we were talking about regarding orphan risk, if you’re not the first to start working on that next block, you may as well
have given up your opportunity to potentially get that block. Everything is very, very latency-sensitive, right? So if you’re a miner and you’re operating all of your template construction nodes in a data center, not only are you benefiting from economies of scale, because you can just take out a business loan and buy tons of these ASICs, you’re also benefiting from having some of the highest-speed connections. You can negotiate with your ISPs to get better SLAs, where they’re contractually obligated to deliver certain levels of uptime and performance that you’re essentially guaranteed to have; otherwise
you’ll be able to get money from them. ⁓ It just becomes very difficult for pleb miners to compete with that. And that’s one of the reasons why I think ⁓ Ocean is pushing Datum with Datum Prime as the gateway. ⁓ The idea there being that you can somewhat benefit from having these high speed connections while also still being able to do your template construction locally. The problem that I take with Datum is that, and I’ve watched this video by one of their engineers where they talk about eventually in the future what they want to do is have a subset of miners validate
the work of other miners. I think that would be the platonic ideal of Datum. I would be screaming from the rooftops that everybody needs to switch to Ocean right now if that were the case. The issue that I take with Ocean and Datum as it exists today is, if the Datum server goes offline and then comes online, let’s say, three hours later, there’s a bit of catch-up that needs to happen for all of that accounting to come up to speed, right? I think that means that essentially Ocean is this unnecessary third party.
Until we can get to a point where the Datum server can go down and operations are not affected whatsoever, it’s somewhere in between centralized and decentralized; probably closer to decentralized, but not fully decentralized. And the other issue I take there is that, you know, obviously a lot of people want to reduce their variance, right? They don’t want to solo mine with CKpool Solo. They don’t want to solo mine with Public Pool with their Bitcoin Core or Bitcoin Knots doing template construction, because they might be mining
and chugging away at this for years and years with no revenue whatsoever. But to me, I think that’s just what comes with the territory. If you can’t handle the heat, get out of the kitchen. If you can’t handle paying power bills for potentially 10 years on end without finding a block, well, that’s just how the cookie crumbles. Every single pool out there that operates, operates under this assumption. It’s only the introduction of the drug of FPPS, where you’re paid per share,
that has made people so averse to high variance, where they want to just get a small amount of payouts constantly, even though, if the pool has higher-than-expected luck, they don’t get a share of the increased earnings, right? This is how F2Pool operates, for example. So a lot of stuff that Ocean is doing is great. I would just like to see them push a little bit harder on Datum. And then the narrative will generally be true that you can punish spammers, because
the hash rate is not coming from a handful of centralized mining operations. But I also think it’s important to temper our expectations here, because allowing this decentralized group of miners to bloom requires significant infrastructural upgrades as well. You need the average person with their average internet being a lot more reliable, being higher speed, being able to peer with tons and tons of nodes, being able to use things like FIBRE to get blocks pushed
in a way that doesn’t require retransmits. Everything needs to get tightened up, right? Mining is not a sloppy enterprise. It requires a lot of attention to detail and precision in order for you to be viable, right? So, you know, for that reason, I always take issue with people saying, well, I’m fixing the decentralization of mining. I’m mining to ocean, right? I’m doing my block template construction locally. Yes, that’s definitely one part of the equation.
But so long as you’re dependent on Ocean to do the accounting correctly, for you to be paid out correctly, that’s always going to be a point of failure. And until we can refactor that out, and to Ocean’s credit, they have talked about doing this, ⁓ until we’ve reached that point, I think there’s still a lot of work that needs to be done before we can truly decentralize hash power.
Stephan Livera (42:58)
I see. Now, on the question of Libre Relay, because we’ve been talking about that: given that this exists now, it basically means you can get non-standard transactions mined. Is it fair to say that the filters that matter now are the filters that a miner applies on their node?
Kevin Cai (43:20)
I think, perhaps to be more specific here, it’s less about the mempool policy that a miner is setting and more about what they’re willing to permit. So, policy in the more generalized term, not referring to mempool policy. Right, exactly. Their policy in terms of what they’ll put into a block. Now, we’ve seen that miners have unsuccessfully tried to…
Stephan Livera (43:34)
Right, as in what will they put into a block is the question.
Kevin Cai (43:44)
make more permissive certain things like the sigops limit. We’ve seen invalid blocks get broadcast onto the network because a miner misconfigured a portion of the modified daemon that they’re using for template construction. So we know that miners tinker with these settings. We know that they’re not just downloading the latest release of Bitcoin Core and using that to mine on top of. But ultimately, yes, I think the answer is: what miners decide they’re willing to mine shapes the reality of
what is going to be on chain far, far more than filters do, because at the end of the day, a filter is simply about the means of transport, right? This would be like if I said: if you’re wearing a blue shirt, you can’t get on the public bus; therefore, it’s going to become impossible for people who are wearing blue shirts to get to the park, because there’s a bus stop right next to the park. It completely ignores the fact that you could just hop in your car and drive to the park if you want. You could even hop onto a bus line that maybe is enforcing the anti-blue-shirt policy, but maybe not really. There are all sorts of ways for you to get people who are more permissive, who are more sympathetic to you getting from point A to point B, to allow you to do that. And it’s just not a very robust way of preventing certain types of transactions, right? We saw that with mempool full-RBF as well. We didn’t need the entirety of the network to upgrade. We just needed the hash power to upgrade and a small amount of people, I think it was about 10% or so of the network, to upgrade. And then,
regardless of the flag on RBF, you were able to RBF transactions. It was that simple, right? And because it was not so controversial, not a lot of people really complained or made a big stink about it. But as soon as you flip the narrative to talking about things that are more culturally clashing, like OpReturn and data embedding, that’s where we start to get a lot more friction. And what I find to be really interesting there is that I think this cultural divide goes all the way back to when Ethereum first came into existence, right?
⁓ I’ve always been of the opinion that you should know thy enemy. So I’ve kept my ear to the ground. I’ve even made friends with a lot of people from the ETH space just to see how things are going over there to get a temperature check. And what you’ll see is that culturally, Ethereum L2s are way more comfortable with things like rollups where you’re posting to L1 for data availability. You’re posting like either a proof or you’re posting some subset of the state onto L1 and you’re anchoring onto Ethereum, the base chain.
On Bitcoin, we haven’t really done that for our L2s, right? On Lightning, there’s only two real estate transitions that are happening, maybe three if you count HTLC settlement. You got your channel open and you got your channel closed. Those are the two times that you interact with L1. You’re not really pushing a whole lot of arbitrary data on chain, right? All of that, all of that state, all of that, you know, state machine transition, all of that is being stored off chain. So this, this like even technical cultural limitation, I think is driving a lot of the divide and causing people to
interpret the OpReturn limit being raised as encouraging it, rather than being a form of harm reduction.
Stephan Livera (46:44)
I see, yeah. ⁓ So let’s get into the whole spam war stuff. Obviously we’ve been kind of indirectly talking a bit about it. ⁓ Let’s talk a bit about what exactly constitutes spam in Bitcoin, right? And maybe you want to distinguish, and you were touching on this a little bit about maybe we have to distinguish here between let’s say content spam per se and genuine technical DOS spam. And I guess the other kind of…
sort of thing that gets thrown in here is like, okay, there are filters that relate to what’s colloquially known as upgrade hooks. And so that’s another reason why these filters exist. So let’s talk a little bit about that. What constitutes spam in Bitcoin? How are you thinking about that?
Kevin Cai (47:26)
So I will preface everything that I say from here on with “in my opinion,” right? Because my stance here is that everything is subjective. In my opinion, embedding data that is purely non-monetary in nature, and I’ll get into the specifics of how I categorize that, it’s a bit fuzzy when you get into certain edge cases, right? But I think if the intent of the transaction is purely non-monetary in nature, that is a fairly good basis for adding a point. And for me,
whether or not something is spam is not a binary decision that I make right off the bat, but more of a points-based system, where if you accumulate a certain number of points and exceed some threshold, then the probability that you’re spam is much higher than the probability that you’re not. So let’s say that this arbitrary threshold is 10 points, right? I think if your data is purely non-monetary in nature, if it’s just embedding a JPEG, for example, like an inscription, that’s going to be two points right off the bat. I’m a little bit suspicious that this is probably spam.
If your data is not facilitating a layer to scale on, that’s going to add like three points, right? So now I’m at 50 % probability that this is spam for like an inscription, for example. Then I’m going to assign a few points based on who is making this transaction, right? Did this transaction come from a Bitcoiner who’s maybe just testing out a new platform that exists? Did it come from someone who’s in the ETH space, who’s trying to reproduce something that exists in the ETH space, right?
So an example I’d give there is the BRC-20 standard. If you look at how the BRC-20 standard is structured, it’s structured in a way that more closely resembles interacting with an ETH smart contract than something that would exist on Bitcoin. So for a BRC-20 transaction, I’m adding three more points. Now I’m at eight out of ten, right? And then the remaining two points, I think, mostly come down to: is this trying to compete with monetary transactions on a fee-rate basis? Is this paying a 100 sat per vByte
fee even though the median in the mempool is like 20, right? If it’s doing that, then I assume that it is highly time sensitive and is trying to compete with monetary transactions, both in terms of time and fee. Yes, now I’ve hit the threshold: this is spam. But I think the line becomes a lot fuzzier in edge cases. For example, let me pitch you this idea. I have a swap provider and I have a preimage, right? And in order to settle a transaction, the preimage that corresponds to a particular hash has to be released.
But you don’t know what the preimage to that hash is, right? So I could just as well embed a tiny little JPEG or a PNG or what have you, and use that as the preimage to the hash. And then when you go to claim the funds, I release the preimage on chain, and that is now spam. However, going back to the magic wand idea, where you can make some outcome happen magically with no regard to how you would actually make it happen: if I had a magic wand where I could wave it
and say any spam that is on chain is instantaneously destroyed, smithereens, right? Have I not just stopped a monetary transaction from going through? One that I didn’t know was spam until the reveal stage for the preimage. I think that I would have, right? And in that sense, you would be committing a particularly heinous kind of censorship, at least in regard to the Bitcoin ethos: you’ve stopped a monetary transaction because one step of facilitating that monetary transaction happened to reveal spam.
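To make the points system concrete, here’s a minimal sketch in Python. The point values and the ten-point threshold come from Kevin’s description; the field names, the five-times-median fee check, and the scoring function itself are illustrative inventions, not an actual classifier anyone runs. Note that the preimage example is exactly what such a scorer cannot see: nothing in the transaction itself tells it that the “spam” bytes are settling a swap.

```python
# A toy version of the "is it spam?" points heuristic described above.
# Point values follow Kevin's examples; everything else is illustrative.

SPAM_THRESHOLD = 10

def spam_points(tx) -> int:
    points = 0
    if tx["purely_non_monetary"]:          # e.g. embedding a JPEG inscription
        points += 2
    if not tx["facilitates_l2_scaling"]:   # not anchoring a layer like Lightning or a rollup
        points += 3
    if tx["mimics_eth_ergonomics"]:        # e.g. BRC-20's ETH-contract-like structure
        points += 3
    if tx["feerate"] >= 5 * tx["mempool_median_feerate"]:
        points += 2                        # competing with monetary txs on time and fee
    return points

brc20_mint = {
    "purely_non_monetary": True,
    "facilitates_l2_scaling": False,
    "mimics_eth_ergonomics": True,
    "feerate": 100,                        # sat/vB
    "mempool_median_feerate": 20,
}

print(spam_points(brc20_mint))             # 10: reaches the threshold, "spam"
```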
For that reason, I’m generally hesitant to try and project my own subjective view on what is spam and what is not spam and translate that into actual action. I think it’s fine to look at a transaction and go, wow, this is spam, this really sucks, right? We should figure out who’s making these types of transactions and try to encourage them to do them in a better way. I don’t think it necessarily creates enough incentive for us to try and stop them. So what do I mean by that? I think Citrea is a great example of this, right? Citrea gets thrown around a lot as
the reason why the OP_RETURN limit was raised. Now, setting aside that that’s completely inaccurate, let’s talk about how Citrea’s justice transaction requirements play out. They could use an 80-byte OP_RETURN and then two fake taproot pubkeys, because they have a little bit more data than 80 bytes. And that would work just fine for Citrea, right? It makes no difference to them whatsoever whether they use an 80-byte OP_RETURN and two fake taproot pubkeys or a single non-standard OP_RETURN. What it does make a difference to is
the commons, the public resources of nodes, right? Obviously I would prefer to store a roughly 100-byte non-standard OP_RETURN, because it doesn’t enter and pollute the UTXO set forevermore. Even though this is an edge case of a transaction, I still want to choose the technically superior solution of the two, right? However, a lot of people didn’t see it that way. They saw it as: you’re raising the OP_RETURN limit, therefore you’re putting a neon sign above your head that says, please spam me.
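Rough back-of-the-envelope arithmetic for the two encodings, as a sketch: the 144-byte payload is an assumed stand-in for “a little bit more data than 80 bytes,” and the byte counts cover outputs only, ignoring the rest of the transaction.

```python
# Two ways to embed ~144 bytes of data, sketched with standard output sizes.
PAYLOAD = 144  # assumed payload: "a little bit more data than 80 bytes"

def op_return_output(n: int) -> int:
    # 8-byte amount + script-length byte + (OP_RETURN + OP_PUSHDATA1 + len + n)
    return 8 + 1 + (1 + 2 + n)

P2TR_OUTPUT = 8 + 1 + 34  # amount + script len + (OP_1, PUSH32, 32-byte fake "key")

# Option A: 80-byte OP_RETURN plus two fake taproot pubkeys (2 x 32 data bytes)
option_a = op_return_output(80) + 2 * P2TR_OUTPUT   # 92 + 86 = 178 bytes
utxos_created_a = 2            # unspendable outputs that live in the UTXO set forever

# Option B: one single (pre-v30 non-standard) 144-byte OP_RETURN
option_b = op_return_output(PAYLOAD)                # 156 bytes
utxos_created_b = 0            # provably unspendable, prunable, never enters the set

print(option_a, utxos_created_a)  # 178 2
print(option_b, utxos_created_b)  # 156 0
```

The single OP_RETURN is both smaller and leaves nothing behind in the UTXO set, which is the “technically superior” trade-off being described.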
I don’t see it that way, right? And I think we can relate this to the real world as well. In the late 90s, there was a flight operated by a Brazilian airline, and very tragically, somebody had smoked a cigarette. This was a flight that allowed you to smoke on board. Somebody smoked a cigarette, didn’t put it out fully, and put it into the waste bin in the bathroom. Consequently, the waste bin lit up in flames, the bathroom lit up in flames, and the plane and all of its occupants perished in a crash.
Now, shortly after that, the FAA started requiring that regardless of whether or not you allowed smoking on your flight, you would have to include an ashtray in the lavatory. Now, if you took the approach that a lot of people from the Knots camp have taken to the OP_RETURN limit being raised, you might say: the FAA is clearly endorsing that you smoke on the plane; why else would they put an ashtray on board? Right? But I think if you zoom out and look at the bigger picture, you would see that the FAA was simply taking into account human behavior:
that even if something is not allowed, it’s very possible that somebody will try to skirt the rules. And if they do, and you don’t provide them a less harmful alternative, they may either unintentionally or intentionally end up doing the worst possible thing they could do in that scenario, which is trying to hide the lit cigarette they smoked in the lavatory by stuffing it into the waste bin. And maybe we get a repeat of that incident. So I think a lot of this comes down to what your ideology is.
Are you teleological? Do you believe that an object exists because of its purpose? Or are you an instrumentalist? Do you believe that we have found a purpose and ascribed it to the object? Take this fork, for instance. Does it have these tines on the end because that’s what facilitates us using it to stab meat and eat food with it? Or did we look at a farming fork, right? Did we look at other
similar systems, like a skewer that we were putting our meat on and roasting over a fire, and say, wait, we could use this in a handheld format? Did the form factor of the fork arise because it was easier to manufacture than other potentially superior solutions, like a spork, right? I find myself in the instrumentalist camp. I think that we create objects because of a smorgasbord of reasons, because of a bunch of incentives that are pushing and pulling us one way or another.
And then once we have something that is useful, we ascribe a purpose to it after the fact. I think Bitcoin is the same, right? Satoshi set out to create a very, very resilient database that would allow you to conduct monetary transfers, right? But that doesn’t mean that Bitcoin can only conduct strictly monetary transactions in the way that Satoshi outlined. Satoshi doesn’t talk about scaling in layers. That’s not mentioned in the paper. Lightning doesn’t exist in
the white paper. That doesn’t mean, right, that Lightning is not Bitcoin or that Lightning has no place in Bitcoin. Similarly, I think systems like Citrea, which aim to steal users away from Ethereum and EVM-based chains and bring them to a platform anchored in Bitcoin, fit the traditional definition of being a Bitcoin maximalist: that Bitcoin is the only blockchain we actually need to make all of these systems work, that we can have them all plug into Bitcoin,
and that they don’t need to exist as independent entities, right? We don’t have five internets. We have one internet, and we have all of these systems that plug into the internet and use it as the communication layer that connects everything together. OpenTimestamps is not a monetary protocol, but it’s intensely useful for Bitcoin projects, specifically because you don’t want a piece of software that is released on a particular date and then somebody goes and tampers with it, right? So being able to
timestamp that a particular piece of software was released and signed at a point in time is very useful to Bitcoin. And I would argue that despite being non-monetary in nature, it only earns itself a few points on my is-it-spam scale, and therefore it’s not spam. I think, you know, whenever somebody is selling you their criteria as a simple yes-or-no answer based on three seconds of thought, you should be immediately suspicious that
their bias has clouded their judgment and their ability to objectively look at something and say, is this detrimental to Bitcoin? Does it compete and try to crowd out monetary transactions? Is it being made by somebody who has no skin in the game, who doesn’t benefit from Bitcoin becoming better and also similarly isn’t hurt by Bitcoin becoming worse? I think all of those factors need to be fused together into a holistic viewpoint that then informs you.
on whether or not this is spam. And I don’t think that that’s something you could do on an automated basis, right? If Knots has a filter in place for checking if nLockTime is equal to 21, that’s not checking the context or the intent of a transaction that sets nLockTime to 21. It’s just looking at the contents of the transaction in terms of that specific field, right? And that’s a gross oversimplification of whether a transaction is good or bad for Bitcoin, because at the end of the day, if a transaction is paying fees,
I view it as starting out from a place of good, innocent until proven guilty.
Stephan Livera (57:31)
Okay, so as I read you then, BRC-20 is spam, but JPEG inscriptions on their own are not spam in your view. What about Stamps? Is Stamps spam?
Kevin Cai (57:41)
Stamps is absolutely spam. I think Stamps actually would not exist without inscriptions having paved the road for them. If you look at Mike In Space, he routinely talks about how he has this very confrontational view of inscriptions being prunable data. And he says, you know, prune this, you can’t get rid of me, I’m here to stay. It’s almost a very rebellious way of going about data embedding. So for that reason, I think the intent there is very clear:
they want to embed arbitrary data on Bitcoin, and they want to do it to piss you off, in a similar way to how a silly tourist might go to one of the wonders of the world and scribble their name on it, right? It’s something that exists because they’re trying to take this very beautiful public resource and degrade it; they’re doing graffiti on it, right? That’s a far more straightforward interpretation of what’s going on, because they’re not trying to hide their intent, and there’s no
ambiguity in terms of their intent. Whereas with inscriptions, with people transferring pictures back and forth, I could just as easily ascribe that to people being stupid and wasting their money on Bitcoin. You know, they’re trying out this new fad that they heard about. But I don’t think that the amount of damage inscriptions can actually do is reflected in how the community treats them. I think that a lot of the time, the yardstick the community uses to determine whether something is harmful is simply comparing its
similarity to what happens in the ETH space, right? Ethereum kind of blew up their whole chain with NFTs and CryptoKitties and all of these other stupid on-chain games. So when we see something like that, or something similar to it, coming up on Bitcoin, we immediately have this very visceral reaction of disgust: we want to reject this, we want to keep this off our chain, take that crap somewhere else, right?
But I don’t think that that is a very objective way of looking at it, especially when you consider how much miner revenue came from inscriptions. I think Peter Todd was saying, in the PR that Luke had raised to filter out inscriptions, that these types of transactions might be stupid, but they correspond to a pretty large proportion of miner revenue right now. So the incentives are not in favor of miners going along with something like this. Nor is it really going to make all that much of a difference if
Casey pushes an update next week that changes the envelope slightly, or maybe gets rid of the envelope entirely, right? It’s always going to be this cat-and-mouse game. And what we know is that there’s economic demand to do stupid transactions like this. So why not let this be a self-solving problem, right? If you look at a company, and this company is purely unprofitable, all they ever do is make a product that nobody wants, what happens to that company in the long term? They go bankrupt. They cease to exist. Why must we take
an interventionist standpoint and say we must come in and destroy this fledgling ecosystem, when the probability that it survives in the long term is already low? Right? If you look at the degens who have been trading Runes, they’re all hurting badly. Many of them have lost five figures, if not more, in terms of Bitcoin. They’re probably not going to come back for the next season of, you know, Rune mania. They’re going to be hurting. They’re going to have their tails tucked between their legs, and they will probably go back to
other chains to do their shitcoining, because it’s going to be cheaper. It’s going to have a lower opportunity cost, right? If you lose a little bit of Ethereum while shitcoining, you’ve only lost a little bit of Ethereum. If you lose a bunch of Bitcoin while shitcoining, you’ve lost the hardest money, the best store of value, in the process. Your opportunity cost is massive. So again, I think it’s really funny to me that there are so many libertarians that I’ve met, and yet the very basic ideology of libertarianism is having the individual freedom
to do things in a voluntary way so long as it does not result in an immediate harm to somebody else. I think the common saying is: your right to swing your fist ends at the tip of my nose. Unless you can demonstrate to me that there is immediate harm that inscriptions are bringing, that we must stop, and that is existential, I remain unconvinced that it’s a big enough problem that we have to spring forward and act, especially if the cure is potentially worse than the disease.
Stephan Livera (1:01:54)
I see. Now, another argument I’ve seen from people is: okay, all this spam on the chain is making Bitcoin worse as money, and for that reason people should work really hard to stop it, and should implement filters or potentially consensus rule changes to stop these things. And I guess they might layer on another level of argument to say, well, some of these miners are short-term focused; they’re not thinking about the long term.
And in their view it’s on them, as the node runners of the network, let’s say, to, I don’t know, some of them see it as teaching the miners what appropriate monetary transactions are. Other people might frame it more like you have to defend the monetary use. That’s as I’m trying to understand it. Maybe people will say, no, Stephan, you’re strawmanning them, but try and respond to that, at least as I’ve articulated it.
Kevin Cai (1:02:51)
Sure, yeah, so I’ll try
to engage with that as best as I can. So I think one of the contributing factors here is actually what
Lightning Labs has been building. I think a great portion of why we don’t see as much on-chain activity today is because Lightning has been so successful that it’s offloaded double-digit percentages of transactions that ordinarily would have happened on chain. And that’s why we’re seeing blocks that are not consistently full. That’s why we’re seeing one sat per vByte fee rates. If I got into Bitcoin yesterday, I think the first question I would ask is: for such a widely adopted network that’s used for monetary transactions, how come it’s so cheap? Right?
Typically, when you see lots and lots of demand for a commodity, you see the price go up. Block space is no different; it’s a commodity. So if the price of that commodity is low, that communicates two things to me: that there is a large amount of supply relative to demand, and that there is an insufficient, or not robust enough, fee market. I think for number two,
the easiest explanation for why we don’t see a robust fee market is simply that a lot of people use Bitcoin more as a store of value than a medium of exchange. And that’s not a coincidence, right? Only recently did we see Square roll out their integration allowing merchants to accept Bitcoin. And prior to Lightning becoming a thing, this was a sticking point even for me: why are merchants going to take Bitcoin when they have to wait several confirmations to know that it’s actually in their wallet, and that I’m not going to double-spend them and,
you know, run off with the shoes that I just bought, and also the Bitcoin that I supposedly paid them? Once Lightning got introduced, it created a very different UX for merchants. And just as I mentioned before, with there being a latency period, a lag before features go from existing to adopted, that latency has created a bit of a lull on chain, where there is a lot of activity from people learning how to use Lightning. A lot of people are
spinning up their routing nodes, a lot of merchants are also trying to figure out how to take Lightning, and that creates downward pressure on how much on-chain activity there is. And then, subsequently, miners are going to be looking for fees in any possible way that they can. I remember one of the most prevalent conspiracy theories out there was that Casey was paid off by mining pools to create exogenous demand for on-chain transactions, because there were so few happening. But
in the long term, I think that this will be solved through aggregation. So what do I mean by that? Obviously, you know, Stephan, if you and I were transacting on chain, let’s just say we lived in a world in which Lightning didn’t exist, and we went out to dinner, had a few drinks, and the bill ended up coming out to about a hundred dollars, and I wanted to send you the money I owed for my half of the bill, I could send you that $50 worth of Bitcoin on chain. And if we multiplied that by, let’s say, the
transaction volume that Venmo enjoys, I would expect on-chain fees to sit somewhere around a hundred sats per vByte, maybe even higher. That could translate to a fairly expensive transaction, where it might even be more expensive for me to send you Bitcoin on chain than to do a wire transfer and pay the 30-odd dollars my bank would charge me. That, in my opinion, would be a failing of Bitcoin, right? However, it would also be demonstrating that the free market for block space is functioning properly and is healthy.
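To put assumed numbers on that: the 100 sat/vB fee rate is the one Kevin posits; the transaction size and BTC price below are illustrative assumptions, not figures from the conversation.

```python
TX_VSIZE = 140          # assumed: typical 1-input, 2-output payment, in vbytes
FEERATE = 100           # sat/vB, the rate posited under Venmo-scale demand
BTC_USD = 100_000       # assumed spot price, for illustration only

fee_sats = TX_VSIZE * FEERATE                 # 14,000 sats
fee_usd = fee_sats / 100_000_000 * BTC_USD    # $14.00
print(fee_sats, round(fee_usd, 2))            # paying ~$14 to move $50 on chain
```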
So what’s the solution to this? If you and I can’t transact on an individual basis and sort of recreate a Venmo-like peer-to-peer payment system, what is there left for us to do? I think that one particular path would be Ark, right? Where you’ve got these systems that build on top of Bitcoin as a layer, that aggregate people’s transactions, making them off chain with a possible unilateral exit if they need to go back on chain, but allowing them to save fees in the interim. Then it might not be so insane for me to pay for one transaction,
which might be expensive, to board the Ark, do a bunch of transactions, hundreds if not thousands of transactions, on the Ark, and then finally, if I lost my conviction in Bitcoin, or if I needed to sell some of my Bitcoin for fiat and the exchange that I’m using doesn’t have Ark integration, I might pay for a second transaction that’s expensive. And you might be seeing a similarity here with Lightning. I think all of these types of scaling systems,
these types of layers, would allow there to be a robust fee market while still letting monetary transactions happen. We just have to take a more efficient engineering approach here. Rather than having individual users submit transactions and pay fees on an individual, per-event basis, we would instead have them onboard to a layer two, do all of their transactions, or at least a majority of them, there, saving tons of fees in the process, and then maybe pay one exit transaction’s worth of fees. In that sense, I think it’s very easy to compete with traditional banking, which
can easily cost the merchant 3% of volume, right? And can also cost merchants significantly more through things like chargebacks. I think if we’re doing transactions with merchants on systems like Ark or systems like Lightning, then it’s much more likely that monetary use cases will prevail and be protected, while also having a robust fee market that naturally acts like an immune system against spam, right? Because as a spammer,
you’re not there to compete with monetary transactions. You’re trying to find bottom-of-the-barrel fee rates. You’re trying to buy up the block space that is very, very cheap. You’re like an ASIC miner buying electricity, right? ASIC miners don’t go out of their way to operate in jurisdictions that have very high electricity costs. No, they seek out very low electricity costs, in some cases free, if they can get it: if they can partner up with a hydroelectric dam, if they can find a natural gas deposit and bring their own infrastructure and run their own generators and generate their own power on site.
They’re going to do that instead. Spammers are no different. Spammers are looking for bottom-of-the-barrel fee rates. They’re looking for block space that nobody else wants. And they’re trying to time their transactions so that they can get into blocks at some point in the future, but they’re not necessarily super time sensitive. I will say that BRC-20 is the exception to this general heuristic. But we’re talking about two fundamentally different kinds of transactors, right? The spammers are not
looking at this as a value transfer. They’re not looking at this as something with a deadline that they want done by a certain point in time. They have a very, very long-term approach, in the sense of: I just want to get this data on chain at the cheapest possible rate, because storing data permanently is always going to be a better value proposition than paying AWS on a monthly basis for cloud storage. It’s always going to be in my favor, as somebody who wants my data to be there perpetually, to pay once and,
ideally, pay as little as possible. So for all of those reasons, I don’t think that spam is going to crowd out monetary transactions. I think it’s a little bit backwards, actually: if there isn’t enough monetary transaction demand in the market, then we see the inefficiency of the block space commodity market, which then ends up being filled by actors looking to purchase the cheapest possible block space. But if you get rid of the cheapest possible block space,
and you start spreading fees over a longer period of time, amortizing them, then it becomes a very small problem in the long run, naturally. And you don’t need to be an interventionist for that to occur.
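A sketch of the amortization argument with invented numbers: boarding and exiting an Ark (or opening and closing a Lightning channel) are two expensive on-chain transactions, and everything in between is nearly free.

```python
ONCHAIN_FEE_SATS = 14_000   # assumed cost of one expensive on-chain tx (see above)
N_PAYMENTS = 500            # assumed number of payments made while on the layer two
OFFCHAIN_FEE_SATS = 1       # assumed per-payment routing/liquidity fee

all_onchain = N_PAYMENTS * ONCHAIN_FEE_SATS                        # every payment on L1
amortized = 2 * ONCHAIN_FEE_SATS + N_PAYMENTS * OFFCHAIN_FEE_SATS  # board + exit only

print(all_onchain, amortized, amortized / N_PAYMENTS)
# 7,000,000 sats vs 28,500 sats total: roughly 57 sats per payment once amortized
```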
Stephan Livera (1:10:09)
Now, I think the… just losing my… I’m just thinking…
So one other point on BRC-20, as you said. And I’ve seen this comment as well, that inscriptions actually get priced out pretty quickly, right? As soon as there’s real monetary demand, it’s just way too expensive for them. But maybe one thing that got a lot of people angry, at least in, let’s say, late 2023, early 2024, around there, is that there was this whole BRC-20 competitive mint process, right? Which you touched on. And my understanding is,
there was like a three-stage process, or two, I think it was like deploy and then mint, or something like this. But the point is, there were a lot of people trying to get their transactions into the next block, and because of the so-called competitive mint process, that was bidding the fees to the moon, right? We were seeing ridiculously high sats per vByte fees. It’s subjective, but I think that’s what got a lot of the Knots camp people angry.
And I was kind of annoyed about that too, because I thought, hey, this monetary network is being degraded. Now, of course, yes, I was using Lightning. But there were concerns people had that, hey, it’s hard for people to even start using Lightning because the fees are so high, this kind of thing. And, I’m sure you’re in the Lightning world, there was this whole argument that when fees spike so quickly, a bunch of Lightning nodes just kind of force-close, and that causes other issues,
Kevin Cai (1:11:32)
For sure, yeah.
Stephan Livera (1:11:42)
that kind of thing. I think some of that has been, or is going to be, resolved with the zero-fee anchor stuff, but maybe you want to touch on that.
Kevin Cai (1:11:49)
Sure, yeah. So as you mentioned, you know, the BRC-20 system is designed in a way that it’s ergonomically very similar to Ethereum. And I always say: know thy enemy. In Ethereum’s world, if you look at some of the periods with the highest fee-rate spikes, it was largely due to very similar deployment contracts, where you’d have a shitcoin with a mint function, and once the contract was deployed, people slammed that mint function as much as possible. And as you know, in Ethereum, or maybe you don’t know, because you don’t use it, but I used to, you know,
I had a lot of friends in this space who did NFT trading and whatnot, and they taught me a few things about Ethereum. One of the things they taught me is that in Ethereum you can make a transaction, submit it, the transaction can be accepted, sequenced if it’s on an L2 or just confirmed on the base chain, and then nothing happens, because the contract state is entirely enforced by ordering, right? So it doesn’t matter if you, Stephan Livera, submitted a minting transaction on Ethereum. If I submit a
transaction on Ethereum that paid a higher fee, and I ended up being processed by that smart contract first, before you, then you’ve paid a potentially high fee, because you looked around at the median fee rates in the mempool and paid something comparable. But because I paid a humongous premium over you, or perhaps even went through a private mempool, my transaction gets confirmed and processed first, I will have minted those tokens, and you’ll be left with nothing, having paid the fee for nothing.
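A toy simulation of that first-come-first-served race. The supply cap, names, and fee rates are invented, but the mechanic, where fee-priority ordering decides who minted while losers still pay, is the point being made.

```python
# Toy FCFS mint: only the first MAX_SUPPLY mints count, but everyone pays fees.
MAX_SUPPLY = 2

# (name, feerate paid); miners/sequencers order by feerate, highest first
mint_attempts = [("stephan", 20), ("kevin", 500), ("carol", 25), ("dave", 19)]

ordered = sorted(mint_attempts, key=lambda a: a[1], reverse=True)
winners = {name for name, _ in ordered[:MAX_SUPPLY]}

for name, feerate in mint_attempts:
    outcome = "minted" if name in winners else "got nothing"
    print(f"{name}: paid {feerate} sat/vB, {outcome}")
# stephan paid a median-ish fee and got nothing; kevin's huge premium won the mint
```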
I suspect that this sort of interaction, and the hype that it creates, is the point. Because that’s the other aspect of this: a lot of people will use a mempool’s fees as a proxy for activity, right? If people are getting all excited because they’re seeing tons and tons of transactions minting this new shitcoin, that’s going to create FOMO, and that’s going to create demand. It’s a feedback loop, right? So I suspect that the design of BRC-20 was very intentional.
It was not just bad engineering. It was people trying to bring the same hype-cycle mechanism from Ethereum into Bitcoin. And yes, a lot of people were affected by this during the period when it reached peak popularity. But again, I think that this is a bit of a short-sighted way of looking at the problem. In order to ascertain the true harms of a system like this, you have to zoom out and look at the longer-term impacts. And today, BRC-20 is a shadow of its former self. It’s a husk, even, right?
One of the reasons I think this is the case is because the more hype you drive and the more losers you create, losers in the sense of losing economic value while interacting with that system, the fewer people come back in the future, and the less sustainable it becomes. And just like a Ponzi scheme or tulip mania, once it’s over, it’s over. It’ll be a long time before you have to deal with it again. In some senses, I think it’s actually better for us to get the pain over with, to front-load that pain, as opposed to if BRC-20 had been better engineered. Let’s say that
instead of this competitive minting process, there was a more deterministic way of going about it. Maybe there’s a sequence number that you could assign to yourself during the mint process. So I could pay one sat per vByte and get sequence number six; you could pay 100 sats per vByte and get sequence number seven. And it wouldn’t matter whether my confirmation happened in a timely manner or not; both of us could get the shitcoins out of that contract. I think that would actually be significantly worse. It would give sustainability to
the scam protocol and allow it to persist for a much longer period of time. By contrast, with the amount of people that were burned by BRC-20, I think that inherently made it less popular. It made it have less staying power. It made it more unpopular for the spammers, for the scammers. And as a result, what we see is that what prevails today is the spamming solution with the lowest friction,
the one with the lowest possibility of you being burned, which is inscriptions, right? And even inscriptions were not perfect. Inscriptions had a problem in which a nonstandard transaction paid to a zero-sat output, and that broke the ability to track how an inscription would move, because of the FIFO processing logic. So as a result, there was a schism in the inscriptions world, where ord had to be updated by Casey to add the concept of a cursed inscription, right?
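A toy illustration (emphatically not ord’s actual code) of why first-in-first-out value tracking has trouble with a zero-sat output: input value is assigned to outputs front to back, so an output claiming zero sats carries an empty range, and anything “located” on it has no sats to ride on.

```python
# Toy first-in-first-out value assignment, in the spirit of (but not identical
# to) how ordinal theory maps sats from inputs to outputs.
def assign_ranges(input_values, output_values):
    total = sum(input_values)          # inputs form one combined value range
    assignments, cursor = [], 0
    for v in output_values:            # outputs consume that range front to back
        assignments.append((cursor, cursor + v))
        cursor += v
    assert cursor <= total, "outputs exceed inputs"
    return assignments

# A zero-sat first output gets an empty range: there is no sat for an inscription
# to sit on, which is the kind of ambiguity that led to "cursed" inscriptions.
print(assign_ranges([100], [0, 90]))   # [(0, 0), (0, 90)]
```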
So even inscriptions were thrown off course by non-standard on-chain activity, and there were some pain points that were introduced. But because inscriptions are a pain-minimizing protocol, I predict that their staying power is significantly higher, that they’ll be around for a lot longer, compared to these scammer protocols that are trying to bring a one-to-one ergonomic experience from Ethereum, trying to approximate, if not outright transplant, that experience straight from Ethereum
to Bitcoin. Which is, again, why I include that as part of my heuristics for “is this transaction spam”: who is making this transaction? Who is designing this protocol? Is it a Bitcoiner? Is it an Ethereum user? What is their goal? What do they want to accomplish, right? With inscriptions, I look at them and I see a bunch of idiots who love to trade cards back and forth. I have a lot of friends who are into trading cards, right? They spend tons and tons of money buying a rare Pokemon card and trading amongst themselves. Some of them are even so into it that they have cards worth thousands of dollars.
Do I think that that is idiotic? Yes. Am I going to go out of my way to unilaterally try and destroy the trading card market because I’m on some sort of crusade to exert my moral imperative view onto these people? Absolutely not, right? It is just a market. It is filled with willing participants who are doing this on a voluntary basis. And so long as they’re not directly creating a harm,
to the commons, I don’t think that there’s any real reason to do anything, right? Just sit back and let the problem take care of itself.
Stephan Livera (1:17:35)
Now, we’ve spoken about fees; what about other costs or impacts on node runners? Things like hard disk drive or SSD cost, or increased internet bandwidth, right? Because, yes, people could argue: look, in 2017, we the community, or we the node runners, whatever you want to call it, we the network, agreed to SegWit, which obviously allows for a theoretical maximum block size of 4 MB, 4 million weight units, whatever. But in practice it was meant to
come out more like 1.6, 1.7 megabytes, something like that. And maybe we could argue that because of inscriptions and some of this other spam, it’s now more like two and a half megabytes, and sometimes we get these larger blocks. And obviously there’s an increased hard drive or SSD cost and increased internet bandwidth. So is that an impact of spam, a negative impact?
Kevin Cai (1:18:28)
Sure, let’s
start with storage. So storage, as I mentioned earlier, follows a power-law decrease in price and increase in density, while Bitcoin’s blockchain has a linear growth rate.
Number one, I think it’s highly predictable. So if you as a user are very serious about running a node, then you should be thinking about what the system requirements are for running that node, not only at this point in time, but also going into the future. That the growth is linear and predictable means, I think, that it’s fairly easy for you as a user to prepare for hardware upgrades you may need in the future. So if you’re running an unpruned node today that has two terabytes of space, you can very easily calculate how much more space the blockchain is going to require over the next 10 years and plan accordingly.
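As a worked example of that planning arithmetic, under assumed block sizes (the “typical” figure is an illustration, not a measurement):

```python
BLOCKS_PER_YEAR = 144 * 365          # ~52,560 blocks at one block per ten minutes
WORST_CASE_MB = 4                    # assumed: the consensus weight limit fully used
TYPICAL_MB = 1.8                     # assumed average block size, for illustration

worst_gb_per_year = BLOCKS_PER_YEAR * WORST_CASE_MB / 1000     # ~210 GB/yr
typical_gb_per_year = BLOCKS_PER_YEAR * TYPICAL_MB / 1000      # ~95 GB/yr

print(round(worst_gb_per_year), round(typical_gb_per_year))
# Even the worst case is bounded and linear: a 2 TB drive covers roughly a
# decade of typical growth on top of today's chain.
```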
Now, setting aside that hardware itself is getting cheaper, and that storage is very accessible to the average user, I want to bring in my own personal use case and how I run my nodes. So under my desk here, I’ve got three Minisforum X300 mini PCs. They’re by no means bleeding edge. They’re not top of the line. They are definitely slower than the desktop computer I’m using to speak with you, but they run Bitcoin Core v30, Knots, and Libre Relay just fine, and each one of them cost $150.
I run multiple nodes because I’m a developer and I like to test different configurations. I like to put them into little regtest networks, pit them against each other, and do all sorts of tests. But the average user is only going to need one node. So $150, plus, let’s say, maybe $200 for a relatively large SSD some years in the future, is going to bring you to about $350. And that’s if you want to build a purpose-built system. For everybody else, you can get a Dell Optiplex for about a hundred bucks. You can get a hard drive for about a hundred bucks.
That’s going to be a $200 expenditure, and you’re going to be able to run a Bitcoin node, albeit a slower one than mine, for many, many years into the future. I don’t think this is really a realistic problem for the vast majority of users, especially if they live in a place where hardware is relatively accessible. Now, I know that there are certain countries where electronics are considerably more expensive, like Brazil, where there’s a significant tariff on any imported electronics. And to that, I say:
I think the solution there is that we must continue to optimize Bitcoin Core to be able to run on lower-end hardware, but we must not be beholden to it. We can’t let that be a hard requirement, in the sense that we’re not going to apply optimization X because it’s not going to help someone who’s running on a Core 2 Duo from 2006, right? We have to draw the line somewhere. You see this a lot in Android app development. I’m a core developer for Blixt Wallet, and we have an explicit cutoff for the oldest device we’re willing to support.
And what we found is that even with that cutoff, people in Cuba, people in Latin America, are still able to get their hands on devices that are a couple of years old, right? So even if hardware is expensive, if they’re going to be using this as a replacement for their bank account, they can afford to secure a device, a mobile phone at the very least. They can use light client protocols like Neutrino, or Utreexo. And I don’t think that it’s a coincidence that
Vinteum, for example, a nonprofit in Brazil, is funding the development of things like Floresta, which embeds Utreexo to give you a Bitcoin Core-like RPC locally. I think that a lot of people have a very cypherpunk way of looking at this: they want it to be the case that if there are four billion users on Bitcoin, there should be four billion full nodes out there. I think that this is a little idealistic and romantic. At a certain point the rubber meets the road, you have to be pragmatic, and
you’re going to have Uncle Jim-type situations, where a trusted member, a pillar of the community, runs a full node on behalf of maybe a hundred users, and they plug into it through light clients or Electrum wallets. There’s a certain trade-off being made so that you can have this wide usability, even if it’s not the fully self-sovereign deployment of Bitcoin that people living in a first-world country could easily afford. So that’s my first point:
I don’t think that the hardware will necessarily be the bottleneck here. I think there are a lot of different structures you can use that will allow communities to adopt Bitcoin in a way that doesn’t require every single user to be able to afford the hardware. The second point that I have is regarding connectivity. There’s a law about internet connections, Nielsen’s law, which states that a high-end user’s connection speed increases at about 50% per year. In Massachusetts, where I reside, good old Beantown, I have gone from
living in a place whose highest-speed internet connection was 100 megabits, to having 5G home internet that maxed out at 250 megabits when there was not a lot of traffic, to now having fully symmetric gigabit fiber. And I pay the same amount per month as I have for each one of those plans. So the organic growth of network connectivity over time, I think, will also mean that running a Bitcoin node, with some extra relay requirements, is not going to be a massive
burden on people. The one thing that I have seen online that has been a point of contention is that, a few months back, I don’t remember exactly when, there was a bad actor, we’ll call him, who wrote a tool that would fake IBD downloads specifically from Knots nodes that were on residential IP blocks. And the goal was basically to troll people who had a data cap on their Comcast internet
and try to deplete it so that they couldn’t run their Knots node anymore. Now, obviously I don’t condone this behavior, but it does point out a very interesting interaction for people in areas that don’t have a good ISP available. My response there is that I think in the future we will see a great degree of penetration from systems like Starlink, from internet providers that don’t get to enjoy a regional monopoly,
and that they will disrupt things enough that running a node will be fairly trivial. We saw with the recent rollout of Starlink direct-to-mobile-handset that it is entirely possible to roll out connectivity networks that don’t require new hardware. But even if they did, Starlink has been subsidizing their dishes for quite some time now. You can get a mini dish for like a hundred bucks, which is close to what you would pay monthly to Comcast if you’re renting their router, and you’d
be able to get high-speed internet with no data caps, right? Even their cheapest plan is $5 a month for 500 kilobytes per second, continuous, no caps. You may have seen on Twitter that SuperTestnet posted a calculator for how much more bandwidth a Core v30 node uses relative to Knots. What you might not know is that he made that calculator after having a debate with me about what the worst possible case scenario was. And that worst-case scenario came out to about
10 to 11 dollars per year. And that was calculated using an Alaska ISP, where there’s a monopoly and you have a data cap in place where you pay per gigabyte, right? So when you factor all of this together, the marginal cost of having more transactions on the network is really just close to nothing. And as we get additional penetration from competitors in the ISP space,
and as we get better connectivity in general, because cities get better connectivity, which then means there’s more budget available to connect places that didn’t have connectivity before, we’ll see that, over time, connectivity growth will outpace the growth in Bitcoin’s connectivity requirements, if that makes sense.
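The contrast being drawn fits in a few lines of arithmetic: a connection compounding at Nielsen’s 50% per year versus a chain growing by a roughly constant amount per year. The starting speed is an assumption.

```python
start_mbit = 100                    # assumed starting connection speed, in Mbit/s
for year in range(0, 11, 2):
    print(year, round(start_mbit * 1.5 ** year))
# 0:100, 2:225, 4:506, 6:1139, 8:2563, 10:5767 Mbit/s, roughly 57x in a decade,
# while worst-case blockchain growth stays a constant ~210 GB/yr (see above).
```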
Stephan Livera (1:26:03)
Right, so in simple terms, the blockchain is growing linearly, but internet and other aspects are growing faster than that.
Kevin Cai (1:26:07)
And even beyond just the storage
aspects, I would argue that the amount of relayed data is also growing at a linear rate, right? Because at the end of the day, what is the point of relaying transactions that will never, ever be confirmed? There will be a convergence over time, where relay policy will reflect what transactions are routinely getting mined. So, for example, in a hypothetical scenario, let’s say all of the other mining pools that are mining sub-sat fee-rate transactions adopt MARA’s threshold of 0.5.
What I would expect is that over the long term, we would all converge on a relay policy around 0.5, because relaying transactions that pay 0.1 and are never, ever going to get mined would be a waste of bandwidth, and people will react to that. And if people don’t react to it, then maybe Core will react and tune the defaults to be more in line with reality. You know, this just goes back to what I was saying: a lot of classes of problems don’t require
human beings to constantly be tinkering, to constantly be there twiddling knobs and trying to figure out the optimal set of knobs for a particular situation. Sometimes it’s best to sit back, get some data, and inform your decision from observations captured over time that give you a better picture of what’s actually going on, instead of saying, okay, I’ll build this potentially imperfect model in my brain, and then, based on my mental model, I’m going to dictate some set of rules that make sense to me.
I think a data-driven approach makes more sense, but that also means that you have to be a little bit more hands-off to gather that data in the first place.
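For concreteness: the knob in question already exists in Bitcoin Core as the `minrelaytxfee` startup option, denominated in BTC per kvB. A node operator reacting to the kind of data Kevin describes might set something like the following in bitcoin.conf; the 0.5 sat/vB value is purely illustrative.

```
# bitcoin.conf: do not relay transactions below 0.5 sat/vB.
# 0.000005 BTC/kvB = 0.5 sat/vB; Core's long-standing default is 0.00001 (1 sat/vB).
minrelaytxfee=0.000005
```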
Stephan Livera (1:27:34)
Yeah. Now, another big argument or point that comes up in this debate is: in their minds, if a lot of people run Knots and filter the spam, it’s not going to stop the spam, but it’s going to significantly reduce it, or make it more costly, or make it take longer for the spam transaction to be confirmed into the blockchain. What’s your stance on that?
Is that true? How true is it, basically?
Kevin Cai (1:28:06)
So let me address
each one of those. The first one being cost. I think a lot of people get this idea because, in the past, Peter Todd and maybe a couple of other people who wanted to demonstrate that you could bypass filters would use services like Slipstream, which obviously has a premium associated with it: you have to pay a fee to have them take the transaction out of band, right? And so a lot of people created this linkage in their heads: okay, filters work, because we’re making spam go through an alternative channel, paying a toll to use this toll road to get to miners.
Unfortunately, this is not accurate, right? At that time, a lot of miners had not been peering with Libre Relay for one reason or another. Peter ended up sending them some DMs, writing them some emails, whatnot, and these mining pools started peering with Libre Relay, which obviously doesn’t have any marginal cost associated with it, because it’s basically just a parallel relay network that exists alongside the public one. I would even say that it’s part of the public relay network, because it interacts with nodes that are only part of the public relay network and don’t choose to peer with Libre Relay.
So that’s it for the cost one, right? There is no marginal cost. My OP_RETURN bot demonstrates this very plainly by setting a static 0.5 sat per vByte fee rate. It never adjusts the fee rate it pays. It doesn’t pay what mempool.space’s fee recommendations are; it doesn’t pay any less, it doesn’t pay any more. It always pays 0.5 sat per vByte. And relative to the average person’s transaction, who’s probably still using a wallet that pays one sat per vByte, that actually
represents a discount in how much I’m paying on a fee-rate basis, right? Now, if you wanted a perfect cost comparison to inscriptions, you would have something greater than 160 vBytes in size, and I would have to be paying a 0.25 sat per vByte fee rate to account for the 4x witness discount, right? But without getting into the technical weeds there, we’re just comparing an OP_RETURN from my bot, paying 0.5, to a monetary transaction paying one sat per vByte.
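The equivalence he’s gesturing at works out like this; a rough sketch that ignores fixed transaction overhead and uses an assumed payload size.

```python
PAYLOAD = 640  # assumed payload bytes; enough to put the OP_RETURN tx well over 160 vB

# Inscription: payload sits in the witness at 1 weight unit per byte -> payload/4 vbytes.
inscription_vbytes = PAYLOAD / 4
inscription_fee = inscription_vbytes * 1.0      # at 1 sat/vB

# OP_RETURN: payload is non-witness data at 4 weight units per byte -> payload vbytes.
opreturn_vbytes = PAYLOAD
opreturn_fee = opreturn_vbytes * 0.25           # at 0.25 sat/vB

print(inscription_fee, opreturn_fee)            # 160.0 160.0: the same total sats
```

Matching an inscription’s total fee means quartering the OP_RETURN fee rate, which is why 0.25 sat/vB is the break-even figure he cites.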
And I think everyone would agree that the OP_RETURN is objectively paying less, right? So that’s an objective, empirically observable rejection of the theory that filters work to increase the cost of spam. The second claim that’s often made is about time to confirmation. And this is something a lot of people conflate with the amount of time it takes for a transaction from my OP_RETURN bot to be mined. They look at mempool.space and they say, well,
we saw this transaction one hour ago, and it took an additional hour to get mined, so it took two hours. That’s definitely an increase in how long the transaction took to get mined, ignoring the fact that it’s paying sub-sat fee rates, right? If the bot were modified to pay one sat per vByte, and in fact I may consider adding a feature where you can choose between the two, 0.5 sats per vByte or one, if it were paying one sat per vByte, given that the mempool median was around that,
it obviously would have been confirmed sooner, right? And the second consideration for why it might take a little bit longer is that only a proportion of the miners out there are willing to mine sub-sat fee-rate transactions in the first place, and that happens to be the same proportion of miners that are willing to mine non-standard transactions, right? So you could make the argument that, just by the rules of probability, if one of those miners had recently mined a block, you might have to wait a little bit longer before your transaction gets mined. My counter to that is, well,
again, look at the trend, right? There might be only three mining pools that are consistently mining these types of transactions today, but what happens when the subsidy keeps going down? If transaction fee rates don’t immediately reflect people being willing to pay more, miners are going to be more desperate for revenue. So Foundry, which doesn’t mine such transactions, might be willing to let it go if you’re paying one sat per vByte and you have a non-standard transaction, right? You know, other mining pools, even ones
facilitated through Datum, that currently won’t take these types of transactions because a majority of their revenue comes from the subsidy, might have their calculus shifted a little once that subsidy gets smaller and smaller, right? They’ll take anything they can get; they’ll become more desperate. And I think that if you look at the trend and think about the incentives at work, the most likely outcome is that this situation is the worst it will ever be in regards to getting a non-standard,
sub-sat-fee-paying transaction on chain. I think it will only improve from here on out. I think more miners will join in mining these types of transactions. I think miners will become more desperate for fee revenue. And I think that talk is incredibly cheap when a majority of your revenue comes from a subsidy in the first place, right? We’ve seen this play out with the electric car market, right? EVs enjoyed a substantial subsidy for a very long time. And then the government got rid of that subsidy. What happened? EV prices dropped,
because consumer expectations aren’t going to change just because a subsidy is gone, right? Consumer expectations remain much the same. They’re going to want to spend relatively the same amount of money to buy the vehicle. In the same way, I think that for people who purchase block space, expectations are not going to instantaneously adjust once the subsidy decreases. They’re not going to think to themselves: well, the miner’s getting paid less from the subsidy, so I’ve got to increase the fee rate I’m paying. There’s going to be a latency period. And during that latency period, miners will experience the greatest amount of temptation
to start relaxing some of the rules of policy. There’s a lot of talk from the Knots camp that the way you fight back against this is social-layer enforcement, shaming the mining pool, saying, I’m not going to hash with this mining pool if they mine non-standard transactions. But I just don’t think that this has any real impact on an industry that has quite literally millions and millions of dollars of revenue, right? Just to really drive this point home: many Bitcoin miners actually make more when they shut off their miners,
from the deals they’ve signed with power companies, than they do from mining Bitcoin, right? So unless that changes in the long run, I think that miners are basically always going to act according to their incentives, which are: how do I pay my electricity bill? How do I pay my employees? All of those expenses are going to be denominated in USD, not in Bitcoin, right? And for that reason, people who are saying that
a farmer’s not going to kill the goose because it keeps laying golden eggs, that he wants this goose to live as long as possible, are ignoring the factual realities of what miners actually have to deal with. They have to keep their operation running, especially with these razor-thin profit margins, especially with how hard it is to secure financing. Try going to any bank and using Bitcoin as collateral.
What they’re going to tell you is that unless you’ve had that Bitcoin sitting on an exchange, never touched, never withdrawn to a hardware wallet or any form of self-custody, they’re not willing to touch it with a ten-foot pole, or count it as an asset towards things like securing a loan, right? They’re not going to count it towards your LTV. And so, consequently, you’re dealing with businesses that have
to provide a certain amount of liquidity just to keep their day-to-day operations running. If they have to forgo a significant portion of their revenue to make some people on the internet happy, it’s just not going to happen.
Stephan Livera (1:35:18)
Okay. And what about the, is there any increased cost or time of having to install Libre Relay? Not just for the miners themselves, but even for the broader, let’s say, spam ecosystem, of having to hook up with Libre Relay instead of using Core or whatever else?
Kevin Cai (1:35:35)
Absolutely
not, in much the same way that if I wanted to craft a transaction locally on my computer, you know, I need a node to do that, or at the very least I need a library that can craft Bitcoin transactions. Let’s just say, for the sake of argument, that I’ve already crafted this Bitcoin transaction. I’ve got the hex string, right? What do I do to minimize my friction in broadcasting to the Bitcoin network as it exists today? Maybe I go to blockcypher.com, maybe I go to mempool.space and click on the broadcast-transaction option, paste in the hex
string, and click broadcast. You can do the same exact thing with Libre Relay. All it takes is somebody running a mempool.space instance, or an API that allows you to push a transaction to them, and then they can broadcast that transaction on your behalf, right? If you continue down the chain of abstraction, you get something like my OpReturnBot, where you do not need to know what a Bitcoin node is, you do not need to know what RPC is, you don’t need to know what a transaction looks like, and you don’t need to know what an OP_RETURN is.
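To make the “same exact thing” concrete, here are both broadcast paths in a short sketch, assuming you already hold the signed hex. The `sendrawtransaction` RPC is standard Bitcoin Core; the mempool.space POST endpoint shown mirrors its public broadcast API, but treat the exact URL as an assumption.

```python
import subprocess
import urllib.request

raw_hex = "02000000..."  # placeholder: your already-signed transaction hex

# Path 1: your own node (Core, Knots, or Libre Relay, the RPC is the same).
subprocess.run(["bitcoin-cli", "sendrawtransaction", raw_hex], check=True)

# Path 2: somebody else's API, e.g. a mempool.space instance's broadcast endpoint.
req = urllib.request.Request(
    "https://mempool.space/api/tx",        # assumed endpoint
    data=raw_hex.encode(),
    method="POST",
)
print(urllib.request.urlopen(req).read())  # returns the txid on success
```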
All you need to know is how to type text into a box, click a button, and pay a Lightning invoice. And, you know, hypothetically, that part could even be abstracted away as well, if you had a credit card processor, for example. If I wanted to support fiat payments, which obviously I don’t, you know, I work in Lightning, I could just add a Stripe integration, and then you don’t even need to know what a Lightning invoice is. You don’t need to know what Bitcoin is, for that matter. You just need to know that you’re trying to put a message somewhere where it’s going to be stored forever,
and this is the amount you’re going to need to pay to make that happen. Everything else behind the scenes does not need to be within your purview. You don’t need to run a Libre Relay node. You don’t need to know what it is. You don’t need to know what a node does. You don’t need to know how transaction relay dynamics work. And you certainly don’t need to know which miner is going to pick up your transaction. The most ridiculous argument that I see online, a lot of the time, when I post an example of a non-standard transaction, is: well,
you had to go out of your way, you had to go find a miner, you had to hit them up on Twitter, email them, call them up and say: hey, I’ve got a weird transaction for you, can you please mine this for me? And my first reaction to that is: well, that’s just not true. I created a transaction in very much the same way that I would create any Bitcoin transaction, and then I programmatically sent it to the network using an RPC, which you would be using anyway if you were building this application on top of Bitcoin Core instead. And in fact, I do plan on removing Libre Relay
from my stack in the next month or two. I will just point my OpReturnBot at Core v30. Once it’s attained enough adoption that I don’t have to worry about manually choosing my peers, I’m going to switch out the backend, and there will be absolutely zero marginal friction. And for what it’s worth, the OpReturnBot is open source. So if anybody wanted to do this today, all they’d have to do is spend a little bit of time learning how to deploy it, and they could get it running. If I so desired, I could create an Umbrel app,
where it would be a one-click install for you to put the OpReturn bot on your Umbrel node, and then you could do all of this within the purview of your own sovereign node. You wouldn’t have to rely on or trust me as a service. You wouldn’t have to run Libre Relay, because again, once Core v30 gets enough adoption, there’s a tipping point at which it becomes very reliable to send these transactions. It’s basically going to be a self-improving situation.
Right now, of course, there’s still things that I would need Libre Relay to do if I wanted to do weird stuff like having a one-set output, for example. I’m not in the business of harming Bitcoin. That’s why I do my data embeds in OpReturn. It is the least harmful way of embedding data on chain. I want what’s best for Bitcoin. But if I wanted to create a more subversive bot that created one-set outputs, then yes, I would still need Libre Relay for that. But I would argue that that is a good thing.
And also that it enforces that people care much more strongly about a demonstrable harm, which is bloating the UTXO set, then embedding data in OpReturn, which is prunable, never enters the UTXO set and is basically a nothing burger. If you haven’t been listening to people who are trying to rile you up about things like CSAM risk, which have existed in chain for quite some time. ⁓ As of ⁓ seven years ago, I think people were using Apertus to embed illegal content on Bitcoin.
⁓ Now proponents of knots like Luke and Bitcoin mechanic would have you believe that a court would have a meaningful distinction between a non contiguous embed like an inscription for example and a contiguous embed like an op return and I’ve tried to steel man this on several occasions I’ve tried to think to myself. Okay, Alice is in a trial and Bob is in a trial They have the same exact CSAM file except Alice embedded it using an op return and Bob embedded it using an inscription and there’s no scenario I’ve been through every
possible scenario in my head. There’s no scenario in which the prosecution comes with a different outcome between these two trials, right? ⁓ So I think that a lot of the argument there is just BS.
Stephan Livera (1:40:35)
On that question, it’s probably worth
exploring this argument that Luke seems to put out, which is "sanctioned." They're saying that somehow, as of v30, it's now magically sanctioned. To me, I don't quite understand this, because even there you could also talk about the consensus limit, which is one megabyte, not just the 100 kilobytes in v30. And that's been in place much longer than just v30. I don't understand. Yeah. Yeah.
Kevin Cai (1:41:02)
I think this goes back to our philosophical differences, right?
I think this goes back down to the philosophical difference between someone who uses teleology, thinking about it from the perspective that everything is designed with a purpose as the first and foremost starting point, that the purpose is the seed and you grow the product or the idea into a usable tool, versus an instrumentalist, who builds a tool and then ascribes a purpose to it. I think that people like Jimmy Song, Luke Dashjr, Bitcoin Mechanic, they're
teleological, right? They say that it's sanctioned because the OP_RETURN embed is happening in a field that is meant for arbitrary data. They're saying that, for example, the scriptPubKey does not have that as its explicit purpose, that the scriptPubKey is used for a different purpose relative to arbitrary data, where you have a more obvious embedding of data. However, I think this overlooks the reality of
how cases like this are prosecuted, right? A judge is not going to sit there and open up a hex editor and look for CSAM. No, they're going to have a computer forensics company look at a disk snapshot. And in many cases this isn't even about CSAM. In many cases it's about corporate espionage, where somebody has stolen sensitive documents that should have never left the perimeter of an enterprise network, and maybe they're trying to take trade secrets to another company. It doesn't matter, right? In those
court cases, they hire a computer forensics expert who then looks at a snapshot of the disk and says, okay, well, here's a piece of a file, here's a piece of a file, let's glue those two pieces together. Here are some other pieces that can be glued together, and they can reassemble a file. And that's not considered by the court to be a reconstructed or transformed piece of data. It's just a file that had been fragmented across different parts of the file system. And indeed, if you look lower level on your computer, if you zoom in
on the SSD, if you zoom in on the hard drive, the bits and pieces of a file do not exist right next to each other, right? They are distributed in physically separate, isolated locations. And what your hard drive, what your SSD does, is store a lookup table of where those locations are.
And then it says, okay, here's a file, and here are all of the sectors on your storage device that make up that file. And when you go to read that file, it uses that lookup table to reassemble the file on the fly. How is this any different from an inscription, which has, within a single transaction, multiple data pushes separated by an opcode, right? If you strip out the opcodes and look at this data structure, it's contiguous data. If you don't strip out the opcodes and look at this data structure, it's a lookup table.
In either way, don’t think that this is like a good faith argument. think it’s just intended to confuse people who don’t have necessarily a very technological, you know, technical background, who don’t know about things like LBAs and sectors who perhaps are easily deceived or, you know, misled into thinking that there’s any real world difference between the two. If you have CSAM on your disk, it doesn’t matter what sort of encoding you use. doesn’t matter how you fragmented it.
it will be prosecutable, right? And the fact that CSAM has existed on chain for over seven years in Bitcoin with no prosecutions is a pretty good data point that courts understand that much like the protections afforded to Yahoo with section 230 in the case of Doe v. Bates, that you are a neutral infrastructure provider and not somebody that they’re interested in coming after for prosecution, right? In the Yahoo case, Doe v. Bates, Yahoo had like a community forum type product where they allowed people to discuss.
various news topics and some terrible people created a group that they used for sharing CSAM and the prosecution moved that Yahoo was essentially a publisher in all of this and should be prosecuted to the same extent of the law as the people who created the content. And what the court ruled was that no, we do not go after neutral infrastructure providers because they did not facilitate the creation of the content and we cannot hold people who are running infrastructure liable for the contents that
their users are producing, right? If we applied this to ISPs, every ISP on earth that has ever served somebody who has sent CSAM over the internet would be held liable and prosecuted to the full extent of law. The fact that we haven’t seen that, the fact that section 230 as a protection has not been repealed means that the government understands the difference between someone who has relayed a piece of data that they themselves did not create or facilitate or take part in creating or transforming, and they can legally distinguish that
from a person who is the origin, the nexus point, and creating this legal content. But at the end of the day, I understand that this is a lot of nuance. Not a lot of people out there are gonna be reading case law for fun, and even less people out there are gonna be trying to relate it to something like Bitcoin, which is pretty unprecedented, With the Yahoo group, we’ve seen content, and arbitrary content specifically, from users, and we’ve learned that that is something that we deal with. When Facebook first came out, people were posting gore.
on Facebook because Facebook didn’t have the necessary moderation to keep that kind of content in check. So we’re all very, very aware of how that scenario plays out because it’s a centralized platform. We know how that works. And more importantly, when that type of content is detected, Facebook very quietly gets rid of it. And so to a lot of people, the distinguishing factor here is that Bitcoin is immutable storage. You can never really delete it. But I would counter that by saying that it doesn’t change
the rules by which the courts ruled the case in the first place, right? There’s no point in their ruling that said this is contingent on you deleting the data. There’s no point in which they said the type of storage that is being used, the type of relay, the type of sending of data is what makes you complicit or not complicit. It was what role do you play?
Did you create the content or not? Did you participate in facilitating the transformation and production of that content? If you did not, if you were not the user that produced and sent that content in to the system, then you are not culpable. And I think that that will be the doctrinal way that we approach this as the courts and as the law catches up to Bitcoin as it exists. Because again, it’s very new relative to everything. you know, whenever we look at law,
As long as there’s not case law precedent that exactly matches, the best that we can do is looking at similar cases and trying to come up with, what are the guidelines? What are the standards by which this ruling was reached? And how can we apply that ⁓ to a system that is maybe not a one-to-one mapping, but shares some similarities in terms of what role that node runners play? ⁓ Even Giacomo, who is a self-described neutral party in all of this.
says that the only reason he does not run core v30 is to send a social signal. He’s not scared of things like CSAM risk. And this is coming from somebody who’s close friends with Luke, right? You know, I think that it’s important to contextualize these things and keep them within the realm of objective reality instead of going to this hysterical place where, you know, people are just getting extremely stressed out about this threat that doesn’t actually exist. At least not in the form that, you know, the people that they’ve been hearing it from.
Stephan Livera (1:48:09)
I see. So I guess
you'd put it as: this is an already existing risk, it's not a new risk. And yeah.
Kevin Cai (1:48:13)
Exactly, yeah. Core V30 did not introduce
any new attack surfaces to Bitcoin. All it did was align consensus with policy, which is something that I’ve always believed is sensible.
Stephan Livera (1:48:26)
Now, another big Knots talking point, Knots camp talking point, is this notion of, oh, inscriptions could have been stopped in 2023, right? Their idea is, well, okay, Luke put in a PR in, I think, September 2023, which I think was rejected, but if it had been accepted, it would have come in in December. But by then, there was a lot of UTXO creation that had already happened. But I think he's made this argument: well, look, Core could have done it before.
And that's why there are arguments made that Core are either negligent or corrupt, that they could have tried to stop inscriptions by filtering earlier, by trying to make the data carrier size limit also apply to that. Do you have a view on that? Was that actually feasible? Would it have actually happened that way, or do you think it would have happened some other way?
Kevin Cai (1:49:16)
think it’s fairly
unlikely that anything would have really come about from trying to block inscriptions. think maybe the most charitable explanation that I have here is that if you introduced enough friction in the beginning, know, early parts of when inscriptions were first starting to take hold, you might have been able to induce a form of infant mortality where it never really got to get to a place where, you know, it has built up enough momentum that there would be enough drive for people to update ORD. ⁓ But…
Unfortunately, I think that this really doesn’t hold up to scrutiny. Earlier today, I think maybe it was yesterday, there was a discussion with Super Testnet. And the argument was basically, what if we just soft fork every year to ban the type of inscription data envelope, right? Right now we use like the Op-If envelope and that’s why it’s part of BIP 444. What if we just keep doing that? So if we ban Op-If and then KC updates the envelope to use like OpDrop, for example.
where you’re just pushing elements onto the stack and then dropping them. What if we just soft work again and then we banned that and then so on and so forth in perpetuity, right? Like annual soft forks essentially to get rid of spam and purify the chain. I think that there is an asymmetry here that a lot of people are not taking into account, right? When we talk about ⁓ the shit coin industry, when we talk about things like MetaMask, when we talk about like Rabi Wallet, when we talk about all of their ecosystem, they’re built on
essentially common core libraries, right? Each one of those wallet developers is not re-implementing the wheel. They’re not building a whole bunch of stuff on top. They’re mostly building UI stuff. All of the consensus, all of the code, all of the transaction crafting, all of that is handled by shared libraries between all of them, which means that they benefit from having a central location where they can update all of the logic and then be able to work around, you know, the new consensus rules that you’ve rolled out as part of your software. We can apply the same thing to ORD.
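For illustration, here is a sketch of how trivially the envelope can change. Both functions build toy script fragments (chunks assumed under 76 bytes, and with none of the "ord" tag or content-type fields a real inscription carries); the only point is that banning one shape just moves the same data into another:

```python
OP_FALSE, OP_IF, OP_ENDIF, OP_DROP = 0x00, 0x63, 0x68, 0x75

def op_if_envelope(chunks: list[bytes]) -> bytes:
    """OP_FALSE OP_IF <push> ... OP_ENDIF: data sits in a branch that never executes."""
    body = b"".join(bytes([len(c)]) + c for c in chunks)
    return bytes([OP_FALSE, OP_IF]) + body + bytes([OP_ENDIF])

def op_drop_envelope(chunks: list[bytes]) -> bytes:
    """<push> OP_DROP per chunk: each chunk is pushed onto the stack, then discarded."""
    return b"".join(bytes([len(c)]) + c + bytes([OP_DROP]) for c in chunks)

chunks = [b"chunk-one", b"chunk-two"]
print(op_if_envelope(chunks).hex())   # same payload, two script shapes
print(op_drop_envelope(chunks).hex())
```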
Casey, if he wanted to, could drink a whole bunch of coffee and do an update to ORD on a weekly basis. He would be able to trivially outpace the update rate that Core ever could, assuming Core was even willing to give Luke Dashjr's idea a try. We would be going down the path of this cat-and-mouse game, and Casey would win. Now, it's a little bit demotivating to think about, because I think a lot of people have good intentions.
They want to approach this from the perspective of: somebody should do something. We shouldn't just sit idly by; we should try and take some action, right? But if the action that you're taking is committing yourself to wasting a bunch of time and effort for a very, very temporary fix that will be undone by the next weekly update to ORD, it does kind of make you wonder if this is a bikeshedding moment, right? Where instead of shipping features that could actually benefit Bitcoin users, like CTV, or, you know,
the Great Script Restoration, where we could implement things like BLS, it does really make you wonder if it's just sucking all of the oxygen out of the room, and everybody's emphasis and focus is on this when they could be focusing on things that they can change, right? At the end of the day, with any permissionless network, you have participants able to jump on and start building without getting anybody's permission, right? As the name would suggest.
You don’t have to go and apply for a permit to build on top of Bitcoin, which means that if somebody finds a way to do data or arbitrary data embedding, they’re going to do it, whether you like it or not. If you then go and try to ban them and there’s enough sufficient economic demand that they want to continue doing what it is that they’re going to do, they’re going to find a workaround. When you ban that workaround, it’s just going to continue the cycle. And quite frankly, there are some ways of embedding data arbitrarily that
are not defeatable, right? Like Bitmex research talked about embedding data in such a way that you use encrypted data with a very weak private key and then you can grind the bytes to the private key until you find what the hidden data is there, right? There’s not enough time for every single node on the network that’s validating these transactions as they’re witnessing them in the mempool to be able to grind those private keys. Maybe it takes like a week’s time. But if the spammer is dedicated enough, they don’t even need to rely on other spammers who want to view the data. They could just release the private key as well.
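A toy sketch of that weak-key idea follows. The XOR keystream stands in for real encryption (it is not BitMEX Research's actual construction, nor GPG), and the two-digit key is deliberately tiny so the grind is visible:

```python
import hashlib

def xor_with_keystream(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR with a SHA256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

MAGIC = b"IMG:"  # recognizable prefix so a correct guess is detectable
ciphertext = xor_with_keystream(MAGIC + b"dog-picture-bytes", b"42")

# Anyone (or the embedder, out of band) can grind the tiny keyspace offline;
# no relay-time policy check can see inside the ciphertext.
for i in range(100):
    guess = f"{i:02d}".encode()
    if xor_with_keystream(ciphertext, guess).startswith(MAGIC):
        print("key recovered:", guess.decode())
        break
```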
Stephan Livera (1:53:05)
into the private key, right?
Kevin Cai (1:53:28)
out of band, right? If I embed a piece of data today that's a picture of my dog onto the Bitcoin blockchain, and I encrypt it using GPG and a symmetric key, I could just wait a week. And if nobody's figured out the key and posted the image saying, wow, look at this, there's a dog picture in this transaction, I'll just release the key. Maybe I'll even make a website that has an off-chain stored copy of every single decryption key for the data embeds that I've made. Congratulations, you've now made a sealed version of an inscription. And in fact,
even just the very nature of how inscriptions work illustrates this point, right? To embed an inscription onto Bitcoin, there are two stages. In the first stage, you commit to a witness script, right? So at that point, all you see on chain is just a commitment to a hash. The second stage is the reveal. So even if you had some means of stopping all of the reveal stages, you'd also have to go after the commit stages, which makes it a little bit more difficult, right? Because you're not able to introspect into what's actually going on in the script.
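A rough sketch of those two stages, with a bare hash standing in for the taproot commitment (the real construction tweaks a public key rather than hashing the script directly):

```python
import hashlib

# Stage 1 (commit): the chain sees only a commitment; the envelope script
# is not yet visible to any relaying or validating node.
envelope_script = bytes([0x00, 0x63, 11]) + b"hello world" + bytes([0x68])
commitment = hashlib.sha256(envelope_script).hexdigest()
print("visible at commit time:", commitment)

# Stage 2 (reveal): spending the output publishes envelope_script itself,
# which is the first moment a node could actually inspect the payload.
print("visible at reveal time:", envelope_script.hex())
```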
It’s just a never ending race to the bottom and it introduces potentially a lot of complexity for not a lot of gain. The other example of this same sort of, you know, ideology of adding tons of complexity for not a lot of benefit was this hackathon project that was a PR to knots that was a Lua based rule ⁓ engine. And the intent here was to allow any person to create a filter themselves, similar to how you might create a block rule for uBlock origin.
to block ads on a webpage. And then they could share these filter rules with each other. They could upload them maybe to a central GitHub repository and you could have some kind of auto update process. But I think that this is also just a really interesting approach because like auto updates have always been extremely antithetical to the ethos of Bitcoin, right? Like I watched your podcast with Gloria Zowell from ⁓ the core maintaining team. And one of the things that she talks about is that
The role of being a core maintainer is just that. It’s closer to being janitorial in nature. There is no auto updates being pushed out, right? The HP printer on my desk, every time I fucking boot this thing up, checks with the HP servers, phones back home, and sees if there’s a firmware update. What do I know about what’s in that firmware update? Absolutely nothing. I’m blindly downloading a new firmware update from the manufacturer of this device, and I hate it, right? But…
In a sense, this is very much the fiat status quo for many things out there, whether it’s a Tesla that you’ve bought that can auto update its own firmware, whether it’s a computer and it’s running Windows and Microsoft has pushed an update and it auto updates and restarts it. We’ve gotten so used to this concept of remotely pushed updates that we haven’t stopped to think about how beautiful it is that this is not the case for Bitcoin. It might be a little bit frustrating when
There’s a new CVE or security vulnerability that comes out for Bitcoin and people aren’t upgrading as quickly as an auto update would allow us to do it. But it gives you full control. You are always in the driver’s seat. And by taking that away from you and relying on a third party, the logical conclusion of this is that Luke would just set up some sort of auto update server and then Nod’s would on every boot check to see if your filter definitions are up to date, sort of like McAfee antivirus.
and download the latest set of filters. And what happens if those filters are overly broad, right? know, famously, like Luke had added a filter to this DGEN protocol that uses an unlock time of 21. What if you have a legitimate monetary transaction that has an unlock time of 21 and it gets filtered out, right? Like there’s also like the question of, and you know, I think there’s some responsibility that needs to be shared here, but there’s also the question of like when the CoinJoin protocol for Whirlpool,
Stephan Livera (1:57:15)
Mm-hmm.
Kevin Cai (1:57:15)
had
an OP_RETURN with more bytes than Luke would like. Luke asked them to change the spec, which I think is fair, right? You know, he asked them to approach it and make it more reasonable, to make it smaller so that it would fit. And when they explained to him that there were certain technical reasons why they didn't wanna do that (you know, they had a revenue-sharing agreement at one point, and they wanted to be able to store not only an identifier for themselves but also a partner's identifier, and then they maybe had, like, 20 bytes of padding that they could have gotten rid of),
their argument was that even if they slimmed down this protocol considerably, it would still be over this arbitrary limit that Luke had set. And again, this clashes with the ethos of it being permissionless. There's no particular reason why Samourai's Whirlpool service should be the one that has to move in this game of chicken. They have just as much of a claim to using Bitcoin as any other user on the network. And the fact that they had built this slightly large OP_RETURN into their protocol didn't mean that
they deserved to have their protocol get broken. For this reason, I think that ultimately filters are a nice way of expressing your node's intent. I think it is a good way for you to contain and constrain how your individual node is going to operate. Once you start getting into "yes, if everyone would run this, then we could shape behavior," that's when it starts to get a little bit delusional, because, you know, I just look at the compact block reconstruction rates, right? People are running
very different mempool policies from one another, because people have different priorities, right? One person who might want to set the data carrier size to zero because they hate any kind of arbitrary data, even if that's too broad a stroke and hurts monetary transactions, is going to have very different incentives from somebody who's set it to 100,000, or who maybe just didn't know that it was raised to 100,000 and is letting the defaults run. You know, the consequence here is that if you're trying to use a filter as a means of shaping
how behavior works, you're trying to make it do what consensus is there to do, right? As I stated at the very beginning of the podcast, a filter or a mempool policy is just a gentle nudge. It's a little bit of a reminder from the nodes that, hey, if you're doing things this way and you're seeing a higher failure rate than you expected, maybe you should make a slight adjustment and make it easier for yourself, and in the same process make it easier for all of the nodes of the network
to carry your transaction to a miner. It is not a hard-stop, hard-line policy that you're enforcing, where you can unilaterally impose your will on the network, unless you happen to Sybil attack at the same time, right? And this is why the US government can't just spin up a hundred thousand nodes blocking out any non-OFAC-compliant transactions and turn Bitcoin into a fully permissioned chain. It doesn't suddenly change when the subject of filtering is
OFAC transactions versus spam, right? The censorship-resistant nature of Bitcoin is going to be agnostic to the contents of the transaction.
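As a toy model of the failure mode described above, using the nLockTime-of-21 rule mentioned earlier (the transaction shape here is invented for illustration, not Knots' actual data structure):

```python
def passes_filter(tx: dict) -> bool:
    """Hypothetical Knots-style rule: drop anything with nLockTime == 21."""
    return tx["locktime"] != 21

token_mint = {"locktime": 21, "note": "degen-protocol marker"}
payment    = {"locktime": 21, "note": "ordinary time-locked payment"}

# The rule only sees the field, not the intent: both are rejected alike.
print(passes_filter(token_mint), passes_filter(payment))  # False False
```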
Stephan Livera (2:00:22)
I see. ⁓ Now we’ve already been going for a while, for about two hours now, but I guess if we could just quickly cover your thoughts on the… Now most of our discussion so far has been on the policy side of things. Obviously there’s the consensus side of the house. Now recently there has been the ⁓ Dathan-Ohm proposed concept, quote unquote, reduced data, temporary soft fork. ⁓ I guess…
people were incorrectly throwing around the title Bit444 and that kind of caught on as a bit of a meme but I think technically they’re trying to say it’s RDTS, reduced data temporary soft fork. Can you just give us a kind of a quick overview on your thoughts on this? As I understand it proposes I think six or seven consensus rule changes, one of which is to lower the operative turn size down to think 83 bytes at consensus level, not a policy change and like six or seven, I think six other changes. Can you just give us your quick thoughts on this?
Do you agree, disagree? Where are you at?
Kevin Cai (2:01:21)
Yeah,
so the RDTS, as put forward by Dathan Ohm, has become a little bit more palatable with time. When it first came out, it had two methods of deployment: a flag-day type deployment, which we're all used to, and a reactive deployment, wherein if CSAM was embedded on chain and observable, there would essentially be a pre-coordinated 51% attack to reorg it out. Now, Dathan, after having spoken to a bunch of people, realized that the reactive deployment was deeply unpopular
and removed that, along with some language around the legal liabilities of not running the soft fork, which was interpreted by many as a threat to basically run the soft fork or else. Now, approaching it from a technical standpoint, I think one of the things that stood out to me the most was actually based on my own usage of Bitcoin. So Nunchuk Wallet, I'm not sure if you've ever used them, rolled out an update not too long ago that allows you to set an arbitrary miniscript
Stephan Livera (2:02:12)
have you.
Kevin Cai (2:02:17)
you know, a spending-condition string that allows you to say, basically: all right, I love child A the most, so they'll be able to spend the funds soonest; I love child B the second most, so they'll be able to spend a little bit longer from now; and child C the furthest out, right? In terms of timelocks. Now, products like this have existed on the market, namely Liana Wallet. That was one of the ones that I used back in the day to explain to my dad, actually,
why Bitcoin was more than just peer-to-peer payments. It was actually being able to enforce contracts on chain, and you could hypothetically do away with your will. You could have your entire inheritance setup done on chain and enforced by Bitcoin's consensus rules. Now, the interesting thing here is that Liana Wallet, in their very conservative approach to implementing these miniscript-type spending conditions within their wallet, is actually not vulnerable to something that my own
setup is. So Liana Wallet produces a miniscript construction which doesn't end up with an OP_IF in a tapleaf. However, my solution does. And what it comes down to is basically: if a tapleaf's miniscript uses only straight-line combinators, the resulting tapscript doesn't have an OP_IF. But if you have a branching-type miniscript fragment, then what the miniscript compiler will do is realize, okay,
let's say you have, like, four pubkeys that can spend these funds. What you could do, yes, and this is what Luke believes you should do, is create spending conditions that each live in their own tapleaf. So you'd have one tapleaf here, another tapleaf here, another tapleaf here, and another tapleaf here. The problem with that is that by separating them out into different tapleaves, instead of having one tapleaf with an OP_IF, the depth goes from zero to one. So it actually ends up costing you more
to execute, in the case of a particular pubkey being used as a spending condition. So the miniscript compiler is quite smart, and if it sees that your spending conditions align with it being cheaper, it will cost-optimize for you and actually pack the conditions into one tapleaf. Now, Dathan Ohm's RDTS does include, to his credit, an exclusion for any existing UTXOs on chain that would be used as an input. However,
this is where it starts to get into a little bit scarier territory. Many people, for one reason or another, don't want to commit funds to their inheritance setup right away. They might create a pre-signed transaction instead. And then on the activation, or a particular sort of event that has occurred, let's say for me that that is my will, my lawyer would have a hex representation, or maybe it's a QR code, I don't know. It could be either one of those.
And upon my death, he would be instructed to scan that QR code or take that hex string and paste it into mempool.space, thus broadcasting the funding transaction, moving the funds from my cold wallet into this set of conditions. At that point, the timelocks would start counting down. So maybe my wife has first dibs: she can spend the funds after a hundred blocks. But if my wife is dead, I want my first child to be able to spend the funds, locked behind 200 blocks in that scenario.
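A minimal sketch of that kind of inheritance policy, in miniscript policy notation. The key names are placeholders, and compiling it requires a real miniscript library such as rust-miniscript, which is not shown here:

```python
# Placeholder keys; a real descriptor uses xpubs or raw public keys.
WIFE, CHILD = "key_wife", "key_child"

# "Wife can spend after 100 blocks; failing that, child after 200."
POLICY = f"or(and(pk({WIFE}),older(100)),and(pk({CHILD}),older(200)))"
print(POLICY)

# A compiler may legitimately merge both branches into ONE tapleaf using
# OP_IF when that is cheaper than one leaf per branch (a deeper tree
# means a larger control block at spend time). That compiler-chosen
# OP_IF construction is what the RDTS rules would reject in a newly
# broadcast spend.
```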
I would not enjoy the exemption, because this would be a pre-signed transaction. The UTXO would not have existed on chain prior to the soft fork's activation height. So in that situation, this would represent a freezing of my funds. When my lawyer goes to broadcast that transaction, assuming the RDTS is activated, he would simply get an error. He would have to wait one year, until the RDTS potentially doesn't get renewed, because it is a temporary soft fork,
to broadcast that transaction and put the state on chain, and then my wife would have to wait out the 100-block timelock anyway. So suddenly I've gone from "my wife waits one day to be able to spend these funds" to "my wife has to wait one year and one day to spend these funds." This is obviously confiscatory, right? If my wife requires these funds in the meantime, and let's say that, because I'm such a big Bitcoiner, we have very, very few funds in our fiat bank accounts, she might be out on the street and starving
because of the RDTS. Because of this crusade against spam, it's actually worsened the outcomes for monetary use of Bitcoin considerably. And pre-signed transactions are not weird. They are not uncommon. They're used by many types of people. Another example of a transaction that would be affected is one that violates the rule for how deep the tapscript tree can go. If you're doing a 6-of-11 multisig and you're doing it in the cheapest and most efficient way possible,
you won't be able to spend funds, because the depth of the tree would exceed the limitation that's outlined. So the RDTS has a huge amount of confiscatory risk. And how you define confiscation, I think, could be either more liberal or more conservative. But I would say that at the very least, even if we agree to disagree on whether it's confiscatory, it does freeze funds. And this is a significant contributor to why people don't like the RDTS.
But personally, I don't like the RDTS because it breaks my pre-signed transaction flow, right? Like, I am just another user who's using Bitcoin in a way that's not weird, in a way that's prescribed by the miniscript compiler, right? I didn't go out of my way to write a weird script. I didn't come up with that tapleaf script myself. I used miniscript, this higher-level abstraction layer that is provided to me by Nunchuk, which is itself a higher-level abstraction layer. And my crime was not knowing that Luke doesn't like the way I construct my script. Are you serious?
Right. So I think that these two things in and of themselves are a good enough reason to not go through with the soft fork. But I also want to talk about the chain split risk. As we know, we've been getting a lot of nonstandard transactions mined. It happens pretty much every other block, if not every block. So the moment that the RDTS goes into activation and a block comes in with content that BIP-444, or
the RDTS, considers to be invalid, you would get a chain split, where all of the enforcing nodes on the user-activated soft fork would continue extending the chain that is compliant, and all of the legacy nodes that haven't updated would, given that I don't think a majority of the hash power will switch, probably continue building on the legacy chain, the original chain. In this situation, if it were to continue for a long enough period of time, we could see the formation of a new coin entirely, right?
But a lot of people are confused, because they see that it's a soft fork, and to them a soft fork is backwards compatible, and therefore a chain split is not on the table. But the difference here is that, traditionally, in soft forks, what we did was take an opcode that always succeeded, a no-op, and repurpose it, adding additional validation on top of it. We didn't say that existing transactions, valid under the existing rules, would be made invalid. We certainly didn't try to take something that was in policy and turn it into consensus, right?
And so if you're a miner and your modus operandi is to update Core once every couple of months, you might not even be super aware that the RDTS is going into activation, right? So if there are enough miners like this, I call them asleep-at-the-wheel miners, it's very possible that the user-activated soft fork will not gain enough hash rate to really make a difference. And then, you know, either
the miners that want to support the soft fork will continue mining on that chain, or they will just give up and go back to mining on the original chain, after having forfeited all of the block rewards from mining on the alternative chain, right? I myself will be running a user-rejected soft fork. I'll be running a copy of the user-activated soft fork client to check what the tip is, and even if that tip has a higher weight of proof of work, I will call invalidateblock
on my Libre Relay and Core nodes to reject it, and continue mining and pointing my hash power at the original chain, because I don't want to be in a situation where, let's say, I'm extremely lucky and I solo mine a block, but it turns out to be on the RDTS chain, right? I don't want to be in a situation where I have to clean up a potential mess. And my conviction is so strong that even if the Knots crowd manages to convince enough of the hash power to switch over and mine on the user-activated soft fork chain, I will
refuse to do so, and I'll continue invalidating those blocks on my main block-template-construction nodes. I think it'll be really interesting to see how this plays out. There are some parallels to the 2017 blocksize wars.
Stephan Livera (2:10:51)
Interesting, yeah. And it is a bit of an evolving space, because, I mean, just a few days ago, I think it was initially intended to be February 1, and, just like a UASF, I think, just a straight flag day, right? And at that, yeah.
Kevin Cai (2:11:07)
I think there were even some rumors of trying to do, like,
a miner-activated soft fork to get miners to basically flag their support, and then maybe a user-activated soft fork as the backup option for activation if that didn't go so well. I think so. And again, there are a lot of parallels with the 2017 blocksize wars here, right? Many of the viewers may remember that when BIP 148 first came out, miners didn't necessarily like the idea a whole lot. And then
Stephan Livera (2:11:18)
Right, I think that might be what they’ve gone to next. I think that, yeah, as opposed to what it was before, yeah.
Kevin Cai (2:11:34)
eventually BIP 91 was put forward, which basically allowed miners to decide amongst themselves, by signaling with flags, whether or not they were going to support BIP 148. And if they did, then BIP 148 would be enforced from a particular block height on. Actually, I think it might have been from a particular date going forward, and later we switched to using block heights instead of dates. But nevertheless, right? I think there are a lot of parallels here in terms of the approach that's being taken and the types of conversations that people are having regarding soft fork risks,
like chain splits. But I think the greatest difference between this and the 2017 blocksize war is that the 2017 blocksize war exposed a degree of dishonesty and also an existential threat to Bitcoin, which at the time was the ASICBoost technology. And for those who are unaware, ASICBoost was a particular technique that you could use to get an approximately 20% efficiency improvement in ASIC miners
by taking advantage of some entropy generation within the chunks of the block header. Without getting too deep into this: basically, there were two variants. There was the overt kind of ASICBoost, which was observable on chain, because you're twiddling with your nVersion bits in order to make it work. And then there was the covert method, which involved manipulating Bitcoin transactions in order to find a collision in the Merkle root, specifically the last four bytes of it. And this covert one was sort of the point of contention. Like, Gavin
Andresen was defending this and saying, this is just an efficiency improvement, why would we try to go against this? While other Bitcoiners in the space were like, yeah, obviously this is not a good thing. We don't want to give Bitmain this patent where they have a competitive advantage that's not widely available to everybody else. They've implemented this hack, essentially, in their hardware, and it gives them a massive bonus in hashrate efficiency, right? And a lot of people at that time rightly pointed out that
we should fix this, this is a bug. But the problem was that the incentives were not aligned, because the miners were more than happy to hard fork SegWit in, or to use extension blocks, which was a different way of implementing SegWit. And the reason why they wanted to do this was because it was compatible with covert ASICBoost, right? So to many people this was sort of an implicit admission of guilt: well, why are you interested in these solutions
that are specifically architected around the idea of preserving your competitive advantage in a way that's not provable, right? In a way that's covert, where nobody can point at you and say, well, you're using ASICBoost, I can see it right here in the block that you just mined. So that dishonesty created a schism between the miners, who incorrectly believed that hash rate was what determined consensus, and the users and the economic majority, the exchanges.
To be fair, the exchanges were kind of split on this one, because they were selling, like, futures for the coin outcomes. But for simplicity's sake, let's just say the users, the exchanges, the economic actors that were saving in Bitcoin, using Bitcoin, even mining Bitcoin in some cases, right? Some miners were not opposed to BIP 148; most of it was just Bitmain, because Bitmain had this competitive advantage through ASICBoost. It allowed us to explore the concept of
re-evaluating this 95% threshold, which many people had misinterpreted to be a vote by miners. In fact, it was not a vote. Shaolin Fry, a Litecoin developer who also worked in the Bitcoin space in 2017, sent a mailing list post saying: okay, miners are probably going to feel some pressure from people trying to get them to vote one particular way or another in adopting SegWit. Maybe they want to adopt SegWit2x. Miners might not even be in a position to upgrade. They may not have the means to just go in and update their
client software without having done audits, without having done some calculations on what their risk levels are. This isn't something that we want to make miners feel like they have to rush in terms of making a decision. And eventually, the community converged around flag-day activation: we will do an enforcement at a predetermined time in the future. That settles things in a more stable manner, a more predictable manner. Miners would have time to really think things through, and they could flag support for SegWit by consensus rule
and eventually activate it. And this worked really well. You know, flagging started on August 1st, 2017, and we started seeing more and more miners come on board. It was controversial even to small blockers to take the user-activated soft fork approach. It was seen by many as reckless and too risky. They wanted a safer way of doing this activation. And if the BIP 148 user-activated soft fork failed, it could also give big blockers a huge lead, right? So there was a whole lot of stress, a whole lot of,
you know, game theory going on, where everybody was trying to figure out what everybody else was going to do. And even Greg Maxwell, one of the GOATs, opposed it, right? He said, and I quote: the primary flaw in BIP 148 is that by forcing the activation of the existing non-UASF SegWit nodes, it almost guarantees a minor level of disruption; it is sad to hear some people, non-developers on Reddit and such, comment on the forced orphaning of BIP 148 as a virtue, that it's punitive for misbehaving miners.
"The reputation we earn for stability and integrity, for being a system of money that people can count on, will mean everything." I think this has huge parallels, right? With the RDTS, we just discussed that the reputation for stability, for uptime, for integrity... Bitcoin has never gone down. This is something that we really stake our entire identity on as Bitcoiners, right? Like, we laugh at Solana when it goes offline.
We laugh at Ethereum when it goes offline. We do these things because Bitcoin is so incredibly far ahead from a technical perspective, from an engineering-quality perspective, that it's what really sets us apart. And that's why we say things like, you know, "Bitcoin, not crypto." It's what distinguishes us. To compromise these values in some misguided attempt to make spam consensus-invalid, and to bring on board all of the risks that we braved for activating SegWit, something that we needed
desperately, because it was an existential issue, makes you wonder: what are the actual motivations behind this soft fork? Are they really about spam, or are they about centralizing and seizing control of consensus changes, making it easier to push through more soft forks in the future, or perhaps scaring off would-be soft forks, right? Sending the message that, hey, you can try upgrading Bitcoin, but we can always roll it right back, right?
What other purpose would there be in removing upgrade hooks, unless it was a desire to ossify Bitcoin? And then let me just finish up with my comparison to 2017, and then we can dive a little bit into the other topics here. You know, I see the most interesting thing here being that we had these two groups in 2017. Group A just did not care what miners did; they
didn't care what miners thought one way or another. They would just reject blocks that were not SegWit blocks. They would use the nodes that the users and the economic actors were running in order to enforce their will around SegWit. And then we had Group B, which wanted to do a vibe check with miners and confirm that a certain threshold of miners was on board before going through with this massive change that could risk a chain split. The fact that BIP 91 was able to bring together these two groups that were opposed to each other and allow them to
reconcile their differences, I think, is really nice, but unfortunately it won't happen with the RDTS. The RDTS is far more opinionated. We don't have groups of people who ultimately want the same thing but get there through different means. We have people who are strictly opposed, diametrically opposed, in terms of what they would like Bitcoin to be. And so, yeah, ultimately, I think the misinterpretation by a lot of people from 2017 is that
the people have the power, the little nodes defeated the miner giants. This is not what happened, right? This is an incorrect reading of what occurred. I think Paul Sztorc, and I've probably just butchered his name, but Paul Sztorc from Truthcoin puts it best, which is that talk is cheap. Miners signing a sheet of paper that says they're gonna run SegWit2x is not a meaningful signal of how they actually feel.
Miners refusing to ever run a SegWit2x client, on the other hand, is a perfect communication that they are in alignment with the economic majority, because at the end of the day, it is the miners that suffer from mining on a chain that is economically irrelevant. Their block rewards are worthless if they continue to mine on a chain that users refuse to adopt as the canonical chain. And in this case, I don't think there's enough user backing behind the RDTS that miners are going to look at this calculus and say, yes,
it is an existential, revenue-based issue for us to adopt the RDTS, or else we will lose out on the block rewards. I don't think they're going to come to that conclusion, because this is not an existential issue. This is an issue born of a subjective reading of transactions on chain, and a conflation of an annoyance with an existential problem for Bitcoin.
Stephan Livera (2:20:35)
I see. One other, I guess, broader kind of comment, around programmability. That seems to be another kind of schism, in a way, because I think some people, maybe more in the Knots/RDTS camp, see it like: we want to minimize the vectors for spam, and if it means we have to cut down programmability, so be it. Whereas maybe the other side in this particular debate sees it more like: no, actually, programmability is important to Bitcoin.
It's how we encode things, whether that's Lightning, other L2s, inheritance, security, multisig, redundancy, these kinds of things. So do you see it in a similar way, or how do you see that split? Is that a good or a bad direction?
Kevin Cai (2:21:11)
Absolutely, yeah.
I think
programmability is extremely important. To look at this a little bit more deeply: I have looked at this from multiple perspectives. I've looked at it from the perspective of being a developer, where I am obviously pro-improvements to Bitcoin. For example, I think zkCoins, which is something that Litecoin has talked about a lot on Twitter, would be a strict upgrade in privacy, but would require embedding zero-knowledge proofs on chain, which is a form of arbitrary data storage.
I think that's very exciting, and I'm very pro-programmability. I've thought about it from the perspective of my dad, who's not technical, who's just using Bitcoin to transact, as a replacement for wire transfers or for using Zelle or Venmo. And I've also thought about it from the perspective of what you were saying with inheritance, where Bitcoin goes beyond just being money and actually becomes a quasi-court, right? Where you can enforce your conditions in a contract,
or a smart contract in this case, like Lightning HTLCs, for example, and let the Bitcoin chain enforce the outcome of contracts. And maybe you could even extend that to things like DLCs, where you have an oracle that determines an outcome, and people can create a prediction market or something like that on chain. I think the programmability of Bitcoin is a non-negotiable aspect of what makes it so interesting, right? If we wanted to create something that just approximated digital gold, what we could do is create
a fork in which we remove all of the opcodes, we move back to pay-to-pubkey, we make it so that the fields currently responsible for the witness can only contain a spending condition, basically just the signature, essentially, and dumb it down in the greatest possible manner, where you can't do multisig, you can't do anything except single-sig spending. And even in this crippled husk of a version of Bitcoin, it would still be possible to do arbitrary data embedding.
Right? You could simply use fake addresses, fake pubkeys. You could send to those, and then continue to shard the individual pieces of a larger file across them. I think there was this BSV guy who actually talked about this in regard to OP_RETURN, but it's also applicable here: a standard OP_RETURN contains enough space that, if you're only embedding pointers, it's enough to accommodate about 298 megabytes' worth of
other OP_RETURNs. So, basically, an indirect form of a table of contents, if you will. You could say: I want to embed a file up to 300 megabytes in size. I split it up across all of these OP_RETURNs, and then I have a master OP_RETURN that points to all of them as a table of contents. And then some sort of parsing program can read that first master OP_RETURN, look up all of the other OP_RETURNs, and reassemble the file. Maybe you could put your favorite movie on chain, right?
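A sketch of that table-of-contents reassembly. fetch_payload is a hypothetical chain lookup, and the 32-byte slices stand in for txid pointers:

```python
from typing import Callable

def reassemble(master_txid: str, fetch_payload: Callable[[str], bytes]) -> bytes:
    """Rebuild a file from a master OP_RETURN full of 32-byte txid pointers.

    fetch_payload(txid) -> bytes is a hypothetical lookup that returns a
    transaction's OP_RETURN payload; each pointed-to transaction carries
    one chunk of the file.
    """
    toc = fetch_payload(master_txid)
    pointers = [toc[i:i + 32] for i in range(0, len(toc), 32)]
    return b"".join(fetch_payload(p.hex()) for p in pointers)
```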
Now, I think the concept that a lot of people struggle to grasp is that arbitrary data embedding is part of the very nature of any communication-based system. If I have a communication channel with you, say a phone line, I can build some sort of data encoding protocol on top of that phone line, even though that phone line is only intended to communicate audio data from me to you, through the microphone going into the landline and coming out the other end through your speaker. But
if I denote that a certain tone is a zero and another tone is a one, I've just created an arbitrary data communication channel. And what difference is there between a communication channel and a Bitcoin transaction? The only difference is that it's recorded and stored, rather than being an ephemeral channel that gets closed at some point in the future. The second reason why this is such a concern is steganography. Going back to what we were talking about with a weak private key that could be
brute-forced by anyone: if I encrypt the data, there's only one possible avenue for you to determine what's inside the encrypted data, which is to brute-force it, or for there to be a vulnerability or flaw in the way that I encrypted that data. Are you then going to spend countless compute hours trying to determine whether anything that looks like it could be encrypted data can be decrypted through brute force? How much compute are you willing to throw at this? A data center? Ten?
Are you going to burn megawatts, gigawatts of energy just trying to determine what's locked behind this transaction, which could be encrypted data? It creates a form of asymmetry that's very difficult for someone who wants to impose their will on a censorship-resistant system. And that's part of what makes Bitcoin so beautiful: even if you try to dumb it down to the very basic atomic level of what it is, you can still do whatever you want on Bitcoin, so long as you're staying within the relatively permissive rules of consensus.
Stephan Livera (2:26:07)
Okay, so I think, essentially, yeah, we've covered a lot of stuff. Obviously this has been a bit of a wide-ranging conversation. We've spoken a bit about the different ins and outs of spam, Libre Relay, as well as the RDTS, and we're kind of finishing with programmability. I think we'll leave it there. But any last... Where can people find you online? Listeners can find you, I think it's
Proof of Cache on X. But any kind of last point you want to... Yeah.
Kevin Cai (2:26:37)
Yes, that’s that’s completely correct. That’s completely correct. So you
can find me on Twitter on x.com slash proof of cash I’m also in the blixed wallet telegram if you’re ever interested in trying out a mobile ⁓ You know lightning wallet that’s easy to use ⁓ My github is DJ Kazak if you want to check out some of the projects that I’ve been working on there I’m typically you know got one or two side projects cooking at any given time and if you ever want to speak to me in person come
to Boston BitDevs meetings and we will have a talk face to face. Sometimes that’s the best way to hash things out. ⁓ I know that the internet doesn’t always carry through tone and you know ultimately if you want to have a good discussion I think nothing beats getting a beer and talking about some bitcoins. So please come find me via whatever means that you’d like and if you’re in Beantown come on down to BitDevs.
Stephan Livera (2:27:28)
Excellent. All right, well, Kevin, thank you for joining me.
Kevin Cai (2:27:30)
Thank you for
having me. It was an absolute pleasure. And I hope to be speaking to you again.