Nick joins me to talk about how he tried to disprove PlanB’s Stock to flow model, but ended up finding out that it holds up. We also talk about cointegration, what that means, and some of Nick’s statistical work. We talk about:

  • Professional background
  • Background to S2F
  • Cointegration and what it means
  • Engle-Granger test
  • Mean Reversion indicator
  • Shitcoins riding Bitcoin’s coattails

Podcast Transcript:

Stephan Livera:

Nick, welcome to the show.

Nick BTConometrics:

Hi Stephan.

Stephan Livera:

So Nick, obviously I’ve been following some of your work. We’ve discussed some of it recently on the show, in one of my earlier episodes with PlanB. We’d love to hear a bit more about you. Obviously it sounds like you’ve got a bit of a statistician background. Can you tell us a little bit about that?

Nick BTConometrics:

Yeah, so my history I guess is, I’ve been in drinking water quality, so I look after sort of the public health aspect of water production. And as a consequence of that I need to do a lot of statistical modeling, and that’s essentially where my statistical background comes from. So it’s not economics or finance or anything like that, which, in a way, might be good I think, to get the perspective of another kind of statistician.

Stephan Livera:

You’ve certainly made some very important and interesting contributions in this whole debate, debacle, whatever you want to call it. Right? Yeah. So tell us a little bit about how you got into Bitcoin. What was it that drew you into Bitcoin?

Nick BTConometrics:

Yeah so look, Bitcoin for me is something that I didn’t really get into very early on. I’m sort of a later entrant into Bitcoin. I came in in the 2017 bull run so I’ve been, I’ve been learning ever since then and really it’s only since 2019 that I’ve become permanently in Bitcoin. Like, I only believe in Bitcoin basically now. And it’s taken probably about a year to get to that point, from when I first entered. And it’s sort of after I learned about Bitcoin, I learned how bad everything else was, if that makes any sense.

Stephan Livera:

So you kind of came in thinking, oh yeah, crypto or this different cryptocurrency, and then after a while you’ve kind of zeroed in on Bitcoin, or you focused in on that.

Nick BTConometrics:

That’s right. Well, I started noticing that Bitcoin was really the only cryptocurrency sort of with any merit, if that makes any sense.

Stephan Livera:

Yeah. Okay. And so what were some of your, I guess, influences coming into it? Like, I presume you’ve read Saifedean or obviously you’ve been working with PlanB. Any other kind of influences on your thoughts?

Nick BTConometrics:

Well, PlanB is by far the biggest influence. Because back in, I think it was March 2019, when he came out with the stock to flow model, basically I thought it was rubbish. I thought, I’m going to prove that it’s rubbish, that it’s just a spurious regression, and I’m going to show everyone that it’s not worth anything. And I tried really hard, I really tried to prove it wrong. I tried all sorts of crazy models to try and invalidate the theory and all that kind of stuff. But at the end of the day it all just kept pointing to it not being wrong. I think by early August I had to say to PlanB, look, I think you’re probably right. I also wrote an article in August 2019, “Falsifying Stock to Flow as a Model of Bitcoin Value”.

Stephan Livera:

Fantastic. And you mentioned the term spurious regression. Can you explain what that is for the listeners?

Nick BTConometrics:

Okay, well, so when we do a regression, we are basically just saying there’s a correlation between two variables. So two things are moving together, or something is happening at the same time as something else. With time series, what tends to happen is, if they’re what we say integrated, so they’re moving up, if they’re both moving up together, there’s a trend in them and that can sort of mislead the regression. Like we see that there’s a correlation between the number of Nicolas Cage films and the number of deaths by suicide or something like that, right? Those are spurious regressions, and it’s completely invalid, there’s nothing to it. Another example is, you know, there’s a correlation between people who eat rice and people with black hair, but there’s nothing to that, right? It’s basically just a spurious association. So the way to get around that, in time series anyway, is to see if the difference between them doesn’t change. And that, I guess, is called cointegration, where we have a stationary difference between two variables. And that’s something that I’d identified early on as a way to try and prove PlanB’s model wrong: to say these two variables are spuriously correlated, so they’re going up, but they’re not going up together, then they won’t be cointegrated. But they were cointegrated.
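For listeners who want to see what a spurious regression looks like in practice, here is a minimal sketch in Python (synthetic data and the statsmodels library, not Nick’s actual code): two independent random walks will often show a high R squared under OLS, yet the residuals of that regression fail a stationarity test, so the two series are not cointegrated.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 1000

# Two independent random walks: integrated of order one, no real relationship.
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

# OLS of y on x will often show a deceptively high R-squared.
ols = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R-squared: {ols.rsquared:.2f}")

# But the residuals are typically not stationary, so the regression is spurious.
adf_stat, p_value, *_ = adfuller(ols.resid)
print(f"ADF statistic on residuals: {adf_stat:.2f} (p = {p_value:.2f})")
# A genuinely cointegrated pair would give a strongly negative ADF statistic here.
```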

Stephan Livera:

So PlanB’s work and some of your work, it’s viewed more in a historical context, not a, not necessarily a predictive context. Right?

Nick BTConometrics:

Yeah. Well, yes, that’s true of anything. We can really only say what’s happened, and we can estimate what will happen based on what has happened. And there is this evidence of the relationship being so strong for so long. Like I just tweeted just now, actually, where I looked at the ordinary least squares model for stock to flow and calculated it for each day, for a period of time from 2010 to now, and took the residuals, that’s the difference between the model value and the price. Then I looked at the residuals of that and they were stationary, which to me indicates that not only is it heavily cointegrated, like these things are moving together, but the cointegration is getting stronger. Like the relationship itself is actually becoming more well-defined with time. I did send out the Engle-Granger statistic a little earlier on, which showed it going down over time as well, which is more evidence of that. So I think what that says to me is that yes, what we have at the moment is true for the past, but I mean, if it’s true for the past, there is a pretty good chance it’s probably true for the future too, I think. That’s probably different to what PlanB thinks. I’m becoming more and more convinced that it’s a valid, non-spurious relationship that basically isn’t changing with time. It’s getting stronger.

Stephan Livera:

Right. And so I guess the first step of it is understanding, okay, there’s an R squared, right? And this is from the ordinary least squares regression. Can you just tell us a little bit about what that means, and what does it mean when R squared is a high value?

Nick BTConometrics:

Sure. So the R squared is the coefficient of determination. Basically it gives you the amount of variance explained by the model. So in this case, when it’s 0.95, then 95% of the variance in the true value is explained by the model. It’s a bit funny in the log space because the logarithms make really wide distances appear close together, and that can actually have an effect on the R squared. So if you de-log the model, right, so you look at the linear model price and the linear price and take the residuals of that, the R squared is much, much less, like 0.5 or 0.6 in the linear space. It’s still explaining a really large proportion of the value, and that’s why you get that really wide range in the model prediction zone, I guess. So that’s a criticism that I think I’ve had for a while, and other people have levelled it too: that being in the log space, this difference is going to get bigger and bigger and bigger as we go on.

Stephan Livera:

Yup. And can you explain what it means to use log log? Say we’re using logarithm of the price and logarithm of the model.

Nick BTConometrics:

Yep. So that’s basically, we’re just taking the log of the price, so the log function, I guess that’s log base e, and the log of the model value. When we create the OLS, I should say, we create an OLS based on the log of price and the log of stock to flow, and this gives us this log-log model, I guess. So we get a log model price versus a log price. And then to put them back into linear space, we just exponentiate, so we, you know, exponentiate the log of the price and the log of the model value and we end up in the linear space again. And once you’re in the linear space, then you can do the normal linear stuff.
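A rough sketch of the log-log fit and the de-logging just described, assuming the reader supplies their own aligned price and stock-to-flow arrays (the function name and variables are placeholders, not BTConometrics code). It also shows the R squared gap between log space and linear space mentioned above.

```python
import numpy as np
import statsmodels.api as sm

def log_log_fit(price, s2f):
    """price and s2f are assumed 1-D numpy arrays of daily BTC price and
    stock-to-flow ratio (placeholder inputs supplied by the reader)."""
    log_price = np.log(price)            # natural log, i.e. log base e
    X = sm.add_constant(np.log(s2f))
    fit = sm.OLS(log_price, X).fit()

    # Model price in log space, then exponentiate back into linear space.
    model_price = np.exp(fit.fittedvalues)

    # R-squared in log space (around 0.95 for S2F, per the discussion) ...
    r2_log = fit.rsquared
    # ... versus R-squared recomputed in linear space (much lower, ~0.5 to 0.6).
    ss_res = np.sum((price - model_price) ** 2)
    ss_tot = np.sum((price - price.mean()) ** 2)
    r2_linear = 1 - ss_res / ss_tot
    return model_price, r2_log, r2_linear
```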

Stephan Livera:

Okay. And can you tell us a little about what is required to assess if something is co-integrated? So there’s a couple of things you talk about in your article. One of them was stationarity, and there were a couple of other concepts. Could you outline some of those for the listeners?

Nick BTConometrics:

Sure. There’s a few steps we need to take in order to assess a model like that. So the first thing I did in the article, getting to the cointegration a little bit later on, was look at the basic diagnostics of the ordinary least squares regression, because these are things that are often overlooked, so that’s obviously the first thing I went to. So we look at linearity, that’s okay. Heteroscedasticity, basically the variability of the predicted outcome versus the estimated outcome, and we end up not being able to reject the model against those diagnostic tests. And the normality in error, which is the shape of the residuals. It should be roughly Gaussian in shape, and it basically is. I mean, it doesn’t pass a formal test for normality, but that sort of doesn’t matter, it just has to be normal enough. Otherwise you get expanded variances in the coefficients and things, which is detailed in the article there. So all of that’s done. And then we look at the integration of the variables. So we see if these variables are stationary or not. We’re checking to see if the first difference of each of these variables is moving in a trend or if it’s around zero. Or the second difference, if it’s second order integration. In this instance they were both first order integrated, and we’re then able to say, okay, if we have two variables that are first order integrated, then we can do a cointegration test on these variables.
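A hedged sketch of that sequence: check that each series is first order integrated, then run the Engle-Granger cointegration test. The series names are placeholders for data the reader supplies.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

def check_cointegration(log_price, log_s2f, alpha=0.05):
    # Step 1: both series should be I(1), that is, non-stationary in levels
    # but stationary after taking the first difference.
    for name, series in [("log price", log_price), ("log s2f", log_s2f)]:
        p_level = adfuller(series)[1]
        p_diff = adfuller(np.diff(series))[1]
        print(f"{name}: p(level) = {p_level:.3f}, p(first diff) = {p_diff:.3f}")

    # Step 2: Engle-Granger cointegration test on the two I(1) series.
    eg_stat, p_value, crit = coint(log_price, log_s2f)
    print(f"Engle-Granger statistic: {eg_stat:.2f}, p-value: {p_value:.3f}")
    return p_value < alpha
```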

Nick BTConometrics:

And that tests whether the difference between these two variables remains stationary with time. So stationarity, as I said before, is the idea that there’s no trend, right? There’s no trend in this difference. It might be higher or lower than the mean, which is zero here, or the same as it, but it’s always going to stay within that range. It’s not going to go up and it’s not going to go down, it’s going to remain in a stationary zone. So if that difference remains stationary, these two variables are said to be cointegrated. In the article I got into a bit more of a complex model called the ECM, which was really just me trying to prove this stock to flow model wrong some more. And again, I kind of got the results I wasn’t expecting: it didn’t say the cointegration was wrong. I even adjusted for the time trend, so there’s a trend term in that initial model, and it still said the stock to flow variable was an important cointegrated predictor of the market cap. There’s a bunch of other stability checks that we do there that I probably don’t need to explain too much. But the essence of it is that the ECM was stable, so the error correction model for that particular set of variables was stable, which meant that we could use it going forward. One thing I think it’s important to note, though, is that the stock to flow variable isn’t random. Like, it’s random to an extent, it’s random on a daily basis, but it’s not truly random, which means that we kind of know where the model’s going to go.
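For readers curious what an error correction model looks like in code, here is a generic two-step Engle-Granger ECM sketch, not necessarily the exact specification from Nick’s article: the short-run change in log price is regressed on the change in log stock to flow plus the lagged residual from the long-run regression.

```python
import pandas as pd
import statsmodels.api as sm

def fit_ecm(log_price: pd.Series, log_s2f: pd.Series):
    # Long-run (cointegrating) regression: log price on log stock-to-flow.
    long_run = sm.OLS(log_price, sm.add_constant(log_s2f)).fit()
    resid = long_run.resid

    # Short-run ECM: today's change explained by today's change in S2F
    # plus yesterday's deviation from the long-run relationship.
    df = pd.DataFrame({
        "d_price": log_price.diff(),
        "d_s2f": log_s2f.diff(),
        "ect": resid.shift(1),      # error correction term (lagged residual)
    }).dropna()

    ecm = sm.OLS(df["d_price"], sm.add_constant(df[["d_s2f", "ect"]])).fit()
    # A significantly negative coefficient on "ect" is the "rubber band":
    # deviations from the long-run model tend to get pulled back.
    return ecm
```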

Stephan Livera:

And can you tell us a little bit about, I’m not sure the correct pronunciation, Akaike information criteria?

Nick BTConometrics:

Yeah, sure. So that’s basically a way of saying how much information the model needs to be accurate. So if a parsimonious model is presented versus a very complex model, so one that’s got two variables versus one that’s got seven, and they give very similar R squareds kind of thing, the parsimonious one will give a better AIC compared to the complex one, just because it uses less information. So it’s kind of adjusting for, you know, people throwing millions of variables at something to try and get some relationship. It’s trying to adjust for that concept, I guess. It’s sort of qualifying the amount of effort going into the model.
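A small illustration of that penalty on synthetic data: a model padded with pure-noise variables fits about as well but usually scores a worse (higher) AIC than the parsimonious one.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)

# Parsimonious model: one real predictor.
simple = sm.OLS(y, sm.add_constant(x1)).fit()

# Padded model: same predictor plus five pure-noise variables.
noise = rng.normal(size=(n, 5))
padded = sm.OLS(y, sm.add_constant(np.column_stack([x1, noise]))).fit()

# Similar R-squared, but the simpler model typically wins on AIC (lower is better).
print(f"simple: R2={simple.rsquared:.3f}, AIC={simple.aic:.1f}")
print(f"padded: R2={padded.rsquared:.3f}, AIC={padded.aic:.1f}")
```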

Stephan Livera:

And so in your experience as a statistician using a model with this number of variables, is that relatively low compared to what you might’ve worked with?

Nick BTConometrics:

And I guess that’s the other important thing: it is sort of model specific. So if you’re comparing very similar things, like Bitcoin price with other single variable Bitcoin price models, then the AIC is appropriate. But you can’t really compare the AIC of a one variable model to the AIC of some other one variable model if it’s not relevant, like if it’s not the same thing, because the variances could be higher. It’s very domain specific.

Stephan Livera:

I see. Yeah. Okay. And so can you tell us a little bit about your website now, cause you’ve got a bunch of indicators on there, on your website, BTConometrics.com. Do you want to just start with some of these, like the residual likelihood indicator? What’s that?

Nick BTConometrics:

Sure. So that’s basically a non-parametric way of estimating the probability of a residual. So as I said before, the residual is the difference between the model price and the actual price. And given we know the residuals are stationary from the cointegration analysis, we can then use these likelihood techniques to estimate probabilities, and that sort of gives you an idea of when things are going to turn. So the lower that likelihood is, the more likely something’s going to happen next to correct for it. So if it’s a really low likelihood and the residuals are high compared to the model, sorry, so that the model is high compared to the price, then the likelihood is high that the next price will be closer to the model. So it’s a way to try and capture the tops and bottoms, but in a way that gives you a quantified amount of information. People say, wait, how do you pick the tops? How do you pick the bottoms? This way it gives you a number that you can use to say this is sort of 5% likely to be the top, or, you know, less than 5% likely to not be the top. And that way you can use that to say, I’m going to put this much into that particular bet, I guess.
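One way a non-parametric residual likelihood could be computed, sketched with a kernel density estimate over the historical residuals. This illustrates the idea rather than reproducing the exact method behind the website’s indicator.

```python
import numpy as np
from scipy.stats import gaussian_kde

def residual_likelihood(residuals):
    """Probability of seeing a residual at least as extreme as the latest one,
    estimated non-parametrically from the historical residual distribution.

    Residuals follow the usual OLS convention (actual minus fitted), so a large
    positive residual means the price is well above the model value.
    """
    history, latest = residuals[:-1], residuals[-1]
    kde = gaussian_kde(history)
    if latest >= np.median(history):
        # Upper tail: price stretched above the model.
        return kde.integrate_box_1d(latest, np.inf)
    # Lower tail: price stretched below the model.
    return kde.integrate_box_1d(-np.inf, latest)

# A small likelihood (say, under 5%) suggests the residual is stretched and the
# price is historically more likely to snap back towards the model value.
```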

Stephan Livera:

I guess one way I’ve heard it explained is it’s like a rubber band effect, right? The more it kind of stretches out, the harder it’s now pulling back. Is that another way to analogize what you’re talking about?

Nick BTConometrics:

Sure, that’s the essence of the cointegration. So if this kind of cointegrated relationship remains there, which it seems to, and remains strong, then there will be this pullback. I guess it’s a way to capture the strength of the rubber band. So whether it’s going to snap back really fast or whether it’s just got a little bit of tension on it, the residual likelihood kind of gives you that.

Stephan Livera:

I see. Yeah. And I guess that would work both ways, right? When we are under the model predicted price and when we are well over the model predicted price, it’s measuring how much it’s going to kind of rubber band back to the model.

Nick BTConometrics:

That’s right. That’s right. Yep. I mean, nothing’s 100% certain in this, but it will give you what’s happened previously. That’s what likelihood means. It means that previously this residual had a really low chance of happening, and the next thing that happened was, you know, it snapped back really fast, and it sort of quantifies all that information from the past. The way that this breaks is if the cointegration breaks down, then all of this doesn’t really matter. But I don’t see it breaking down. I see it getting stronger.

Stephan Livera:

Bullish as! How about the mean reversion indicator? What’s that?

Nick BTConometrics:

That’s very much the same thing, just a different way of looking at it really. So that’s just the residuals of the model, right? The difference between the model and the price, on a log scale just so we keep it in that domain. They will centre around zero, and as it gets towards one side it’ll have a bit of a stronger pullback, and if it gets towards the other side, it pulls back the other way. It’s much the same sort of idea as the residual likelihood, but it gives you a bit more of a direction, I guess. And you can then add the quantiles onto the mean reversion indicator. So you can say that, you know, previously the 95th percentile was here on this distribution of residuals, so we can expect that to continue if the distribution remains stationary. And if it’s at the 95th percentile, it’s pretty unlikely to go much above that for much longer, so we can say that that’s probably going to be a top, that kind of thing.
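The quantile-band version of the same idea, as a quick sketch (the log price and log model series are assumed reader-supplied inputs, not the site’s exact calculation):

```python
import numpy as np

def mean_reversion_bands(log_price, log_model, lower=5, upper=95):
    # Residuals on the log scale, centred around zero if the model holds.
    resid = np.asarray(log_price) - np.asarray(log_model)

    # Historical quantile bands: a residual near the 95th percentile has rarely
    # stretched further in the past, hinting at a local top (and vice versa).
    lo_band, hi_band = np.percentile(resid, [lower, upper])
    return resid[-1], lo_band, hi_band
```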

Stephan Livera:

What’s your view around this whole idea of, like multiples of the model price? So people are sort of trying to speculate about, you know, will it be two times the model price, will it be three times the model price? Do you have any views on that?

Nick BTConometrics:

Yeah, so, three times, it comes down to, again, I don’t put much weight on the mean price other than that it’s the middle of the distribution. So it could go either way. It could go up to the top, it could go down to the bottom of that band of residuals, whether it’s two or three times the stock to flow mean price. I mean, if it’s above the 95th percentile, then it’s going to be a low chance of happening, and probably a good time to offload some. But if it’s below that, then, you know, the opposite is true. The stock to flow multiple, is that what you mentioned?

Stephan Livera:

Yeah,

Nick BTConometrics:

I mean, it’s a good quick way for people to understand the rubber banding effect, and it’s good to quantify that. I think with the residual likelihoods I’m giving you a probability of that actual price.

Stephan Livera:

Do you have a view around the different estimates? So for example, with this coming halving, there’s the model predicted price of $100,000 and then there’s another one around $55,000. What’s your view on which model you prefer to use, in terms of what you’re doing your work on?

Nick BTConometrics:

Yeah, it’s a bit murky, isn’t it? It seems like, “ahh, it changes all the time or whatever”, but it’s not really, it’s just that when you use different parts of the data, you get slightly different coefficients. And in the log space that’s not very different, but when you exponentiate it, it’s like $50,000. So that’s why we come back to: if you’re using the log space, it’s really all about percentage change, but if you’re using linear space, then you can see the actual price differences. And like I said before, if you use the stock to flow model, you get an R squared in the linear space close to that 0.5 rather than 0.95. But I think for the long run relationship, which is what we talk about for the cointegration stuff, the log space is the best space, because you capture all of the previous information, all the detail in the lower prices. In the linear space, all you see is this big exponential curve, which doesn’t really mean anything to anyone until you look at it in the log space. So the different price estimates for the future stock to flow value, I don’t put a lot of effort into. On the website, for example, I just calculate the OLS every day and it updates the model every day, because, as I’ve just shown, the residuals are quite stationary even if you do that for all of the history of Bitcoin. So I think the focus should be on the difference of the price to the model now, rather than on the future potential outcomes of the model. Does that make some sense? Yeah.

Stephan Livera:

And also, on some of the different graphing and charting, some charts are shown on a daily basis and some are shown on, like, a yearly basis. So can you articulate your thoughts on that, on ways to think about what it means when you’re looking at the daily versus, say, the 365 day?

Nick BTConometrics:

Sure, sure. I think that comes down to how you interpret the scarcity factor itself, kind of thing. So I mean, the stock to flow ratio is calculated on a yearly basis, so you could argue that it should use, you know, yearly data on that basis alone. But people probably come to know about scarcity faster than over a year. So then you go, perhaps we should update that more often than that, maybe monthly. And personally, I like the 14 days, because it smooths it out enough and it sort of lines up with the difficulty adjustments, which I think perhaps tend to that scarcity as well, a little bit.
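A sketch of how the smoothing window changes the stock-to-flow series, annualizing the flow from a 365-day, roughly monthly, or 14-day window of daily issuance. The issuance series is an assumed input, and this is an illustration rather than the site’s actual calculation.

```python
import pandas as pd

def stock_to_flow(daily_issuance: pd.Series, window_days: int = 14) -> pd.Series:
    """Stock-to-flow ratio from a series of daily new coins issued.

    stock = cumulative coins in existence
    flow  = recent issuance annualized from a rolling window
            (365 days, ~30 days, or 14 days as discussed above).
    """
    stock = daily_issuance.cumsum()
    flow_annualized = daily_issuance.rolling(window_days).mean() * 365
    return stock / flow_annualized
```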

Stephan Livera:

The Engle-Granger co-integration test. So it says, determining the two series are co-integrated by modeling the lag of the residuals against the first difference of the residuals.

Nick BTConometrics:

Yup. So, that’s basically it. You said it exactly how we do it. We get the model, find the residuals, find the first difference of the residuals and model it against [inaudible] to predict the difference. And if it’s a stationary residual in that model, then we say it’s co-integrated, and that’s basically less than about minus three. It depends on the number of samples you’re using, a whole bunch of things. But less than minus 3, it’s basically co-integrated, for the Engle-Granger statistic anyway.
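The second Engle-Granger step as described here, sketched directly: regress the first difference of the long-run residuals on their lag and read off the t-statistic, comparing it against the Engle-Granger critical values (around minus three, depending on sample size) rather than ordinary t-tables.

```python
import numpy as np
import statsmodels.api as sm

def engle_granger_stat(log_price, log_s2f):
    # Step 1: long-run OLS and its residuals.
    resid = sm.OLS(log_price, sm.add_constant(log_s2f)).fit().resid

    # Step 2: regress the first difference of the residuals on the lagged
    # residuals (no constant). The t-statistic on the lag is the Engle-Granger
    # statistic; strongly negative values (below roughly minus three, with the
    # exact cut-off depending on sample size) point to cointegration.
    d_resid = np.diff(resid)
    lagged = np.asarray(resid)[:-1]
    step2 = sm.OLS(d_resid, lagged).fit()
    return step2.tvalues[0]
```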

Stephan Livera:

Yeah. And as we speak today, in early March, it is negative 11. So your website here is showing basically negative 11.74, but the idea is that it’s giving you an idea of whether it is breaking down?

Nick BTConometrics:

Yeah. Well, if it goes up, so if the Engle-Granger statistic goes up above minus three, so if it gets to minus two or higher than that, then yeah, the co-integration is broken and these things that I’ve been saying are all untrue.

Stephan Livera:

Right, yeah. But today we’re very much in the green, so for listeners who are concerned, that’s not a concern right now. Okay, great. So let’s also talk about some of the other stuff you were mentioning, which is instantaneous residual likelihood analysis, right? And this is, I think, what you’ve been speaking about recently on Twitter as well. So can you tell us what that is? How should we think about that?

Nick BTConometrics:

Well, again, this is the same sort of deal as the residual likelihood analysis. But instead of calculating the model for today and then going back and calculating what the residuals would’ve been if we’d had today’s model, it calculates the residuals as they were at that point in time. So we’ve calculated the model for each day, and then we’ve got the residuals for that point in time, and then we calculate the likelihood of those residuals and just put them into a series. Which I think gives me more confidence in the model, because we have this idea that we calculate the model now and then go and back test it. But that’s kind of not true, because we’re looking at the model now, which has all this extra data that could be influencing the outcome. So if you go back and calculate the model at that point in time, how you would’ve seen the model if you calculated it in 2012 or whatever, and then find the residual of that day, and then in 2013 calculate the residual of that day, that’s what this kind of does. And it gives me great confidence in the model, to be honest, Stephan. It’s basically showing us that the model has been holding for all this time, which is remarkable really.
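A sketch of the “instantaneous” idea: refit the OLS each day using only the data available up to that day, so each residual comes from the model as it would have looked at the time. This is an illustrative, unoptimized loop, not the site’s actual code.

```python
import numpy as np
import statsmodels.api as sm

def instantaneous_residuals(log_price, log_s2f, min_obs=365):
    """Residual for each day computed from a model fit only on data up to
    that day, rather than from today's model applied backwards."""
    log_price = np.asarray(log_price)
    log_s2f = np.asarray(log_s2f)
    out = np.full(len(log_price), np.nan)

    for t in range(min_obs, len(log_price)):
        fit = sm.OLS(log_price[:t], sm.add_constant(log_s2f[:t])).fit()
        # Predict today's value using only the model as known at the time.
        pred = fit.params[0] + fit.params[1] * log_s2f[t]
        out[t] = log_price[t] - pred
    return out
```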

Stephan Livera:

Right. And so, one other question around the statistics of it. Your website mentions the Kelly fraction. So what’s that?

Nick BTConometrics:

Yeah, this is just something I threw in there recently. It’s the Kelly criterion, and it’s just a way of maximizing your bankroll. It’s probably not exact, so if you’re going to use this yourself, you probably need to go and calculate it yourself. This is just a rough rule of thumb. It’s assuming equal odds and basically says, look, given the history of this likelihood, you probably should put this much of how much you’re willing to lose into this bet. The Kelly criterion can be much more complex than that, and it’s a way to maximize your bankroll over time, essentially.
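The even-odds rule of thumb he mentions, as a tiny sketch (the full Kelly criterion needs your actual payoff odds; this is only the rough version):

```python
def kelly_fraction_even_odds(p_win: float) -> float:
    """Kelly fraction for a bet with even (1:1) odds.

    General Kelly: f* = p - (1 - p) / b, where b is the net odds received.
    With b = 1 this collapses to f* = 2p - 1; never bet if p <= 0.5.
    """
    return max(0.0, 2.0 * p_win - 1.0)

# e.g. if the residual likelihood implied a 60% chance the price snaps back,
# the even-odds Kelly fraction would be 0.2 of the bankroll you can afford to risk.
```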

Stephan Livera:

Okay. So that’s for the really advanced people out there.

Nick BTConometrics:

Absolutely. Yeah. If you don’t know how to use that, don’t use that.

Stephan Livera:

Yeah, I mean for me, I’m just a HODLer. I’m just stacking, I’m not really trying to time tops and bottoms myself, but for people who are interested in trying to understand it from an analytical point of view, a statistical point of view, that’s there. But look, we’ve spoken mostly about quantitative analysis, and from speaking to you, it sounds like that’s really your wheelhouse, right? But what about your views from a qualitative perspective? What’s informing that? And is that informing any of your analysis?

Nick BTConometrics:

I mean, yes, it is. Cause like I said earlier on, I went and focused on Bitcoin, and the reason for that is, you know, it’s the oldest coin, so it’s got the Lindy effect that people talk about. It’s got the biggest network effect, so Metcalfe’s stuff there. It’s got the highest hash rate, so it’s the most secure network. And it’s really quite amazing how well it’s worked for this amount of time, without, you know, too many substantial errors in it. It’s got a really high uptime, 98% or something. Then I had a look at Ethereum just recently, and it seems to cointegrate with the Bitcoin stock to flow value, which is kind of odd, right? Like, why is Ethereum cointegrating with the Bitcoin stock to flow value? Not with the Bitcoin price, with the Bitcoin stock to flow. I haven’t quite fully comprehended why, but I think it is basically people spending these precious satoshis on gambling on this Ethereum thing, and all of the things that Ethereum sort of brings, like the various rubbish ICOs, et cetera.

Stephan Livera:

Right. Well, I’ll tell you what, that might be, perhaps, maybe I’m just confirming my own biases and it’s confirmation bias. But my view is basically there’s a lot of other shitcoins out there and they are basically riding the coattails of Bitcoin. And it sounds to me like what you’re saying kind of supports that thesis.

Nick BTConometrics:

Yeah, that’s exactly right, I think. There are some that do follow the Bitcoin price. Litecoin, for example, does follow the Bitcoin price, and its own stock to flow doesn’t predict it at all. It’s obviously not working on the value of its own network. It really is, you know, riding Bitcoin, and we can predict where it’s going to be based on Bitcoin’s price level essentially. And there’s a few others there that are like that. But some of them really are just tracking the Bitcoin stock to flow and not the price. I’m still kind of in the midst of fully understanding it, but I think that’s basically on track, that it is just riding Bitcoin’s coattails.
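A sketch of how this kind of coattails claim can be checked: test whether an altcoin’s log price cointegrates with its own stock to flow, with Bitcoin’s price, or with Bitcoin’s stock to flow. The series names are placeholders for reader-supplied data, not a reproduction of Nick’s analysis.

```python
from statsmodels.tsa.stattools import coint

def coattails_check(alt_log_price, candidates: dict, alpha=0.05):
    """candidates maps a label (e.g. 'own s2f', 'btc price', 'btc s2f')
    to a log-scaled series aligned with alt_log_price."""
    for label, series in candidates.items():
        stat, p_value, _ = coint(alt_log_price, series)
        verdict = "cointegrated" if p_value < alpha else "not cointegrated"
        print(f"{label}: EG stat {stat:.2f}, p = {p_value:.3f} -> {verdict}")
```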

Stephan Livera:

All right. Well, unfortunately there’s a lot of shitcoiners who aren’t gonna like that, but well we’ll get a lot of angry comments in the threads after this anyway.

Nick BTConometrics:

Ah I’m used to that.

Stephan Livera:

But I think the deeper point is around, you know, causation and correlation, right? Everyone’s always debating “oh, correlation is not causation” and so on. And the other thing that stands out is, can it be that sometimes there are things that kind of work for reasons we don’t really understand, and it just so happens that stock to flow is the one thing that best models that?

Nick BTConometrics:

Yeah, I mean, that is that. So I’ve had this idea that perhaps there’s an X variable, so there’s stock to flow times X, or stock to flow plus X, equals price or whatever, and we just don’t know what that is yet. A mediating effect, mediating variable, confounding variable, whatever. We’re just not quite sure what that is. And that might explain things like when people say, “oh, there’s no demand in the model, how can this model work?” and that kind of thing. Perhaps stock to flow is informing demand somehow.

Stephan Livera:

Yeah. A silly example, I can’t remember exactly where I heard this, but it was something like when the direction of the wind changed, that would change the sex of reptiles. But it just so happened that certain reptiles being born like that was itself being driven by what season you’re in and so on. So it’s like something else is driving it, but it’s still…

Nick BTConometrics:

It’s unrelated but happening at the same time.

Stephan Livera:

Yeah, right.

Nick BTConometrics:

So in that kind of case, I guess, stock to flow would be a proxy, which is something I think we do have with time. But because of the history behind stock to flow, where we know that it works for gold and things like that, I find it difficult to say that it would just be a proxy for value, given that it works for value for other things. If it didn’t, then I’d be more open to saying it’s probably a proxy. But yeah, it does kind of work for various other precious metals, as PlanB has shown.

Stephan Livera:

I see. Yeah. So in your view then, that’s kind of going in the direction of saying it’s not a proxy, it is the driver itself, or it is the causal factor or the main explanatory factor.

Nick BTConometrics:

At least it’s on the path. So we’ve got an arrow of time here, Bitcoin price here, and perhaps there’s feedback loops and stuff, right? But on the path from here to here, stock to flow is in there somewhere. Absolutely, I’ve no doubt, because of that cointegrated relationship. It’s something that I’m quite certain of, that it’s a non-spurious predictor of Bitcoin price. That doesn’t mean that it completely explains Bitcoin price, it explains a good portion of it. And in the linear space, that’s about 50% of it, which is pretty high. Really, like in the public health stuff that I do, to see an R squared above sort of 0.2 is crazy. You know, so 0.5 in a linear space is amazing really. And 0.95 on the log scale is just quite a high value to have.

Nick BTConometrics:

And it gives good confidence in this thing that, you know, from other sources shows the same sort of correlation. Whereas something like time, where it does look like there’s a relationship, but then you do the cointegration analysis and it doesn’t cointegrate, and there’s not really a good explanation for why it would increase with time, other than perhaps just, you know, adoption, that kind of thing. That, by definition, I think is a proxy, and the proxy could incorporate adoption. I mean, stock to flow increases with time as well, so it could incorporate stock to flow. So I think that’s perhaps a better example of where you started off there, with the wind changing and it just being because of the seasons, and, you know, correlated to the change in gecko sex and that kind of thing. That’s what time is. Stock to flow is the season.

Stephan Livera:

Yeah. Interesting stuff hey, so then the other question I’ve got is what are your thoughts on if or when the model breaks down?

Nick BTConometrics:

Yeah, well, it’ll happen pretty slowly actually, unless there’s some really, really drastic things that happen. So when we have the halving, the Engle-Granger statistic will increase, it’ll go up from minus 11. Historically it’s bumped up like a couple of points or something. But if it goes up into the minus three zone, then we’re going to have to look at it a lot closer to make sure the cointegration is still there. And really we probably need about a year of data after the halving to say for sure whether this cointegration is gone, or if it’s not gone and we can still use it. I mean, we can’t say it’s not going to go the year after that, but if it hasn’t gone within the year after the halving then it probably will remain, if that makes some sense.

Nick BTConometrics:

If we see the price sort of drop after the halving and remain at current levels for a year, I would say the model’s basically useless. But if it’s gone on a slow trend up towards the end of the year and we start seeing the residuals come back to nice stationary residuals, then, you know, you can’t really say that the cointegration is broken, so the model would still be useful. So a lot of these people are saying, “ah, when the halving happens in May and the price doesn’t go to 100K, is everyone gonna forget this model?” It’s a bit of a straw man, they’re putting up this argument that isn’t an argument. It’s not really what we’re saying. We’re saying that this model will be useful and probably should be useful in the future, and we can tell how useful it is given the data about a year after the halving, which I think PlanB said is like December 2021, which is fine.

Stephan Livera:

Right, right. And I think it’s like a straw man of the position to say, “oh, the halving’s going to happen, and if it doesn’t go to 100K straight away, it’s dead, it’s gone.” It’s like, no, that’s not right. I think it might be more accurate to say, you know, once the halving happens, that rubber band pulling it up to a hundred thousand is stronger, if you will. But it won’t necessarily hit that straight away. It’ll take time to get there.

Nick BTConometrics:

That’s right. That’s right. And so if we can actually see it start going against the rubber band, like, let’s say it starts declining or whatever, that might be a way to prove it wrong earlier. But I don’t see it happening without about a year of data after the halving.

Stephan Livera:

I see your point. So you’re saying basically, right now as we record this today, the price is what, $8,500 or roughly that area. And let’s say it comes to May 2020, the halving happens on May 9th or May 10th or whatever, and then if the price just stayed at $9,000 for a year, then it would start to kind of indicate that the model is breaking down.

Nick BTConometrics:

Yeah. At that point I’d be like, right, this is bullshit, we can’t use this anymore. Because it’s obviously not rubber banding back up to where it’s supposed to be, which is somewhere between $55K and $100K depending on which parameters you use and how you model the historical data. So I’m happy to be proven wrong, which is what I intended to do to start with.

Stephan Livera:

Yeah, it’s funny. And then that’s kind of where some of these other ideas come out, and PlanB, and I think yourself, you’ve made similar comments, that, you know, on like a probabilistic distribution, it’s going to start rising slowly, and that you might expect, I can’t remember the numbers off the top of my head, but like $13,000 a few months after the halving or things like that.

Nick BTConometrics:

Yeah. Even then it’s getting like, you sort of, how long is a piece of string? But I think there is a definite cut-off point: after about a year of data, you should have seen that real rubber banding effect happen. It might happen slower, it might happen faster than previous halvings, but it really should happen. And if it doesn’t happen, then the model doesn’t work or isn’t useful anymore anyway.

Stephan Livera:

Yeah, right. So it’s kind of like, well, I mean, honestly, we don’t actually know, but it’s just an interesting way to understand it: if the model is something useful and remains something useful, then it kind of has to stay within certain ranges. So I guess, I don’t know if you know this off the top of your head, but what would it take, let’s say, for the model to hold up a year after the halving? Do you know what the lower end of that range would be?

Nick BTConometrics:

Yeah. Oh, I mean, I would hazard a guess around $30,000. But really, even if it makes it to 30,000 and just stayed at 30,000, it would still be wrong, kind of thing.

Stephan Livera:

Right, it might not be enough.

Nick BTConometrics:

Well it still needs to get the upside. It can be down, but it has to be up after that to get the residuals to be stationary. So it might go up to a 100K and then back down to 30K, and that would make the residual stationary. Which would mean the model was still okay. But if it just goes to 30K and stays there, then the model is not really useful.

Stephan Livera:

I see. Yeah. Because it didn’t go up to either the $55K level, or even a bit above that, or the $100K level, or a bit above that.

Nick BTConometrics:

Yeah, that’s right. It’s just gone to the lower end of the expected range. And it really shouldn’t take longer than a year, I think, to get back up over the mean level, over this stock to flow estimate. Which is probably enough time, I guess we’ll find out.

Stephan Livera:

Okay, great. Anything else you wanted to mention? Analysis wise, are there any other things you’re looking at and discussing?

Nick BTConometrics:

At the moment? People have requested me to do a lot of stuff, but I really don’t have much time to play with this stuff. I do enjoy doing it though, so I get into it when I can. Some of the things that I’ve been thinking of are sort of exploiting those relationships with the shitcoins that we have. What we’re seeing is Litecoin riding Bitcoin’s coattails, so maybe we can use that to sort of short Litecoin.

Nick BTConometrics:

Well, same thing with Ethereum. So I’ve been looking into, you know, building cointegrating models with those things, but I don’t really have enough time to do it properly, to get into enough detail to do it accurately. And the other thing I’ve been looking at is adding covariables to the stock to flow model to try and explain the rest of it. Like, so we said, you know, stock to flow plus X equals price. What’s X? Can we find X? And, you know, is it related to stock to flow? Maybe it’s a mediating variable for stock to flow. So I’ve been searching for that for some time, and that’s what I’ve sort of been working on.

Stephan Livera:

Fantastic. So, have you got any other plans in terms of your website? What do you want to show on there, for the listeners, like what statistics and things can they expect to find on your site?

Nick BTConometrics:

At the moment they can find the residual likelihood analysis, the stock to flow mean reversion indicator, and they can download all of that as CSV files if they want to. The other thing I was probably going to do was put in some more back end data availability, like a JSON API or something like that, that people could connect to if they wanted to use it in automatic trading algorithms, that kind of thing. But again, I’m working on lost time here, so.

Stephan Livera:

Yeah, and also your name change, you went from Phraudsta to BTConometrics.

Nick BTConometrics:

Yeah. Well, I liked the name BTConometrics. And that’s something that BurgerAM had mentioned, that people had sort of brought up: this guy’s called Phraudsta, he must be a fraud, you know. But really it was just that I have this imposter syndrome, like I don’t think I’m good enough at anything kind of thing, and from years and years ago when I signed up, it was like, I’m an imposter and not really as good as they say I am at anything. So that’s where that came from. But yeah, I could see that, you know, in Bitcoin, calling yourself Phraudsta is probably not a good match when, you know, the government comes knocking at the door kind of thing.

Stephan Livera:

Well, I think you’ve done a great job. I think it’s pretty cool, the work you’ve done. PlanB definitely speaks very highly of you. So I guess that’s pretty much it for this episode, but make sure you let the listeners know where they can follow you and find your work.

Nick BTConometrics:

Sure. I’m on Twitter at @BTConometrics, and you can go to my website at BTConometrics.com, and there you will find all of this information that we’ve been talking about. I tend to post most things to Twitter first, so you’ll find my updated stuff there.

Stephan Livera:

Fantastic. Well, thank you for joining me, Nick.

Nick BTConometrics:

No worries. Have a nice day.
