I’m not sure what this means. Are you saying that every piece of written material has an agenda behind it as I’ve defined the term?
Not every piece of written material, but I’d bet that almost all lengthy pieces of writing intended to communicate a point to others have an agenda behind them, sure. There’s always a temptation to round the numbers to your advantage, to leave out bits of data that might conflict with your hypothesis, to neglect to mention possible problems with your statistical tests, and so on.
Even ignoring that sort of thing, cognitive biases play an important role. Nisbett presumably had a half-formed opinion on the race and IQ question even before he started researching it in depth. And that would in turn have affected which bits of relevant evidence got stuck in his mind. And that would in turn have hardened his opinion. You get positive feedback loops that push your opinion away from opinions that conflict with it. So even if Nisbett were consciously being as honest as possible, he could still be
emphasizing the facts that support a particular point of view and de-emphasizing the facts which undermine that point of view in order to persuade the reader
just because his mental database of facts is going to overrepresent the first kind of fact and underrepresent the second—and precisely because of that, he is going to be sure that his point of view is obviously correct, and precisely because of that, he is going to be writing to persuade the reader of it—even though, as far as he knows, he is being completely honest!
(Tangent: it’s somehow amusing and fitting that the person we’re using to argue this point is the person whose most cited article is “Telling more than we can know: Verbal reports on mental processes.”)
but I’d bet that almost all lengthy pieces of writing intended to communicate a point to others have an agenda behind them, sure
Ok, and the stronger the agenda, the more you should trust your common sense over claims made by the person with the agenda.
That’s what he said.
Ok, and presumably what he meant was that any warming which took place between 1995 and the present was less than some statistical minimum threshold. I’m not sophisticated enough to calculate such a limit, but that’s what I meant when I said that temperatures have been basically flat for the last 10 years.
Ok, and the stronger the agenda, the more you should trust your common sense over claims made by the person with the agenda.
Cool. I feel more comfortable now that you’ve expressed this in continuous terms. There’s still a catch, though: using your definition of having an agenda, I can’t really tell whether someone has an agenda without also knowing the facts (because ‘having an agenda’ here is being used to mean that someone’s making a slanted presentation of the facts), and if I know the facts already, I have little need for your has-an-agenda heuristic.
Ok, and presumably what he meant was that any warming which took place between 1995 and the present was less than some statistical minimum threshold.
Roughly speaking, I think that’s about right.
I’m not sophisticated enough to calculate such a limit, but that’s what I meant when I said that temperatures have been basically flat for the last 10 years.
I think I understand now. Alrighty...yeah, I would suspect that there’s been no statistically significant warming trend in the last 10 years. I would however avoid using phrases like ‘temperatures have been basically flat for the last 10 years’ to describe this, as I nonetheless believe that if one considers the last 10 years of records in the context of unambiguous past warming, they are consistent with an ongoing, underlying warming trend.
I can’t really tell whether someone has an agenda without also knowing the facts
I would suggest you practice. It also helps to read people who contradict each other. It also helps if you learn some of the facts.
I would however avoid using phrases like ‘temperatures have been basically flat for the last 10 years’ to describe this, as I nonetheless believe that if one considers the last 10 years of records in the context of unambiguous past warming, they are consistent with an ongoing, underlying warming trend.
Lol, I guess that means American housing prices have been going up the last couple years too.
In any event, I think it’s fair to say that temperatures have been basically flat because it contradicts many of the predictions of the warmists.
I would suggest you practice. It also helps to read people who contradict each other. It also helps if you learn some of the facts.
I reckon the first two things only help inasmuch as they help you do the third. Learning the facts is what really matters—and in my experience, once I feel I know enough about an issue to decide who has an agenda (in your sense of the phrase), I typically feel I know enough to make my own judgement of the issue without having to tie my colours to the talking head I like the most.
Lol, I guess that means American housing prices have been going up the last couple years too.
I am not familiar enough with US house prices to be sure, but I suspect that’s a poor analogy to the global warming data.
In any event, I think it’s fair to say that temperatures have been basically flat because it contradicts many of the predictions of the warmists.
Here are two better criteria for judging your statement’s fairness:
is the statement true?
is the statement liable to mislead people?
How is it a poor analogy? The general trend for the last 50 years is upwards, but the trend over the last couple years is flat or downwards.
A quick Google for house price data led me to this graph of US house prices from 1970 up until what looks like last year. It is immediately clear to me that there is far less noise obscuring the changes in trends than in the global temperature data. The recent house price crash looks about an order of magnitude larger than the seasonal(?) fuzz, so it’s easy to distinguish it from the earlier upward trend.
Compare the temperature data. It’s quite clear that there’s relatively much more noise and external forcing, which makes it harder to see a trend in the data. That’s why it’s reasonable to suppose that past upward trends in temperature are continuing, even though the most recent temperatures look flat in some of the data sets; the greater noise hurts your statistical power to detect a trend, which means that you can get the appearance of no trend whether or not the upward trend is continuing.
Hence I see your analogy as a poor one: you’re implying that arguing for an ongoing increase in temperature is as silly as arguing for an ongoing increase in house prices, but that ignores the far greater statistical power to detect a change in trend in the recent house price data.
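To make the power point concrete, here is a minimal R sketch; the numbers are invented for illustration, not fitted to either data set. The same true upward trend is simulated twice, once with small noise (think house prices) and once with large noise (think temperatures), and only the quiet series reliably shows a statistically significant trend.

    # Identical true trend, two noise levels; all values are illustrative.
    set.seed(1)
    years <- 1:10          # a ten-year window
    trend <- 0.02          # assumed true slope per year (arbitrary units)

    p_value <- function(noise_sd) {
      y <- trend * years + rnorm(length(years), sd = noise_sd)
      summary(lm(y ~ years))$coefficients["years", "Pr(>|t|)"]
    }

    # Share of simulated decades where the trend is detected at p < 0.05:
    mean(replicate(2000, p_value(0.02)) < 0.05)  # low noise: nearly always
    mean(replicate(2000, p_value(0.20)) < 0.05)  # high noise: rarely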
And how do I know if the statement is liable to mislead people?
Apply your rough mental model of how other people are likely to interpret the statement, and decide whether it is likely to leave them with a misleading impression of the data.
For example, if I show someone this graph, it’s fair to say that there’s a significant chance they’ll think it shows that US temperature increase per century has no practical significance. But that would of course be a fallacious inference: the fact that other temperature measurements can vary a lot is logically disconnected from the issue of whether the rise in US temperature has real importance. In that sense the graph is liable to mislead.
That’s why it’s reasonable to suppose that past upward trends in temperature are continuing, even though the most recent temperatures look flat in some of the data sets
It may be reasonable to suppose so, but it doesn’t change the fact that temperatures have been basically flat for the last 10 (or 15) years.
In any event, it’s quite possible—even likely—that the upward trend in housing prices is continuing in the same sense you believe that the upward trend in temperatures may be continuing, and that the recent housing bubble is the rough equivalent of an El Niño.
It may be reasonable to suppose so, but it doesn’t change the fact that temperatures have been basically flat for the last 10 (or 15) years.
I don’t believe it is true that ‘temperatures have been basically flat’ for the last 15 years: I see a net gain of 0.1 to 0.2 Kelvin, depending on the data set (HadCRUT3 v. GISTEMP v. UAH v. RSS). And it looks to me like temperatures have only been ‘flat’ for the last 10 years in the sense that a short enough snippet of a noisy time series will always look ‘flat.’
In any event, it’s quite possible—even likely—that the upward trend in housing prices is continuing in the same sense you believe that the upward trend in temperatures may be continuing, and that the recent housing bubble is the rough equivalent of an El Niño.
And the accompanying crash would be a La Niña? I think the house price boom & crash is a little too big to characterize like that.
I don’t believe it is true that ‘temperatures have been basically flat’ for the last 15 years: I see a net gain of 0.1 to 0.2 Kelvin, depending on the data set (HadCRUT3 v. GISTEMP v. UAH v. RSS).
So the standard is “net gain,” and a net gain greater than (or less than) 0.1 Kelvin means not basically flat?
And it looks to me like temperatures have only been ‘flat’ for the last 10 years in the sense that a short enough snippet of a noisy time series will always look ‘flat.’
That may be true, but so what? Characterization of evidence != interpretation of evidence. Agreed?
I think the house price boom & crash is a little too big to characterize like that.
Why not? It’s a short-term detour in a larger overall trend. If you happened to buy a house at the top of the market, there is still an excellent chance that some day the market price will exceed your purchase price.
So the standard is “net gain,” and a net gain greater than (or less than) 0.1 Kelvin means not basically flat?
Any net gain (or net loss), however small, means not flat, if you are confident enough that it’s not an artefact or noise. (Adding the adverb ‘basically’ muddies things a bit, because it implies that you’re not interested in small deviations from flatness.) So: am I quite confident that there has been a deviation from flatness since 1995, and that the deviation is neither artefact nor noise? Yes. But you knew that already, so I’ll go deeper.
You earlier referred to the Phil Jones interview where he stated that the warming since 1995 is ‘only just’ statistically insignificant. I don’t know enough about testing autocorrelated time series to check that, but I’m willing to pretty much trust him on this point.
OK, so every so often on Less Wrong you see a snippet of Jaynes or a popular science article presented in the context of a frequentism vs. Bayesianism comparison. I’ve gone to bat before (see that first link’s discussion) to explain why pitting the two against each other seems wrong-minded to me. I’ve yet to see an example where frequentist methods necessarily have to give a different result to Bayesian methods, just by virtue of being frequentist rather than Bayesian. I see the two as two sides of the same coin.
Still, there are certain techniques that are more associated with the frequentist school than the Bayesian. One of them is statistical significance testing. That particular technique gets a lot of heat from statisticians of all sorts (not just Bayesians!), and arguably rightly so. People are liable to equate statistical significance with practical significance, which is simply wrong, and to dogmatically reject any null hypothesis whose p-value slips under a particular bar. On this point, I have to agree with the critics. As far as I can tell, there are too many people who fundamentally misunderstand significance tests, and as someone who does understand them (or I think I do—maybe that’s just the Dunning-Kruger effect talking) and finds them useful, that disappoints me.
In the end, you have to exercise judgment in interpreting significance tests, like any other tool. Just because a test limps over the magic significance level with a p-value of 0.049 doesn’t mean you should immediately shitcan your null hypothesis, and just because your test falls a hair short with an 0.051 p-value doesn’t mean there’s nothing there.
To get more specific: that net warming since 1995 has been only ‘just’ statistically insignificant does not mean there was no warming. It means that, under a particular model, the null hypothesis of no overall trend cannot be rejected. It could be because there really is no trend. Or there might be a true trend, but your data are too noisy and too few. Or the test could be cherry-picked. You have to exercise judgment and decide which is most likely. I believe the last two possibilities are most likely: I can see the noise with my own eyes, and apparently 1995 is the earliest year for which the warming since that year is statistically insignificant, which would be consistent with cherry-picking the year 1995.
Which is why I reject the null hypothesis of no net temperature change since 1995, even though the p-value of Phil Jones’ test is presumably a bit higher than 0.05.
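To put a rough number on the ‘too noisy and too few’ possibility, here is a hedged R sketch. The trend (0.01 K/year) and the year-to-year scatter (0.1 K) are assumptions of mine for illustration, not estimates from any of the real data sets; with magnitudes of that order, a 15-year window misses a real trend a sizeable fraction of the time.

    # How often does a genuinely warming but noisy 15-year series come out
    # statistically insignificant? (All magnitudes are assumed, not fitted.)
    set.seed(42)
    yr <- 1:15
    insignificant <- replicate(5000, {
      anom <- 0.01 * yr + rnorm(15, sd = 0.10)   # true trend plus noise
      summary(lm(anom ~ yr))$coefficients["yr", "Pr(>|t|)"] > 0.05
    })
    mean(insignificant)  # share of windows where real warming looks 'flat'
    # (This ignores autocorrelation, which would weaken the test further.)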
That may be true, but so what? Characterization of evidence != interpretation of evidence. Agreed?
They are distinct concepts.
I get the feeling that you think calling the last decade of temperatures ‘flat’ is characterization and not interpretation, and I would disagree. When I say temperatures have risen overall, that’s an interpretation. When you say they have not, that’s an interpretation. Either interpretation is defensible, though I believe mine is more accurate (but of course I would believe that).
Why not? It’s a short-term detour in a larger overall trend.
Right, but if you compare the housing price detour to the noise in the house price data, it’s relatively way way bigger than the El Niño deviation compared to the noise in the temperature data.
I pulled the temperature data behind this plot and regressed temperature on year. Then I calculated the standard deviation of the residuals from the start of the time series up to 1998 (when the El Niño kicked in). The peak in the data (at ‘year’ 1998.08, with a value of 0.6595 degrees) is then 3.9 sigmas above the regression line.
Look back at the home price graph—maybe that particular graph’s been massively smoothed, but the post-peak drop looks like way more than a 4 sigma decline: I’d eyeball it as on the order of 10-20 sigmas—and that’s a big underestimate because the standard deviation is going to be inflated by what looks like a seasonal fluctuation (the yearly-looking spikes). The El Niño is big and bold, no doubt about it, but it’s a puppy compared to the housing price crash.
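For anyone who wants to reproduce that arithmetic, it amounts to the following R sketch; temps is a hypothetical data frame standing in for the plot’s source data, with year and anomaly columns.

    # The sigma calculation described above; 'temps' is assumed to have
    # 'year' and 'anomaly' columns taken from the plotted data.
    fit  <- lm(anomaly ~ year, data = temps)
    pre  <- temps$year < 1998               # residuals before the El Niño
    sdev <- sd(residuals(fit)[pre])         # noise level of pre-1998 residuals

    peak <- temps[which.max(temps$anomaly), ]             # the 1998.08 spike
    (peak$anomaly - predict(fit, newdata = peak)) / sdev  # sigmas above trend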
Adding the adverb ‘basically’ muddies things a bit, because it implies that you’re not interested in small deviations from flatness.
Of course it muddies things and we should not be interested in small deviations. That’s the basic point of your argument. The only question is how small is small.
When I say temperatures have risen overall, that’s an interpretation. When you say they have not, that’s an interpretation
Well, can you give me an example of a statement about temperature in the last 10 years which is not an “interpretation”?
Right, but if you compare the housing price detour to the noise in the house price data, it’s relatively way way bigger than the El Niño deviation compared to the noise in the temperature data.
The El Niño is big and bold, no doubt about it, but it’s a puppy compared to the housing price crash.
So what? In 1998, would it have been wrong to say that global surface temperatures had risen (relatively) rapidly over the previous few years?
Of course it muddies things and we should not be interested in small deviations. That’s the basic point of your argument.
?!
The point I was making in the first 550 words of the grandparent comment is that one shouldn’t automatically disregard a small deviation from flatness merely because it’s (barely) statistically insignificant. I am not sure how you interpreted it to mean that ‘we should not be interested in small deviations.’
Well, can you give me an example of a statement about temperature in the last 10 years which is not an “interpretation”?
A statement that’s a few written words or sentences? I doubt it. Trying to summarize a complicated time series in a few words is inevitably going to mean not mentioning some features of the time series, and your editorial judgment of which features not to mention means you’re interpreting it.
So what?
You should know, you asked me ‘Why not?’ in the first place.
In 1998, would it have been wrong to say that global surface temperatures had risen (relatively) rapidly over the previous few years?
Practically, yes, because that claim carries the implication that the El Niño spike is representative of the warming ‘over the previous few years.’
So the standard is “net gain,” and a net gain greater than (or less than) 0.1 Kelvin means not basically flat?
My linear regressions based on NOAA data (I was stupid and lost the citation for where I downloaded it) have 0.005-0.007 K/year since 1880; 0.1 to 0.2 K in a decade is beating the trend.
What are the uncertainties on each of these?
I took the liberty of downloading the GISTEMP data, which I suspect are very similar to the NOAA data (because the GISTEMP series also starts at 1880, and I dimly remember reading somewhere that the GISS gets land-based temperature data from the NOAA). Regressing anomaly on year, I get a 0.00577 K/year increase since 1880, consistent with Robin’s estimate. R tells me the standard error on that estimate is 0.00011 K/year.
However, that standard error estimate should be taken with a pinch of salt for two reasons: the regression’s residuals are correlated, and it is unlikely that a linear model is wholly appropriate because global warming was reduced mid-century by sulphate emissions. Caveat calculator!
(ETA: I just noticed you wrote ‘these,’ so I thought you might be interested in the trend for the past decade as well. Regressing anomaly on year for the past 120 monthly GISTEMP temperature anomalies has a trend of 0.0167 ± 0.0023 K/year, but the same warning about that standard error applies.)
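For concreteness, the computation above is essentially the following R sketch, with gistemp standing in for a data frame of anomalies with year and anomaly columns; the commented lines show one common way (via the sandwich and lmtest packages) to get standard errors that don’t assume independent residuals.

    # Trend since 1880; 'gistemp' is an assumed data frame with
    # 'year' and 'anomaly' columns.
    fit <- lm(anomaly ~ year, data = gistemp)
    summary(fit)$coefficients["year", c("Estimate", "Std. Error")]

    # The default standard error trusts independent residuals, which these
    # are not. One common correction, if sandwich/lmtest are installed:
    # library(sandwich); library(lmtest)
    # coeftest(fit, vcov = NeweyWest(fit))   # autocorrelation-robust SEs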
I have no idea. Varying the starting point from ten to thirty years ago with Feb 2010 as the endpoint puts the slope anywhere in the range [-0.0001,0.2], so it must be fairly large on the scale of a decade.
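(A hedged R sketch of the start-point sweep just described; yr and anom are assumed vectors of fractional years and anomalies, ending at Feb 2010.)

    # Least-squares slope for each start year from 1980 to 2000, always
    # ending at the last observation; 'yr' and 'anom' are assumed vectors.
    slopes <- sapply(1980:2000, function(start) {
      keep <- yr >= start
      coef(lm(anom[keep] ~ yr[keep]))[2]   # the fitted slope, K/year
    })
    range(slopes)   # the quoted spread came from a sweep of this kind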
Your regression package doesn’t report uncertainties? (Ideally this would be in the form of a covariance matrix.)
My regression package is a tab-delimited data file, a copy of MATLAB, and least-squares.
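(For what it’s worth, plain least squares does report the uncertainty if you ask; a minimal R sketch, again with assumed yr and anom vectors:)

    # Ordinary least squares yields the slope's standard error directly.
    fit <- lm(anom ~ yr)
    summary(fit)$coefficients   # estimates with standard errors
    vcov(fit)                   # the covariance matrix mentioned above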