I feel like I should comment on this, but Kiwi probably has a better handle on the math here anyway.

See, that is where physics is superior. We just notice that it works and is correct, and then don't bother about such small details.

The rigorous logical proofs are to a large extent the whole point of math. We want to be able to demonstrate with certainty that whatever we're trying to do works, not wave our hands and say "meh, good enough" and hope it really is.
 
See! Madchemist is here now! I wasn't wrong!

SCAN MADCHEMIST
 

You hunt people on their birthdays. Scanning is reserved for the other days of the year.
 
So if the p-values are above the significance value, that would mean the null hypothesis is valid, because they represent the probability that the results could have resulted from the null. They represent a way of discriminating between hypotheses based on results, because the more consistent results you have, the more unlikely it becomes that the null hypothesis could have produced them. :cool: *feels smart*

So both of ours were above the significance value, which means there was a good probability they were caused by the null. And am I correct in guessing that you derive the p-value by taking 1/2 or whatever it is to the something power?
 

P-values do allow you to reject the null hypothesis (to some level of significance) if they are low enough. Confirming a hypothesis isn't really something you can do with p-values; all you can do is argue that your data is not inconsistent with the null hypothesis ("fail to reject" the null is the usual way of putting it), which is a substantially weaker claim.
 

We avoid saying the null hypothesis is valid; instead, we "do not reject the null hypothesis". It's the difference between "innocent" and "not (found) guilty".

The p-value is the probability, given our assumptions about the distribution, of seeing a result at least as extreme (compared to the null hypothesis) as the one we actually saw. A high p-value means our result was boring; a low p-value means our result was fairly extreme. If the p-value is lower than our chosen significance level we reject the null hypothesis, yes. Typically this is 0.05, i.e. 5%.

It would be better to say something like "the null hypothesis is adequate to explain the observed data", though. It's a hypothesis, and as such does not cause or affect things. The question is at what confidence level our observed data would no longer be consistent with our null hypothesis (or, for Bayesians, how likely the null hypothesis is to be the true state).

The p-value is found from whichever distribution you assume the situation follows. Here the best option is binomial, rather than the Poisson distribution originally used.
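As a sketch of what that exact binomial computation could look like, using the 5-out-of-9 split mentioned later in the thread against a fair-coin null (the helper names here are purely illustrative):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_test_two_sided(k, n, p=0.5):
    """Two-sided exact binomial p-value: total probability of every
    outcome no more likely under the null than the observed one."""
    observed = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= observed * (1 + 1e-9))

# 5 successes out of 9 trials against a fair-coin null:
print(binom_test_two_sided(5, 9))  # 1.0 -- as boring as a result can be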

Edited twice for reduced risk of heretical interpretation.
 
You guys seem pretty smart, but can you answer the age-old question:

What is true if there are no cell phones in a room?
 

Depends what you want to do and whether or not this:

[image: Snom 300 IP phone]


Or this:

[image: personal computer]


are in said room.
 
What about all the humans currently walking on the sun?
Did you know they are all wearing green shirts?
 
Congrats to the winners and thanks to the GM. Nicely done there at the end. Spockyt did well there.
 
Binomial does converge to Poisson, but only for sufficiently large samples and sufficiently small success probabilities (it's actually np which remains constant as n -> infinity, so p must be a function of sample size).
Ah yeah, of course. That is the reason, which I had completely forgotten.
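A quick numerical sketch of that convergence, holding np fixed while n grows (the value of lambda here is chosen purely for illustration):

```python
from math import comb, exp, factorial

lam = 2.0  # hold n*p fixed at this value while n grows

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return lam**k * exp(-lam) / factorial(k)

# P(X = 2) under each model; the binomial column approaches the Poisson one
for n in (10, 100, 10_000):
    print(n, binom_pmf(2, n, lam / n), poisson_pmf(2, lam))
```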

You get enough of a feel to spot by eye if it's worth doing something. If this was some sort of research project or paid job of course I'd actually do it, but getting 5/9 is quite clearly as close to 50% as you can get for an odd number of discrete data.
Sure, and I had also done that. But I was bored, and it takes two minutes to code the p-value, so why not calculate it.

Kiwi, I think you're out-Wagonlitzing Wagonlitz. How does this make you feel?
He is a statistician while I am a physicist and we were talking about statistics; what did you expect?
The rigorous logical proofs are to a large extent the whole point of math. We want to be able to demonstrate with certainty that whatever we're trying to do works, not wave our hands and say "meh, good enough" and hope it really is.
Sure. And we also have rigorous proofs for our equations. It is just that there are a few things which are known to work but which the mathematicians find iffy, like solving differential equations by treating the derivative as a ratio of two infinitesimals. And I also seem to remember that one of those "iffy" things we do has been adopted by mathematicians, but I cannot remember which.
You hunt people on their birthdays. Scanning is reserved for the other days of the year.
It actually is his birthday?
And am I correct in guessing that you derive the p-value by taking 1/2 or whatever it is to the something power?
No. You integrate over the distribution of the test statistic under the assumption that your null hypothesis is true (or sum over it, for a discrete distribution like the binomial).
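For a continuous statistic, that integration might look like the following sketch, assuming a standard normal test statistic and a two-sided test (the function name is illustrative):

```python
from math import erf, sqrt

def two_sided_p_from_z(z):
    """Integrate the standard normal density over the region at least as
    extreme as |z|: p = 2 * (1 - Phi(|z|))."""
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

print(two_sided_p_from_z(1.96))  # ~0.05, the usual significance threshold
```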
A high p-value means our result was boring
Actually, very high p-values can be problematic too. I know that in a pure statistics sense there isn't a cutoff, but if you see a p-value of, say, 99% for your data in physics, then you get really suspicious. After all, you are in the tail, just the other tail. If things fit that well then there is probably something wrong.
 

Depends on whether you're doing a one-tailed or two-tailed test, I suppose. A two-tailed test with a p-value of 0.99 would mean you landed pretty much spot on the expected value. Do that too often and people question the genuineness of the data, but as a one-off it's not a bad thing. As you say, it's fitting almost suspiciously well. You expect it to sometimes just happen, though.

But for a one-tailed test it would mean you could reject the null hypothesis in the opposite direction to the one you were testing, if you cared to do so. It's not fitting at all, just for the opposite reason.
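For a symmetric null distribution, that flip between the tails can be sketched as follows (the helper name is illustrative):

```python
def two_sided_from_one_sided(p_one):
    """For a symmetric null distribution: a one-sided p of 0.99 is a
    one-sided p of 0.01 in the other direction, i.e. 0.02 two-sided."""
    return 2 * min(p_one, 1 - p_one)

print(two_sided_from_one_sided(0.99))  # 0.02: rejectable the other way
```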
 
I would, as mentioned, be really suspicious even if it happened once, since measurements are never that precise. Though I guess there could be something similar to the look-elsewhere effect here which would give such good agreement. Don't know if that was what you hinted at. Though if there is such an effect, you should be suspicious too.

And now it has just started pouring down, and I have to go out in half an hour; lovely, just lovely. Why does this always happen?
 
Yes, Wagon, yesterday was chemist's birthday. The things you learn from rereading old games.