Testwiki:Reference desk/Archives/Mathematics/2009 April 11
Mathematics desk: < April 10 | << Mar | April | May >> | Current desk >
Welcome to the Wikipedia Mathematics Reference Desk Archives

The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 11
Limit involving moment generating functions
Hi there - I'm looking to prove the result that if X is a real R.V. with moment generating function <math>M_X(\theta) = E[e^{\theta X}]</math>, then if the MGF is finite for some <math>\theta > 0</math>, <math>\lim_{x \to \infty} x^n \Pr(X > x) = 0</math> for every n. I really have absolutely no clue where to start - is it central limit theorem related? I could really use a hand on this one, thanks a lot!
Otherlobby17 (talk) 06:51, 11 April 2009 (UTC)
- Look at the integral that defines <math>M_X(\theta)</math>, namely <math>\int_{-\infty}^{\infty} e^{\theta x}\,dF(x)</math>. If it is finite, then <math>e^{\theta x}(1 - F(x)) \to 0</math> as <math>x \to \infty</math>, which is stronger than you need. McKay (talk) 09:14, 11 April 2009 (UTC)
- Fantastic, thanks very much buddy :) Otherlobby17 (talk) 04:15, 12 April 2009 (UTC)
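McKay's observation can be sanity-checked numerically. A minimal Python sketch, using X ~ N(0, 1) as an example (its MGF, exp(theta^2/2), is finite for every theta): the product exp(theta*x) * P(X > x) should shrink toward 0 as x grows.

```python
from math import erfc, exp, sqrt

# Check that exp(theta * x) * P(X > x) -> 0 for a standard normal X,
# whose MGF is finite for every theta.
theta = 1.0
for x in (2, 5, 10):
    tail = 0.5 * erfc(x / sqrt(2))  # P(X > x) for a standard normal
    print(x, exp(theta * x) * tail)
```

The printed products fall off rapidly, consistent with the exponential bound.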
Graph
Is it possible for the graph of a continuous function to touch the x-axis without there being a repeated root? 92.0.38.75 (talk) 13:02, 11 April 2009 (UTC)
- Are you referring to polynomial functions, rather than the less restrictive class of continuous functions? It may be my own ignorance, but I don't know the definition of a repeated root for an arbitrary continuous function. But in the case of a polynomial function, any instance where the function touches but does not pass through the x-axis must be a repeated root. To see this (assuming you have had calculus), note that the derivative at the root must be zero for the curve to just barely touch the axis. Write the polynomial as <math>f(x) = \prod_{i=1}^{n} (x - r_i)</math> and take the derivative first:
- <math>f'(x) = \sum_{b=1}^{n} \prod_{i \neq b} (x - r_i)</math>
- If the root is <math>r_a</math>, the factor <math>(x - r_a)</math> appears in every product except one (when b = a). Now set <math>x = r_a</math>, as well as setting the expression equal to zero (knowing that the derivative is zero at the root); all that is left is
- <math>\prod_{i \neq a} (r_a - r_i) = 0.</math>
- Ergo there must be another root <math>r_i</math> equal to <math>r_a</math>, or equivalently, the multiplicity of the root is greater than one.
- Hope this helps, --TeaDrinker (talk) 15:01, 11 April 2009 (UTC)
What type of root does the absolute value function |x| have at 0? What about <math>x|x|</math> at 0? I think the concept "repeated root" is usually only defined for functions with derivatives. The order of the root is the number of values f(x), f'(x), f''(x), ... which are zero. McKay (talk) 01:50, 12 April 2009 (UTC)
- You could extend that to say that if a function is continuous but not differentiable at a zero, then the zero has order 0, analogously to the way one sometimes thinks of continuous functions as being 0-times differentiable, or C0. I don't know if this is useful for anything though. Algebraist 11:44, 12 April 2009 (UTC)
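The polynomial argument above can be illustrated concretely. A small Python sketch (the polynomial is an arbitrary example, not from the thread): f(x) = (x - 1)^2 (x + 2) has a double root at x = 1, and a numerical derivative confirms it vanishes there while the curve stays on one side of the axis.

```python
def f(x):
    # Example polynomial with a double root at x = 1; it touches the
    # x-axis there without crossing (the other factor stays positive).
    return (x - 1) ** 2 * (x + 2)

def df(x, h=1e-6):
    # Central-difference numerical derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

print(f(1.0))                     # 0.0: x = 1 is a root
print(abs(df(1.0)) < 1e-9)        # True: the derivative vanishes there too
print(f(0.9) > 0 and f(1.1) > 0)  # True: the curve stays above the axis
```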
What symbol is this?
Ran across this symbol in a mathematics book but I have no idea what it is, or how to type it up in LaTeX. It's a bit like a cursive capital X with a short horizontal line through the middle. My first instinct was <math>\chi</math>, but it looks nothing like it. 128.86.152.139 (talk) 14:17, 11 April 2009 (UTC)
- Hmm, I don't know anything fitting that description, but have you tried the comprehensive LaTeX symbols list? --TeaDrinker (talk) 14:23, 11 April 2009 (UTC)
- I'm going through it now. :( But it's incredibly long... 128.86.152.139 (talk) 14:36, 11 April 2009 (UTC)
- What book is it? What's the context in which the symbol was used? --TeaDrinker (talk) 14:45, 11 April 2009 (UTC)
- If you'd have said a vertical line, I'd have said Zhe. It's a shame you didn't. 163.1.176.253 (talk) 14:57, 11 April 2009 (UTC)
don't forget that a lot of people cross weird things. a close acquaintance crosses their v's! Maybe it's just a fancy x, so that you don't think it's a normal x? :) 79.122.103.33 (talk) 15:49, 11 April 2009 (UTC)
- Do you mean <math>\mathfrak{X}</math>? That's just a fraktur capital X (\mathfrak{X} in LaTeX). Algebraist 16:37, 11 April 2009 (UTC)
- Brilliant, thanks. For context, my lecturer uses it to denote a set of data <math>\mathfrak{X}</math>. Is there a non-Blackletter-style font, though? 128.86.152.139 (talk) 02:34, 12 April 2009 (UTC)
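On the typesetting side, a short LaTeX sketch of the usual options (package names are the standard ones, but worth double-checking in your distribution):

```latex
\documentclass{article}
\usepackage{amssymb}   % provides \mathfrak
\usepackage{mathrsfs}  % provides \mathscr
\begin{document}
$\mathfrak{X}$ % fraktur (blackletter) capital X
$\mathcal{X}$  % calligraphic alternative, no extra package needed
$\mathscr{X}$  % script alternative, less blackletter-looking
\end{document}
```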
how does variable-length testing work?
let's say I buy a rigged coin (it slightly favors one side) but forget which side is favored.
could I just write a script that I feed throws into one after the other, and at each stage it tries that many throws with a fair coin (for example, at throw #5 it throws a fair coin five times), but REPEATEDLY, a MILLION times, to see how many times the fair coin behaves the same way under that many throws, i.e. as a percentage?
Then if the fair coin only behaves that way 4% of the million times, would it be 96% confident that the currently winning side is weighted?
here are real examples I just ran with a script: if at throw #10 the count is 7 heads to 3 tails (70% heads), it ran a million runs of ten throws and came up in 172331 of them (17%) with at least that many heads. So it would report 83% confidence that heads are weighted.
if at throw #50 the count is 35 heads to 15 tails (70% heads), it ran a million runs of fifty throws and came up in 3356 of them (0.33%) with at least that many heads. So it would report 99.67% confidence that heads are weighted.
#1: t
0 heads 1 tail
50% conf. heads weighted
#2: t
0 heads 2 tails
50% conf. heads weighted
#3: h
1 head 2 tails
50% conf. heads weighted
...
#10: h
7 heads 3 tails
83% conf. heads weighted
...
#50: h
35 heads 15 tails
99.7% conf. heads weighted
is that really how statistics works? if I write my script like I intend to, will it be accurate? Also, how many decimal places should I show if I am running the 'monte carlo' simulation with a million throws?
Is a million throws accurate enough to see how often a fair coin behaves that way, or should I up it to a billion or even more? could I use a formula instead, and if so, what? (I don't know stats.)
Thanks! 79.122.103.33 (talk) 15:30, 11 April 2009 (UTC)
- The confidence interval is much tighter than that, see binomial distribution. 66.127.52.118 (talk) 20:13, 11 April 2009 (UTC)
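The procedure described in the question can be sketched in a few lines of Python (function names are mine): estimate the chance that a fair coin produces at least the observed number of heads, both by simulation and exactly from the binomial distribution. The exact values match the counts reported above (176/1024 ≈ 17.2% for 7-of-10, ≈ 0.33% for 35-of-50), so a million runs was already plenty here.

```python
import random
from math import comb

def mc_tail_prob(n_flips, observed_heads, trials=100_000):
    """Monte Carlo estimate of P(at least observed_heads in n_flips fair flips)."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.getrandbits(1) for _ in range(n_flips))
        if heads >= observed_heads:
            hits += 1
    return hits / trials

def exact_tail_prob(n_flips, observed_heads):
    """The same probability computed exactly from the binomial distribution."""
    return sum(comb(n_flips, k)
               for k in range(observed_heads, n_flips + 1)) / 2 ** n_flips

print(exact_tail_prob(10, 7))   # 0.171875 = 176/1024, the ~17% reported above
print(exact_tail_prob(50, 35))  # ~0.0033, the ~0.33% reported above
print(mc_tail_prob(10, 7))      # simulation estimate, close to the exact value
```

The exact formula makes the simulation unnecessary, which is the point of the reply above.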
why does modulus of a random integer in a high range favor lower answers SO SLIGHTLY?
say rand() returns 0-32767 but you want 0-99 - you can just do rand() % 100, which is a pretty widely accepted programming practice but results in a very slightly skewed distribution.
My questions are:
- why did I have to do it a BILLION times (which is a huge number) to see this nice and clear pattern?
- why is the pattern so TINY?
- why does the switch from over the expected count (were it an even distribution) to under the expected count CLEARLY happen at 66? wouldn't 50 make more sense? It's a huge and clear shift, and I bet it would repeat around there, given that the numbers above and below it are clearly all in line; it's not a statistical fluke (e.g. it's not that 64 through 67 were just a hair away from each other and 66 "happened to" win out), it follows the pattern decisively... I wonder why 66 (about two thirds of 99) is where it happens; it seems odd... why?
So: what are the mathematical reasons for such a tiny, tiny slight favoring of the lower modulus numbers (it seems like about 0.1% over a billion iterations), why doesn't it show up over a million iterations, and why does the shift from over to under the expected number happen at 66 of 99 instead of something sensible like 50?
Thank you! 79.122.103.33 (talk) 17:43, 11 April 2009 (UTC)
- I'm not quite sure I understand the problem. The remainders from 0 through 67 occur 328 times in the range, while the remainders from 68 through 99 only occur 327 times. So we would expect any low number (0–67) to occur with probability 328/32768 = 1.001% and any high number (68–99) with probability 327/32768 = 0.998%. It easily drowns in sheer randomness for too few trials. I have no idea why your result seems to treat 67 as a high number though. That seems weird. —JAO • T • C 18:03, 11 April 2009 (UTC)
- Jao's answer is correct. For another explanation of why the shift happens where it does, think of a system where rand() returns numbers from 0-5 (like a die). Now, if you wanted a number from 0 to 3, you could do rand() % 4, with the following possible outcomes:
- rand() returns 0 - result 0
- rand() returns 1 - result 1
- rand() returns 2 - result 2
- rand() returns 3 - result 3
- rand() returns 4 - result 0
- rand() returns 5 - result 1
- It is easy to see that 0 or 1 would occur twice as often as the other numbers. If you made a similar table for 0-32767 and 0-99, you would see that the numbers 0-67 occur once more than the other numbers (namely, when the generator returns a number between 32700 and 32767). decltype (talk) 18:43, 11 April 2009 (UTC)
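Decltype's table scales up directly. A quick Python check (a sketch mirroring Jao's counts) tallies how often each remainder appears across the full 0-32767 range:

```python
# Tally how often each remainder 0..99 occurs among the 32768 possible
# rand() outputs 0..32767.
counts = [0] * 100
for v in range(32768):
    counts[v % 100] += 1

print(counts[0], counts[67])   # 328 328: low remainders get one extra hit
print(counts[68], counts[99])  # 327 327: high remainders
```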
- thanks for the answers! They make sense. So how would you correctly choose a number between 0 and 3 inclusive using a generator that goes from 0-5 inclusive, in an equally distributed way? Should you just reflip on a 4 or 5, no matter how many times you keep getting it? 79.122.103.33 (talk) 21:25, 11 April 2009 (UTC)
To get uniformity at little cost, just reject values of rand() greater than 32699. Incidentally, it is usually recommended to use division rather than mod for this problem. That is, instead of rand() % 100, use rand() / 327 (after rejecting values above 32699). If rand() was perfectly random, it wouldn't make any difference, but division is considered less likely to magnify the imperfections of imperfect random number generators. McKay (talk) 01:45, 12 April 2009 (UTC)
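McKay's recipe, rejection followed by division, can be sketched as follows (Python, with random.randrange standing in for C's rand(); a minimal illustration, not a production generator):

```python
import random

def uniform_0_to_99():
    """Uniform draw from 0..99 using a 0..32767 source.

    Values above 32699 are rejected, leaving 32700 = 327 * 100 equally
    likely values; dividing by 327 then maps exactly 327 source values
    to each result (division rather than mod, as recommended above)."""
    while True:
        v = random.randrange(32768)  # stand-in for rand()
        if v <= 32699:
            return v // 327
```

The same idea answers the die question: reject a 4 or 5 and roll again. The number of rolls is unbounded in the worst case, but the expected number is only 6/4 = 1.5.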
okay then what is the answer
this is in relation to my thread two above (about weighted coins). so how many flips out of how many would I need to see before I can be sure my coin isn't fair, at 75%, 90%, 95%, 98.5%, 99%, 99.9% confidence?
what is the formula? (this is not homework) 79.122.103.33 (talk) 21:21, 11 April 2009 (UTC)
- The number of flips of the weighted coin that are necessary to ascertain which direction the weight is in (or to ascertain that it is weighted at all) to a specified level of confidence depends on the extent of the weight. At a fixed level of confidence, a coin with 2/3 probability of landing heads will be determined to be weighted much sooner than a coin with 501/1000 probability of landing heads. So the answer depends upon the amount of skew. Eric. 131.215.159.99 (talk) 23:32, 11 April 2009 (UTC)
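Eric's point can be made quantitative with the usual normal-approximation sample-size estimate. A Python sketch (the function name and formula framing are mine; it assumes the heads-probability p > 1/2 is known and a one-sided test): the required number of flips blows up as p approaches 1/2.

```python
from statistics import NormalDist

def flips_needed(p, confidence):
    """Rough number of flips so a coin with heads-probability p (> 0.5)
    will, on average, produce enough excess heads to reject fairness at
    the given one-sided confidence level (normal approximation)."""
    z = NormalDist().inv_cdf(confidence)
    return (z / (2 * p - 1)) ** 2

print(round(flips_needed(2 / 3, 0.95)))    # a 2/3 coin: a couple dozen flips
print(round(flips_needed(0.501, 0.95)))    # a 501/1000 coin: hundreds of thousands
```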
how to do monte carlo properly
if after doing n flips and getting a certain number of heads, I want to be exactly 95% sure that the results show my coin favors heads (isn't fair), but I'm really bad at statistics and want to do monte carlo instead, could I see what the most heads is in 20 runs (20 because 19/20ths is 95%) by making a list of "most heads out of n flips in 20 runs" a billion times, averaging those numbers, and getting my 95% threshold for n?
For example, if I want to see what number out of 50 flips my coin has to beat for me to be exactly 95% sure that it isn't fair, do I average a billion metaruns of "most heads out of 50 flips, in 20 tries"?
sorry that I'm such a zero at statistics. this must be so frustrating to anyone who actually knows what's going on. anyway, does this proposed methodology work for my intended confidence level (95%)?
if not this, then what is the correct monte carlo method for the 95% interval? Thanks! 79.122.103.33 (talk) 21:45, 11 April 2009 (UTC)
- The distribution of the number of heads in 20 tosses of a fair coin is approximately a normal distribution with a mean of 10 and a variance of 5, so a standard deviation of sqrt(5). In a normal distribution, 95% of observations are less than 1.65 standard deviations above the mean (since your question is "does my coin favour heads", this is a one-sided test). 10 + 1.65 × sqrt(5) is approximately 13.7. So the probability that a coin that does not favour heads will score more than 13 heads in 20 tosses is less than 5%. So if your coin scores 14 or more heads, you can be "95% certain" that it favours heads. See hypothesis testing and Z-test for more details. Gandalf61 (talk) 12:25, 12 April 2009 (UTC)
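The normal approximation above can be cross-checked against the exact binomial tail (a Python sketch). Without a continuity correction the approximation is slightly optimistic here: 14-of-20 actually has a tail probability just above 5%, so an exact 5% test would demand 15 heads.

```python
from math import comb

def tail_prob(n, k):
    """Exact P(at least k heads in n tosses of a fair coin)."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

print(tail_prob(20, 14))  # ~0.0577: just above the 5% cutoff
print(tail_prob(20, 15))  # ~0.0207: comfortably below it
```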
- You need to know the prior probability that the coin is biased before you can answer such a question precisely. See hypothesis testing (mentioned by Gandalf) for some discussion. Really, you've asked the question a couple of very different ways: 1) you have a coin that is known to be biased, but you're not sure if it's towards heads or towards tails (let's say there's an implicit assumption that it's a 50-50 guess between heads-biased and tails-biased); or 2) you have a coin that might be biased (with some unknown probability) and you want to check. Case 1 is straightforward: the null hypothesis is that the coin is biased towards tails, then compare the outcome of your experiment with the binomial or normal distribution. Case 2 is harder: for example, say you have 1000 coins, of which 999 are fair and one is 60-40 biased towards heads. You pick one of those coins uniformly at random, flip 20 times and get 14 heads. Are you 95% confident that you picked the biased coin? Nope, because of the prior probability distribution. In this case you'd use Bayes's theorem in conjunction with the known 0.999 prior probability to interpret the result. But if you don't know the prior probability, it's not so clear what to do. 66.127.52.118 (talk) 12:46, 12 April 2009 (UTC)
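The 1000-coin example works out as follows (a Python sketch of the Bayes computation; the numbers are the ones given above):

```python
from math import comb

# Prior: 1 coin in 1000 is 60-40 biased towards heads; observe 14 heads in 20.
prior_biased = 0.001
heads, flips = 14, 20

like_biased = comb(flips, heads) * 0.6 ** heads * 0.4 ** (flips - heads)
like_fair = comb(flips, heads) * 0.5 ** flips

posterior = (prior_biased * like_biased) / (
    prior_biased * like_biased + (1 - prior_biased) * like_fair
)
print(posterior)  # ~0.003: even after 14/20 heads, almost surely a fair coin
```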
- not knowing the prior probability, what can one do? If a magician lets you check a coin he's using, what are you supposed to guess for the chances it's fair? How would you check? (to be 90% / 95% / 98% / 99% etc. sure in your conclusion?) Thanks 94.27.222.70 (talk) 22:29, 12 April 2009 (UTC)
- That is to some extent a question about philosophy rather than mathematics. See Bayesian probability and frequency probability (as well as statistical hypothesis testing already mentioned) for some discussion. 66.127.52.118 (talk) 01:19, 13 April 2009 (UTC)