<div dir="ltr">Thanks a lot, Neil! <div><br></div><div>I see that I didn't read your earlier message correctly and I now get what you're suggesting. It makes a lot of sense (and the followup discussion also helped me).<br>
<div><br></div><div>Robby</div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Feb 20, 2014 at 10:49 AM, Neil Toronto <span dir="ltr"><<a href="mailto:neil.toronto@gmail.com" target="_blank">neil.toronto@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The connection between my final answer and sampling between 2^(X-1) and 2^X is... mostly my intuition, but I think it can be made rigorous. I noticed that the mean number of bits always seemed to be near 1/p, which is the mean of a geometric distribution with parameter p. So I defined a random variable whose mean log (i.e. mean number of bits) would be close to the same. It appears to have the same characteristics: it looks like an "anything can happen" sort of distribution, and the sample mean never seems to converge, no matter how many samples I feed it.<br>
<br>
Sampling between 2^(X-1) and 2^X is just to make sure the sampling algorithm will return any natural number. (Well, any that will fit in your computer.) I can prove this distribution has no expected value.<br>
<br>
The expected value of a discrete random variable is<br>
<br>
x_0 * p(x_0) + x_1 * p(x_1) + x_2 * p(x_2) + ...<br>
<br>
where the x_i are the possible outcomes, and p(x_i) are their probabilities. It's a generalization of computing the mean of finitely many equally likely outcomes, in which each p(x_i) = 1/n:<br>
<br>
(x_0 + x_1 + ... + x_{n-1}) / n<br>
= x_0 * 1/n + x_1 * 1/n + ... + x_{n-1} * 1/n<br>
<br>
If X ~ Geometric(1/2), its expected value is<br>
<br>
0 * 1/2 + 1 * 1/4 + 2 * 1/8 + 3 * 1/16 + ...<br>
= 1/4 + 2/8 + 3/16 + ...<br>
= 1<br>
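Just to sanity-check that arithmetic, here are the partial sums in Python (a throwaway sketch, since the rest of this thread's code is Typed Racket; the function name is mine, not a library's):

```python
def geometric_mean_partial_sum(p, terms):
    # Partial sum of E[X] for X ~ Geometric(p) with support 0, 1, 2, ...:
    # sum of x * (1-p)^x * p for x = 0 .. terms-1.
    return sum(x * (1 - p) ** x * p for x in range(terms))

# Closed form for comparison: E[X] = (1-p)/p, which is 1 when p = 1/2.
approx = geometric_mean_partial_sum(0.5, 100)
```

The partial sums settle at 1 almost immediately, because the tail terms shrink geometrically.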
<br>
Decision theory and economics are dominated by the idea that making choices to maximize expected values is the best thing to do. But expected values aren't stable, in the sense that applying a common function to a random variable that has an expected value can yield another random variable that doesn't have one. If you exponentiate those geometric outcomes, you get this expected value:<br>
<br>
1 * 1/2 + 2 * 1/4 + 4 * 1/8 + 8 * 1/16 + ...<br>
= 1/2 + 1/2 + 1/2 + 1/2 + ...<br>
= +inf<br>
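You can watch that divergence happen in the partial sums (again a quick Python sketch, with a name I made up):

```python
def exp2_partial_sum(p, terms):
    # Partial sum of E[2^X] for X ~ Geometric(p):
    # sum of (2*(1-p))^x * p for x = 0 .. terms-1.
    return sum((2 * (1 - p)) ** x * p for x in range(terms))

# For p = 1/2 every term is exactly 1/2, so the partial sum after n terms
# is n/2: it grows without bound instead of converging.
```

After 10 terms you get 5, after 2000 terms you get 1000, and so on forever.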
<br>
If you use X ~ Geometric(1/3) and Y = 2^X, you get<br>
<br>
1 * 1/3 + 2 * 2/9 + 4 * 4/27 + ...<br>
= 1/3 + 4/9 + 16/27 + ...<br>
= +inf<br>
<br>
for the expected value of Y. More generally, if X ~ Geometric(p), the formula for the expected value of Y is<br>
<br>
sum_{x=0}^inf 2^x * (1-p)^x * p<br>
= sum_{x=0}^inf (2*(1-p))^x * p<br>
<br>
By the geometric series convergence test, this fails to converge when |2*(1-p)| >= 1, which happens when p <= 1/2. So if p <= 1/2, Y = 2^X has no expected value, and we can use this fact to baffle economics students and impress the ladies.<br>
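That p = 1/2 boundary is easy to check with exact rational arithmetic (a Python sketch using the standard library's `fractions`; the function name is mine):

```python
from fractions import Fraction

def ey_partial_sum(p, terms):
    # Exact partial sum of sum_x (2*(1-p))^x * p, evaluated with rationals
    # so no floating-point error creeps in.
    p = Fraction(p)
    r = 2 * (1 - p)
    return sum(r ** x * p for x in range(terms))

# p = 3/4 > 1/2: the ratio is 1/2, so the series converges to
# p / (1 - 2*(1-p)) = p / (2p - 1) = 3/2.
# p = 1/2: the ratio is exactly 1, every term is 1/2, and the sum diverges.
```

On the convergent side the partial sums close in on p/(2p-1); on the boundary they just march off linearly.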
<br>
The expected value of the random variable I gave (roughly, sampling between 2^(X-1) and 2^X) is bounded below by substituting the lowest possible values (i.e. 2^(x-1) if x > 0, otherwise 0):<br>
<br>
0 * p(x_0) + 1 * p(x_1) + 2 * p(x_2) + 4 * p(x_3) + ...<br>
<br>
The tail of this series starting at 1 is 1/2 * the tail of the series for the expected value of Y starting at 1, which diverges.<br>
<br>
A proof of divergence for the expected value of the final function I gave won't be as easy because the uniform numbers it chooses aren't bounded below by a function that grows without bound (i.e. 2^(X-1)). It'll probably have something to do with the linearity of expectation.<br>
<br>
Linearity, BTW, is what makes expected value popular. It often yields closed-form solutions, and makes expected values scale and add together easily. Still, medians might be a better choice because they always exist. In the Y = 2^X game, if X ~ Geometric(1/2), the median of Y's distribution is $1. If X ~ Geometric(1/4), it's $4, and if X ~ Geometric(1/16), it's $1024.<span class="HOEnZb"><font color="#888888"><br>
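Those medians are easy to compute directly (a Python sketch; `median_of_y` is my own name): the median of X is the smallest x with P(X <= x) >= 1/2, and since 2^x is increasing, it carries the median of X to the median of Y.

```python
def median_of_y(p):
    # Median of Y = 2^X for X ~ Geometric(p) with support 0, 1, 2, ...
    # The CDF of X is P(X <= x) = 1 - (1-p)^(x+1); the median of X is the
    # smallest x where this reaches 1/2, and 2^x maps it to the median of Y.
    x = 0
    while 1.0 - (1.0 - p) ** (x + 1) < 0.5:
        x += 1
    return 2 ** x
```

This reproduces the $1, $4, and $1024 figures for p = 1/2, 1/4, and 1/16.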
<br>
Neil ⊥</font></span><div class=""><br>
<br>
On 02/19/2014 08:07 PM, Robby Findler wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">
Thanks Neil! (I think probably Max and Burke are also interested.)<br>
<br>
What is the connection between the economic philosophers argument<br>
against maximizing expected value, the idea that we should sample<br>
between 2^(X-1) and 2^X, and then the answer that you gave?<br>
<br>
I think I know that the geometric distribution is something you get<br>
by repeatedly playing a game much like that first one, but I'm not<br>
really sure of the details and can't piece together the rest....<br>
<br>
Robby<br>
<br>
<br>
<br>
On Wed, Feb 19, 2014 at 4:02 PM, Neil Toronto <<a href="mailto:neil.toronto@gmail.com" target="_blank">neil.toronto@gmail.com</a><br></div><div><div class="h5">
<mailto:<a href="mailto:neil.toronto@gmail.com" target="_blank">neil.toronto@gmail.com</a>>> wrote:<br>
><br>
> I've CC'd the Racket users mailing list because I thought more people<br>
(e.g. Robby) would be interested in the answer.<br>
><br>
> You can't sample uniformly from the naturals, but you can sample<br>
according to an "anything can happen" sort of distribution. You want a<br>
distribution with no expected value (aka mean, average).<br>
><br>
> Economic philosophers use such distributions to disabuse people of<br>
the idea that maximizing expected value is always the best way to make<br>
decisions. Here's a simple example.<br>
><br>
> 1. Flip a coin until you get heads. Call the number of flips X.<br>
> 2. You receive Y = 2^X dollars.<br>
><br>
> You can flip a fair coin, or a coin biased toward tails. Which coin<br>
do you choose? Obviously the biased coin. But in either case, the<br>
expected value of Y is infinite. Voila! Paradox.<br>
><br>
> (The problem isn't that there's no upper bound. For Y = X, the payoff<br>
is still unbounded, but for any coin with nonzero heads probability, the<br>
expected value is finite.)<br>
><br>
> You want *every* natural to be possible, though, so we'd have to do<br>
something like sample uniformly between 2^(X-1) and 2^X. Here's a<br>
function that does it:<br>
><br>
> (: random-natural/no-mean (-> Real Natural))<br>
> (define (random-natural/no-mean prob-zero)<br>
> (define n (exact-floor (sample (geometric-dist prob-zero))))<br>
> (define m1 (assert (expt 2 n) integer?))<br>
> (define m0 (quotient m1 2))<br>
> (max 0 (random-integer m0 m1)))<br>
><br>
> The "max 0" keeps TR from complaining that `random-integer' returns<br>
an Integer. The `prob-zero' argument is the probability the coin is<br>
heads, which is also the probability this function returns 0.<br>
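For anyone not set up to run Typed Racket, here's a rough Python analogue (my own sketch: instead of calling a stats library it inverts the geometric CDF by hand):

```python
import math
import random

def random_natural_no_mean(prob_zero):
    # Geometric(prob_zero) sample with support 0, 1, 2, ... by inversion:
    # floor(log(V) / log(1 - prob_zero)) for V uniform on (0, 1].
    v = 1.0 - random.random()
    n = math.floor(math.log(v) / math.log(1.0 - prob_zero))
    m1 = 2 ** n      # like (expt 2 n)
    m0 = m1 // 2     # like (quotient m1 2)
    # randrange(0, 1) always returns 0, so the n = 0 case needs no special
    # handling; it plays the role of (max 0 ...) in the Racket version.
    return random.randrange(m0, m1)
```

As in the Racket version, the chance of returning 0 is exactly `prob_zero`, since 0 comes out only when the geometric sample is 0.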
><br>
> This looks sort of like what you want:<br>
><br>
> > (random-natural/no-mean 0.001)<br>
> - : Integer [more precisely: Nonnegative-Integer]<br>
> 56136474695225172011728291802626216994256833545894766283873703181<br>
> 10364986394536406497817120521834403457182656624358136577815679690<br>
> 73469994282060833573766848756431669747238563269112899118707963866<br>
> 08154252648824183995287333693058951314331721341986222320438359883<br>
> 50861513517737507150144340359987088543453799423969409721165923104<br>
> 82128386772489312894482659745630141444108439622157113717027784284<br>
> 7612786588731040573293479397701924913229558559022675650838885440<br>
><br>
> > (random-natural/no-mean 0.001)<br>
> - : Integer [more precisely: Nonnegative-Integer]<br>
> 57<br>
><br>
> If it returns small numbers too often, you can mitigate that by<br>
taking the max of a few of them.<br>
><br>
> If you want a feel for the "average" output, you can put it in terms<br>
of average number of bits:<br>
><br>
> > (mean (build-list 1000 (λ _ (integer-length (random-natural/no-mean<br>
0.01)))))<br>
> - : Real<br>
> 99 407/1000<br>
><br>
> In fact, now that I think about it, this is probably nearly equivalent:<br>
><br>
> (: random-natural/no-mean (-> Real Natural))<br>
> (define (random-natural/no-mean p)<br>
> (random-bits (add1 (exact-floor (sample (geometric-dist p))))))<br>
><br>
> The average number of bits should be near (/ 1 p).<br>
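A matching Python sketch of this shorter version (again my own approximation, with the same hand-rolled geometric sampler standing in for `geometric-dist`), plus a check of the average-bits claim:

```python
import math
import random

def random_natural_no_mean(p):
    # Geometric(p) sample (support 0, 1, 2, ...) by CDF inversion.
    n = math.floor(math.log(1.0 - random.random()) / math.log(1.0 - p))
    # n + 1 uniformly random bits, like Racket's (random-bits (add1 n)).
    return random.getrandbits(n + 1)

# Sanity check: the average number of bits should land near 1/p = 100.
bit_lengths = [random_natural_no_mean(0.01).bit_length() for _ in range(1000)]
average_bits = sum(bit_lengths) / len(bit_lengths)
```

The sample mean of the *values* still refuses to settle down, but the mean bit length hovers near 1/p, just as above.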
><br>
> Neil ⊥<br>
><br>
> ____________________<br>
> Racket Users list:<br>
> <a href="http://lists.racket-lang.org/users" target="_blank">http://lists.racket-lang.org/users</a><br>
</div></div></blockquote>
<br>
</blockquote></div><br></div>