Prologue to Bill Maher & Larry Charles

 
Zyryab
Total Posts:  5
Joined  31-05-2017
 
 
 
08 October 2018 05:05
 

Sam’s approach to anything political is filtered through his hatred of Trump. It doesn’t matter who Kavanaugh was or what he did or didn’t do 36-40 years ago, or whether he lied or not. He was a Trump appointee. Period.

I hope Sam wakes up and realizes his bias. It’s made him completely unhinged and irrational on more than one issue.

 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
08 October 2018 05:53
 

Analytic, you criticize but offer nothing better.  I provided an explicit and mathematically valid probabilistic model. What is a better model than saying, given truthful, the reported detail matches the actual detail with probability 1?

Semantics here, again: I’m using truthful in the sense that the statements are true, not that the person reporting is free from deceit.  If a better model requires formalizing things in a different way, I’m down with that.

I’m used to thinking in terms of likelihood factors (sometimes called Bayes factors), and I interpret Sam’s comment as a narrow statement about the relative likelihood of her testifying that an accomplice was present given truthful testimony vs. given untruthful testimony.  This is perfectly sensible to me. 

If your complaint is that it is incomplete in the sense that you can’t get to the posteriors from it, then I’m in agreement with you.  But that doesn’t make it meaningless or erroneous.

“asking the FBI to interview a hostile accomplice says nothing about whether one is lying” - how so?  To me it would seem that an intentional, rational liar would choose not to put an accomplice in the room, exactly to avoid having that person disconfirm the false account.  (“Intentional, rational” is to differentiate from someone who is misrepresenting the truth due to some less-conscious, deep-seated delusion.)

[ Edited: 08 October 2018 06:57 by mapadofu]
 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
08 October 2018 06:56
 

Analytic, on another point, if you really don’t want to come across as harsh or judgmental, then you probably shouldn’t use descriptions like “distorting” and “fundamentally meaningless”.  Principle of charity and all that.

“are distorting Bayesian hypothesis testing and making a fundamentally meaningless comparison.  Again, I don’t mean this to be harsh or judgmental, ”

 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
08 October 2018 07:48
 

Maybe you’re not familiar with this way of representing things.

Consider the two-hypothesis case.  Our posteriors are p(h0|x) and p(h1|x), the posteriors of the two hypotheses given the observation x.

p(h0|x)+p(h1|x)=1 in this binary problem.

Then you can compute the ratio R1 = p(h1|x)/p(h0|x).

Given R1 you can recover p(h1|x) = R1/(1+R1), and p(h0|x) =1/(1+R1).

By Bayes’ theorem, R1 = [p(x|h1)/p(x|h0)] * [p(h1)/p(h0)] = L1 * Pr1: a data likelihood ratio times a prior ratio.

For binary hypothesis testing, you can do all of your thinking in terms of R1 instead of the posteriors p(h|x).

To go beyond binary problems, think of the unnormalized likelihood ratio vector {1, R1} in the binary case, which, when normalized, is just the probability distribution over the two hypotheses.  Another way to say it is that this vector is proportional to the posterior distribution.

In the multiple hypothesis case, you have {1, R1, R2…RN} vector that is proportional to the posterior distribution.

It is perfectly sensible to talk about the relative magnitudes of the components of this vector: Ra/Rb = p(ha|x)/p(hb|x).  The same goes for discussing a probabilistic model and breaking it down into a likelihood factor, a prior, etc.
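
To make the bookkeeping concrete, here is a minimal Python sketch of the same algebra; the priors and likelihoods (0.5, 0.2, 0.6, and the extra R2 = 0.5) are arbitrary placeholder numbers, not anything drawn from the actual discussion.

    # Sketch of the likelihood-ratio bookkeeping described above.
    # All numbers are arbitrary placeholders chosen only to illustrate the algebra.
    p_h0, p_h1 = 0.5, 0.5            # priors p(h0), p(h1)
    p_x_h0, p_x_h1 = 0.2, 0.6        # likelihoods p(x|h0), p(x|h1)

    # R1 = p(h1|x)/p(h0|x) = [p(x|h1)/p(x|h0)] * [p(h1)/p(h0)]  (Bayes' theorem)
    L1 = p_x_h1 / p_x_h0             # data likelihood ratio
    Pr1 = p_h1 / p_h0                # prior ratio
    R1 = L1 * Pr1

    # Recover the posteriors from R1 alone.
    post_h1 = R1 / (1 + R1)
    post_h0 = 1 / (1 + R1)
    print(R1, post_h0, post_h1)      # 3.0 0.25 0.75

    # Beyond the binary case: the unnormalized vector {1, R1, R2, ...} is
    # proportional to the posterior distribution over all the hypotheses.
    Rs = [1.0, R1, 0.5]              # hypothetical third hypothesis with R2 = 0.5
    posterior = [r / sum(Rs) for r in Rs]
    print(posterior)                 # ~[0.22, 0.67, 0.11]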

[ Edited: 08 October 2018 08:30 by mapadofu]
 
TheAnal_lyticPhilosopher
Total Posts:  1028
Joined  13-02-2017
 
 
 
08 October 2018 11:11
 

Analytic, you criticize but offer nothing better.  I provided an explicit and mathematically valid probabilistic model.  What is a better model than saying, given truthful, the reported detail matches the actual detail with probability 1.

In fact I offered an account of how one could use Bayesian reasoning in this case.  I just didn’t bother with a formal model because one is neither necessary nor helpful (in my opinion) in this kind of situation.  And almost any model is better than assuming Ford is truthful and therefore the event happened as described with a probability of 1, then testing the likelihoods of lying against that assumption.

We are too far apart here for me to spend any more time on this.  Best to you, sincerely.  I apologize if I was offensive.

 

[ Edited: 08 October 2018 13:40 by TheAnal_lyticPhilosopher]
 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
08 October 2018 15:31
 

Gussying up your natural language argument (post 16) with a few probabilistic terms is neither necessary nor helpful, and certainly doesn’t make it rigorous.  If you can’t express what you are talking about in terms of a probabilistic model, then you’re not really doing Bayesian reasoning, in my book.

 
TheAnal_lyticPhilosopher
Total Posts:  1028
Joined  13-02-2017
 
 
 
09 October 2018 07:19
 
mapadofu - 08 October 2018 03:31 PM

Gussying up your natural language argument (post 16) with a few probabilistic terms is neither necessary nor helpful, and certainly doesn’t make it rigorous.  If you can’t express what you are talking about in terms of a probabilistic model, then you’re not really doing Bayesian reasoning, in my book.

Look, to use the Bayes Factor in a hypothesis test, one multiplies it by the prior odds of the null in order to get the posterior odds of the null.  So assume the prior odds of a woman telling the truth about a sexual assault are 9 to 1, meaning women tell the truth 90% of the time.  Pretty close to the current estimates.  Now, assume that when they lie, they claim at least one accomplice 50% of the time, and of those, 50% of the time they claim exactly 1 accomplice.  Under your model, this means prior odds of 9 to 1 and a Bayes Factor of 4, for posterior odds of 36 to 1, meaning an increase in the probability of telling the truth to .97—an apparent confirmation of its logic.  A lie with an accomplice suggests an increased chance she’s telling the truth.  This is what Harris suggests verbally, and you formalize.

But note: on your model, any lie a woman tells increases the odds that she is telling the truth.  To see this, just assume she reports the assailant only (the minimum lie she can tell), which women falsely report 50% of the time.  Using the Bayes Factor, then, we have the same prior odds of 9 to 1, now a Bayes Factor of 2, thus posterior odds of 18 to 1—or a probability that she’s telling the truth of 95%.  This makes no sense.  By factoring the probability of a specific lie into the Bayes Factor with “complete truthful reporting” (p=1) as the numerator, any lie she tells increases the odds that she is telling the truth by the very fact that she is lying.  And this is what I mean by a fundamentally meaningless comparison.  Since the probability of any given lie is always less than one, putting a numerator of 1 in the Bayes Factor formula in a test like you’ve constructed will always result in an increasing probability that Ford is telling the truth, meaning that in your hypothesis test one will always “affirm the null” of truth telling, absurdly enough, all the more strongly the wilder the lie she tells. 

I don’t need to justify this reasoning with a formal model.  It is in fact the logic of using the Bayes Factor in a hypothesis test—in my circles, at least, medical research (see Goodman, “Toward evidence-based medical statistics. 2: The Bayes Factor”, Annals of Internal Medicine, 130 (12), p. 1005.).  That said, you can justify your hypothesis testing with any formal probability model you want.  That merely provides, as far as I can tell, a probabilistic and mathematical model for a fundamentally meaningless—even absurd—test.

If you are doing something here other than testing two hypotheses using the Bayes Factor, then I withdraw all my criticisms and would ask for clarification on what you are doing.  In any case, I think I have shown that setting up the Bayes Factor as you do results in a fundamentally meaningless, even absurd, hypothesis test. 
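
For what it is worth, here is a short Python check of the arithmetic in the two examples above; the 9-to-1 prior odds and the 50%/25% reporting figures are the hypothetical inputs already stated, not new data.

    # Check the two worked examples: posterior odds = prior odds * Bayes Factor.
    def posterior_from_odds(prior_odds, bayes_factor):
        post_odds = prior_odds * bayes_factor
        return post_odds, post_odds / (1 + post_odds)   # (odds, probability)

    # Example 1: she reports one accomplice.
    # p(report | truthful) = 1, p(report | lying) = 0.5 * 0.5 = 0.25, so BF = 4.
    print(posterior_from_odds(9, 1 / 0.25))   # (36.0, ~0.97)

    # Example 2: she reports the assailant only.
    # p(report | lying) = 0.5, so BF = 2.
    print(posterior_from_odds(9, 1 / 0.5))    # (18.0, ~0.95)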

 

[ Edited: 09 October 2018 08:11 by TheAnal_lyticPhilosopher]
 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
09 October 2018 08:34
 

Counterintuitive is not absurd.  Again, this is an odd feature that pops out when you, or at least I, formally model this problem.  I don’t see a way to formalize what it means for “Dr. Ford’s testimony is true” other than the way I have done so.

Note that it is not “any lie,” it is “any statement about the details of the event” that tends to increase the likelihood of “true, event occurred as described.”  (Or at least that is how I’d articulate it in English.)  This is kind of intuitive: at least as a heuristic, we tend to be more suspicious of people who are vague or evasive about the details of an event and give more credence to testimony that provides a lot of detail.  (You can go down that road in this case, making a more complex model with more variables representing the additional features and so on, to account for the aspects of her testimony that were vague, but that gets us away from the nub of what I think we’re discussing.)

I’m glad you took the time to explicitly work through a version of the problem.

I follow paragraph 1 as long as p(r = assailant only | lying, no event) = 1/2, p(r = 1 accomplice | lying, no event) = 1/4, and p(r = more than one accomplice | lying, no event) = 1/4.  Here “r” is what the person reporting the crime (Dr. Ford) says occurred.

I’m not 100% sure what you’re trying to get at in the second paragraph.  If the victim reports an assailant-only crime (zero accomplices), the likelihood of an assailant-only crime goes up; the factor is smaller due to the higher proportion of false claims that don’t implicate anyone else, but the general result is the same.  Note that if she reports assailant only (zero accomplices), then the likelihood for having truthfully reported a 1-accomplice crime is driven to zero. 
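
To spell that out, here is a minimal Python sketch of the reporting model as stated: perfect reporting given truthfulness, and the 1/2, 1/4, 1/4 split over reports given lying (the variable names are mine, and r = 2 simply stands in for “more than one accomplice”).

    # Reporting model: perfect reporting if truthful, the stated split if lying.
    def p_report_given_true(r, a):
        # p(r | truthful, actual accomplice count a): report matches a with probability 1
        return 1.0 if r == a else 0.0

    # p(r | lying, no event); r = 0 is "assailant only", r = 2 stands in for "more than one"
    p_report_given_lie = {0: 0.50, 1: 0.25, 2: 0.25}

    # She reports one accomplice: the (truthful, a=1) hypothesis gets a factor of 4.
    print(p_report_given_true(1, a=1) / p_report_given_lie[1])   # 4.0

    # She reports assailant only: (truthful, a=0) still gains, but by a smaller factor.
    print(p_report_given_true(0, a=0) / p_report_given_lie[0])   # 2.0

    # ...and the same report drives the (truthful, a=1) hypothesis to zero.
    print(p_report_given_true(0, a=1) / p_report_given_lie[0])   # 0.0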


One feature of the perfect reporting model is that if the reporter makes contradictory statements, then you’d infer (with certainty, no less) that he/she was lying in at least one of those statements, which is intuitive to me.  This can be “fixed” by allowing some spread in p(report | truthful, event), but for this case, where we’re dealing with an integer count, it doesn’t completely make sense, and it only softens, not eliminates, the counterintuitive aspect.

The only situation I can think of where falsely reporting an accomplice could be more likely would be if the reporter had high confidence that the person she fingered would himself lie and corroborate the false accusation.  Fodder for a movie of the week, but not relevant for Dr. Ford’s situation.

 

 

[ Edited: 09 October 2018 09:45 by mapadofu]
 
TheAnal_lyticPhilosopher
Total Posts:  1028
Joined  13-02-2017
 
 
 
09 October 2018 10:42
 

Your interpretation of the first paragraph is correct. 

I’m not 100% sure what you’re trying to get at in the second paragraph.

Your model stipulates for R “the number of accomplices reported: r_null = no event reported at all, r_0 = event but no accomplices reported, r_1 for reporting the event occurred with one accomplice, etc.”, and you say “p(r_x | T=false, a_null)” is a monotonically decreasing geometric distribution, which means that a_0—i.e. an “event” with no accomplices, and therefore r_0—has the highest probability, which has to be, by definition, less than 1.  This means that p(r_x | T=false, a_null) will either be 0 (for r_null, in which case there isn’t even a problem because there is no report; you specify this at p=0), or it will always be less than 1.  This means in turn that for any false report, the Bayes Factor defined as p(r_1 | true, a_1)/p(r_1 | false, a_null) will always be greater than one, which means any lie Ford tells increases—you say “counterintuitively,” I say absurdly—the probability that Ford is telling the truth, according to the formula for using the Bayes Factor in hypothesis testing [(prior odds)(Bayes Factor) = posterior odds].  This means, then, that the hypothesis (true, a_1) will be “affirmed” even if Ford is lying, and since p(r_x) is monotonically decreasing, the greater the x, i.e. the bigger the lie she tells, the more strongly her truth telling will be affirmed. 

I get that you say you are modeling ‘a lie with a greater number of accomplices means it is less likely to be a lie,’ with an eye toward how this plays “into the posterior for the relative likelihood between the two hypotheses (true, a_1), (false, a_null).”  This is what Harris does verbally.  But in hypothesis testing terms your “model” results in a formal contradiction, namely, that lying increases the probability of telling the truth.  It’s a fallacy structurally built into the model itself, making it a bad model.  I don’t know how to express this fundamental error in your model any better than expressing it in terms of the model itself, according to what I think are the recognized principles of Bayesian hypothesis testing.
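
To illustrate the structural point (not as a claim about the actual model’s numbers), here is a sketch with a placeholder geometric parameter q = 0.5: with perfect reporting in the numerator, the Bayes Factor is 1/p(r_x | false, a_null), which is always greater than 1 and grows with x.

    # Placeholder geometric distribution over the reported accomplice count given lying.
    q = 0.5   # assumed decay parameter, chosen only for illustration

    def p_report_given_false(x):
        # monotonically decreasing over x = 0, 1, 2, ...
        return (1 - q) * q ** x

    for x in range(5):
        bf = 1.0 / p_report_given_false(x)   # numerator p(r_x | true, a_x) = 1
        print(x, bf)
    # 0 2.0, 1 4.0, 2 8.0, 3 16.0, 4 32.0 -- the factor only grows with x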

 

 

[ Edited: 11 October 2018 07:13 by TheAnal_lyticPhilosopher]
 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
09 October 2018 20:03
 

You keep saying that she’s lying, literally “any lie Ford tells”.  We don’t know (observe) that, we only observe what she reports, and have to infer the likelihood that she’s lying.  It is only problematically absurd if you assume she’s lying.

We don’t know that the report is false.

A better way to explain it to me is to formulate an explicit mathematical model that doesn’t have this property.

Here’s a non-informative limit:

A fair 6 sided die is rolled in secret.
Someone reports the number of pips.  50% of the time they report the actual rolled number, no errors.  The other 50% they report a random (independent) number uniformly from 1-6.

The person reports 1 (for concreteness).
p(r=1 | true, die=1)/p(r=1 | false, die=1) = 6, i.e. a big likelihood boost for the “true, actual” state relative to any *specific* “given false” likelihood (any p(r=1 | false, die=n)).

However, if you consider the posterior and marginalize over the (unobserved) state of the die, then p(true report) = p(false report) = 1/2.

The prior probability is a uniform vector with elements 1/12.  There are 12 states because it is [true,false]x[1-6].  I’ll order them [t1, t2…t6 || f1…f6].

The likelihood vector is [6, 0, 0, 0, 0, 0 || 1, 1, 1, 1, 1, 1], which yields 50/50 truth/lying overall, but favors truth over any specific one of the possibilities.

I think this 50/50 result is a special feature of this contrived example, and don’t think it works out for more plausible models, but maybe there is some way of shaping the distributions so that the posterior for true/false (marginalizing over the true state of the event) works out the way you intuit.
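
Here is the die example worked numerically in Python; it is just a direct transcription of the setup above.

    # States: (truthful, die=d) and (false, die=d) for d = 1..6, each with prior 1/12.
    states = [(t, d) for t in ("true", "false") for d in range(1, 7)]
    prior = {s: 1 / 12 for s in states}

    def p_report_given_state(r, state):
        t, d = state
        if t == "true":
            return 1.0 if r == d else 0.0   # truthful: report equals the roll
        return 1 / 6                        # lying: uniform, independent of the roll

    r = 1
    unnorm = {s: prior[s] * p_report_given_state(r, s) for s in states}
    Z = sum(unnorm.values())
    posterior = {s: v / Z for s, v in unnorm.items()}

    # Per-state likelihood ratio favors ("true", 1) over any single "false" state by 6...
    print(p_report_given_state(r, ("true", 1)) / p_report_given_state(r, ("false", 1)))  # 6.0

    # ...yet marginalizing over the die, truth and lying stay at 50/50.
    print(sum(v for (t, d), v in posterior.items() if t == "true"))   # 0.5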

 

 
TheAnal_lyticPhilosopher
Total Posts:  1028
Joined  13-02-2017
 
 
 
10 October 2018 02:59
 

You keep saying that she’s lying, literally “any lie Ford tells”.  We don’t know (observe) that, we only observe what she reports, and have to infer the likelihood that she’s lying.  It is only problematically absurd if you assume she’s lying.

But the model, ostensibly built to compute this likelihood, assumes she’s lying; that’s the denominator in your Bayes Factor.  And that means that under the model, increasing the odds of telling the truth given a lie—which is what p(r_1 | false, a_null) entails—is a logical contradiction.  For the fact that r_1 is a lie is given, and the probability attaches to the specificity of the lie—r_x, meaning the specific lie—not to the question of whether she’s lying.  So on the model, to compute Ford’s likelihood of truthfulness, a lie increases the evidence of that truthfulness, which makes no sense.  The model is thus invalid for leading to—in fact being based on—a logical contradiction.  By assumption.  It’s built right into the model.

Another way to point out your error is in general Bayesian hypothesis test terms, to wit: any probabilistic model that deductively specifies a Bayes Factor greater than 1 is on its face an invalid test.  To be useful in a hypothesis test, the Bayes Factor must be allowed to vary to below 1; otherwise one can never reject the null and affirm the alternative.  Rather, one can only affirm the null with greater degrees of certainty, no matter what data you put in.  In fact, in the medical literature there are even correlates of the decreasing Bayes Factor corresponding to traditional frequentist p-values, like BF=.15 for p=.05, or BF=.005 for p=.001—that sort of thing.  Absent this possibility of varying to below 1, the test is meaningless.  It can’t test hypotheses, only affirm the null.  And that is all your test does, as I’ve indicated in both a concrete example and in the terms of your own model. 

A better way to explain it to me is to formulate an explicit mathematical model that doesn’t have this property.

Honestly, this very approach to the problem is misguided—at least as far as Bayesian hypothesis testing is concerned.  Bayesian hypothesis tests are used to test the relative suitability of probabilistic models, models that, if “real”—and by that I mean ‘express the underlying mechanism of the phenomena under investigation’—would serve as reliable generators of the data; this is what one hopes to discover.  Bayesian hypothesis tests are not first constructed on deductive probabilistic models, ones where the sample space is a priori pre-delineated into assignable True or False outcomes (or whatever other value the random variable takes).  Yes, the logic of Bayesian testing rests on the axioms of probability, and the theorems that follow from them; in that sense the conceptual foundations of the test are both “probabilistic” and deductively valid.  But a true hypothesis test is never based on a “valid probabilistic model” (as you are using the term here) that a priori delineates the sample space.  In fact, by demanding such a model as the basis of a Bayesian test—again, as you are using the term here—you are, in effect, putting the Bayesian cart before the testing horse.  You are presupposing precisely what you are trying to test and explain.  Instead of formulating an explicit mathematical model that predetermines the sample space, one only needs to construct a null and an alternative that are likely probabilistic models of what is given in that space, one of which, if true, would account for the existing data.  In this way one uses the test to determine which model would best fit as the generator of that data.  But to do this, one does not, a priori, construct a probabilistic model that assigns values to the outcomes of the random variable, then base the test on that model.

What I see you asking for here—and what you are doing in your examples—is almost precisely how not to go about a Bayesian hypothesis test.  That you are approaching the problem in this way is why, I think, you are getting—and will invariably get—an invalid model.  Testing a priori assigned probabilities in the sample space of a binary-outcome model, where one part of the Bayes Factor is True and the other is False, will always result in the kind of invalid test you’ve constructed for the Ford-Kavanaugh situation.  Unless I miss my guess, you’ll always get a denominator in the Bayes Factor less than 1 and a numerator at unity, which means the test will always only affirm the null without ever actually testing anything.

I wouldn’t hang my hat on this last assertion without more thought, but either way the “probabilistic models” you are demanding are not the way Bayesian hypothesis testing is done—with the caveat, I guess: in the medical literature, at least.  In any case, for the reasons I’ve specified, your probabilistic model of the Ford-Kavanaugh situation is invalid as “a Bayesian hypothesis test.”  In fact, it’s not even a hypothesis test, just a probabilistic model with a dubious assumption contorted into a Bayesian hypothesis testing framework.  As a tool for assessing the truth or not of Ford’s testimony, it is—and I honestly regret saying this, but I don’t know what else to say, or how to soften it—completely useless (or even, if ever used, worse).

 

[ Edited: 10 October 2018 11:06 by TheAnal_lyticPhilosopher]
 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
10 October 2018 16:55
 

The model covers all of the possibilities; it is the full probability distribution.

The quantity of interest (because it is what Sam called out) is the ratio p(r=1|true,a=1)/p(r=1|false, a=null) [I’m back to counting the number of accomplices, and using null to indicate no attack].  Evaluating this does involve considering the probability of reporting 1 accomplice given that she’s lying and there was no event.  But it also includes the probability of reporting 1 accomplice given true reporting and an attack involving 1 accomplice.

One of the great things about spelling out a complete probability model is that it can never be logically (mathematically) inconsistent; it is what it is.

I don’t find the perfect reporting representation absurd; indeed I find it the natural way of representing true reporting (true in the sense that the report is a true representation of the actual event).

Yes, this facet looked at in isolation always yields a likelihood ratio >1.  But if you consider more complex models that include other observable features, you can find things that would reduce the likelihood (inconsistencies between multiple reports being a good example). 

That a model of a somewhat squishy real-world scenario doesn’t match up with the characteristics you are used to seeing in the class of medical hypothesis testing you are familiar with doesn’t make this model incoherent.  Limited, sure, but it is a valid example to illustrate how Sam’s statement can be represented more formally.

 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
10 October 2018 17:04
 

Analytic, what is a typical kind of Bayesian hypothesis testing problem you’ve worked on, and what is the process for tackling it?

 
TheAnal_lyticPhilosopher
Total Posts:  1028
Joined  13-02-2017
 
 
 
11 October 2018 03:03
 
mapadofu - 10 October 2018 04:55 PM

The model covers all of the possibilities; it is the full probability distribution.

The quantity of interest (because it is what Sam called out) is the ratio p(r=1|true,a=1)/p(r=1|false, a=null) [I’m back to counting the number of accomplices, and using null to indicate no attack].  Evaluating this does involve considering the probability of reporting 1 accomplice given that she’s lying and there was no event.  But it also includes the probability of reporting 1 accomplice given true reporting and an attack involving 1 accomplice.

One of the great things about spelling out a complete probability model is that it can never be logically (mathematically) inconsistent; it is what it is.

I don’t find the perfect reporting representation absurd; indeed I find it the natural way of representing true reporting (true in the sense that the report is a true representation of the actual event).

Yes, this facet looked at in isolation always yields a likelihood ratio >1.  But if you consider more complex models that include other observable features, you can find things that would reduce the likelihood (inconsistencies between multiple reports being a good example). 

That a model of a somewhat squishy real-world scenario doesn’t match up with the characteristics you are used to seeing in the class of medical hypothesis testing you are familiar with doesn’t make this model incoherent.  Limited, sure, but it is a valid example to illustrate how Sam’s statement can be represented more formally.

Well mapadofu, I appreciate that this conversation hasn’t become rancorous, and I’ve enjoyed it so far, but I don’t see it going anywhere from here, so I’m going to excuse myself.

I’ve shown that your model leads to a logically absurd result, incorporating as it does a logical contradiction—that telling a lie increases the odds of telling the truth.  I’ve illustrated this with a concrete example entirely realistic for this situation, and I’ve expressed the logic of that example in terms of your own model.  I’ve also pointed out that your “model” can’t be the basis for a Bayesian hypothesis test because by design it prevents the Bayes Factor from becoming less than 1, meaning that it can only affirm the null—and, even more absurdly, it affirms the null only on the condition that the alternative is true.  I’ve also pointed out that your very approach—this insistence on a probabilistic model a priori delineating the sample space as a specific distribution—is not even Bayesian hypothesis testing; it’s basically its opposite.  You have not explained why any one of these arguments is wrong, much less addressed the significance of all three of them taken together.  All you have done is reassert that your model represents “a valid example” of “how Sam’s statement can be represented more formally,” and now you are proposing, using the same methods, an even more complex model to erect around it, as though that increasing complexity would somehow correct its deficiencies.  And now you are asking me what kind of problems I’ve worked on.

I find this exasperating and see the conversation going nowhere from here, so I’m moving on.  Best to you, sincerely.  It’s been a pleasure.  See you around the forum.

 

 

[ Edited: 11 October 2018 03:24 by TheAnal_lyticPhilosopher]
 
mapadofu
Total Posts:  767
Joined  20-07-2017
 
 
 
11 October 2018 03:50
 

If you spell out the likelihood ratio fully in English, it is: “For someone who is giving a true account of an event involving 1 accomplice, the probability of reporting 1 accomplice is higher than the probability of someone giving the same report when no crime occurred.”  Logical contradiction, not so much.  For me, not even really absurd, though if you find it so, that might say more about your mindset than about the model.

I don’t know where this requirement that likelihood ratios span from <1 to >1 comes from.  I know that it is not a generic feature of all formal probabilistic models; maybe in a specific domain it is a signature of good experimental design, but modeling the real world need not conform to that.

I’ve shown a limiting case where the posterior for “telling the truth” (marginalizing over the other variable in the problem) is unchanged by the report, so at least that is possible.  I just don’t think that the kind of balance in that contrived example will apply to real-world situations.

[ Edited: 11 October 2018 04:42 by mapadofu]
 