Are two heads better than one?

(eieio.games)

74 points | by evakhoury 7 hours ago

13 comments

  • thomascountz 6 minutes ago
    Let's pretend Alice tells the truth 100% of the time, and Bob still lies 20% of the time. It's easier to intuit that Bob's contribution is only noise.

    Slide Alice's accuracy down to 99% and, again, if you don't trust Alice, you're no better off trusting Bob.

    Interestingly, this also hinges on them being independent. If Bob told the truth 20% of the time that Alice told a lie, or if Bob simply copied Alice's response 20% of the time and otherwise told the truth, then the maths would be different.
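    A rough Monte Carlo sketch of the copying case (my framing, assuming the post's baseline where each friend tells the truth 80% of the time):

        import random

        def trial(copying_bob: bool) -> bool:
            truth = random.random() < 0.5                       # the yes/no fact
            alice = truth if random.random() < 0.8 else not truth
            if copying_bob:
                # Bob parrots Alice 20% of the time, otherwise tells the truth
                bob = alice if random.random() < 0.2 else truth
            else:
                # independent Bob, truthful 80% of the time
                bob = truth if random.random() < 0.8 else not truth
            # Strategy: go with the pair when they agree, with Bob when they disagree
            guess = alice if alice == bob else bob
            return guess == truth

        N = 1_000_000
        for copying in (False, True):
            wins = sum(trial(copying) for _ in range(N))
            print(f"copying_bob={copying}: {wins / N:.3f}")
        # independent Bob: ~0.80 -- no better than trusting Alice alone
        # copying Bob:     ~0.96 -- a disagreement now proves Bob told the truth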

  • gweinberg 1 hour ago
    Bob isn't giving you any actionable information. If Alice and Bob agree, you're more confident than you were before, but you're still going to trust Alice. If they disagree, you're down to 50% confidence, but you still might as well trust Alice.
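    The numbers, assuming (as in the post) each friend independently tells the truth 80% of the time:

        p = 0.8
        both_right, both_wrong = p * p, (1 - p) * (1 - p)    # 0.64, 0.04
        disagree = 2 * p * (1 - p)                           # 0.32

        print(both_right / (both_right + both_wrong))   # ~0.941: confidence when they agree
        print(p * (1 - p) / disagree)                   # 0.5: confidence in Alice when they disagree
        print(both_right + p * (1 - p))                 # 0.8: "always trust Alice" overall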
  • stared 14 minutes ago
    In voting, parity matters. In some cases the effects are interesting - like in the mafia/werewolf game, where adding a citizen/villager decreases their winning chance by sqrt(pi/2), vide https://arxiv.org/abs/1009.1031

    When it comes to the wisdom of crowds, see https://egtheory.wordpress.com/2014/01/30/two-heads-are-bett...
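    For the post's simpler majority-vote setting (not the mafia result), a quick sketch of the parity effect, assuming n independent friends who are each right 80% of the time and ties broken by a coin flip:

        from math import comb

        def majority_accuracy(n: int, p: float = 0.8) -> float:
            total = 0.0
            for k in range(n + 1):                       # k friends happen to be right
                prob = comb(n, k) * p**k * (1 - p)**(n - k)
                if 2 * k > n:
                    total += prob                        # clear majority is right
                elif 2 * k == n:
                    total += 0.5 * prob                  # tie: coin flip
            return total

        for n in range(1, 8):
            print(n, round(majority_accuracy(n), 4))
        # 1 0.8, 2 0.8, 3 0.896, 4 0.896, 5 0.9421, 6 0.9421, 7 0.9667
        # accuracy only improves when adding a friend makes n odd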

  • zahlman 26 minutes ago
    I imagined it like:

        F  T (Alice)
      F xx ????????
        xx ????????
    
      T ?? vvvvvvvv
        ?? vvvvvvvv
      ^ ?? vvvvvvvv
      B ?? vvvvvvvv
      o ?? vvvvvvvv
      b ?? vvvvvvvv
      v ?? vvvvvvvv
        ?? vvvvvvvv
    
    (where "F" marks the cases in which that person tells you a Falsehood, and "T" marks the cases in which they tell you the Truth)

    In the check-mark (v) region, you get the right answer regardless; they are both being truthful, and of course you trust them when they agree. Similarly you get the wrong answer regardless in the x region.

    In the ? region you are no better than a coin flip, regardless of your strategy. If you unconditionally trust Alice then you win on the right-hand side, and lose on the left-hand side; and whatever Bob says is irrelevant. The situation for unconditionally trusting Bob is symmetrical (of course it is; they both act according to the same rules, on the same information). If you choose any other strategy, you still have a 50-50 chance, since Alice and Bob disagree and there is no reason to choose one over the other.

    Since your odds don't change with your strategy in any of those regions of the probability space, they don't change overall.
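    Putting weights on those regions (80% truth rate for each, independent):

        pA = pB = 0.8
        v_region = pA * pB                        # 0.64 -- right no matter what
        x_region = (1 - pA) * (1 - pB)            # 0.04 -- wrong no matter what
        q_region = pA * (1 - pB) + (1 - pA) * pB  # 0.32 -- they disagree

        # Any strategy wins the v region, loses the x region, and wins half of the
        # ? region: the two disagreement sub-regions are equally likely, so any
        # tie-breaking rule picks the truthful friend half the time.
        print(v_region + 0.5 * q_region)          # 0.8 -- same as trusting Alice alone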

  • saila 6 minutes ago
    What happens if Bob lies to Alice 20% of the time and Alice lies to me 20% of the time but I only get input from Alice?
  • pavon 51 minutes ago
    It depends on what you are doing with the guess. If it is just a question of how frequently you are right or wrong, the second person doesn't help. But if you are, for example, betting on your guess, the second person significantly improves your odds of coming out ahead, since you can put down a higher wager when they agree than when they disagree.
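    A quick sketch of that (my example, not the post's): Kelly-style bet sizing at even odds, assuming each friend is right 80% of the time. Alone, your confidence is a flat 80%; with two friends you can bet heavily on agreement and sit out disagreements.

        from math import log

        def kelly_growth(p: float) -> float:
            """Expected log-growth per even-money bet at confidence p, Kelly-sized."""
            f = max(2 * p - 1, 0.0)               # Kelly fraction for even odds
            return 0.0 if f == 0 else p * log(1 + f) + (1 - p) * log(1 - f)

        p = 0.8
        p_agree = p * p / (p * p + (1 - p) * (1 - p))   # ~0.941: confidence when they agree
        pr_agree = p * p + (1 - p) * (1 - p)            # 0.68: how often they agree

        print(kelly_growth(p))                          # ~0.19 growth per round with one friend
        print(pr_agree * kelly_growth(p_agree))         # ~0.32 with two (no bet on disagreement)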
    • cortesoft 0 minutes ago
      The post discusses this exact point near the end.
  • sambaumann 1 hour ago
    I paused and wrote out all the probabilities and saw no way to improve beyond 80% - I scrolled down hoping to be proven wrong!
    • eieio 1 hour ago
      (I'm the author)

      I think there's an annoying thing where by saying "hey, here's this neat problem, what's the answer" I've made you much more likely to actually get the answer!

      What I really wanted to do was transfer the experience of writing a simulation for a related problem, observing this result, assuming I had a bug in my code, and then being delighted when I did the math. But unfortunately I don't know how to transfer that experience over the internet :(

      (to be clear, I'm totally happy you wrote out the probabilities and got it right! Just expressing something I was thinking about back when I wrote this blog)

  • nick238 31 minutes ago
    This kinda reminds me of error correction, where at some level you can have detectable but not correctable error conditions. Adding Bob is just like adding a parity bit: it can give you a good indication that someone lied, but it won't fix anything. Adding Charlie gives you the crudest form of ECC, a repetition code (though for storing one bit, I don't think you can do better?)
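    A rough sketch of that mapping (my framing): two copies of a bit detect a single flip but can't fix it; three copies (the simplest repetition code) correct it by majority vote.

        def detect(copies):                  # two copies: disagreement reveals an error
            return len(set(copies)) > 1

        def correct(copies):                 # three copies: majority vote outvotes one flip
            return int(sum(copies) > len(copies) / 2)

        print(detect([1, 0]))                # True  -- someone is wrong, but not who
        print(correct([1, 0, 1]))            # 1     -- the single flip is outvoted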
    • benlivengood 13 minutes ago
      I guess if you only need to store one bit, you could store either 0 or 11 and on average use fewer than two bits (for bit flips only), or 111 if you also have to worry about losing/duplicating bits.
  • layer8 55 minutes ago
    This gives me an idea of how to implement isEven() and isOdd() probabilistically.
  • dweez 34 minutes ago
    One way to think about this is that you have a binomial distribution with p=0.8 and n = number of lying friends. Each time you increase n, you shift the probability mass of the distribution "to the right", but if n is even, some of that mass has to land on the "tie" condition.

    I wrote a quick colab to help visualize this; it adds a little intuition for what's happening: https://colab.research.google.com/drive/1EytLeBfAoOAanVNFnWQ...
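    A quick check of the "mass lands on the tie" framing (p = 0.8, ties broken by a coin flip):

        from math import comb

        def split(n, p=0.8):
            win = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1) if 2 * k > n)
            tie = comb(n, n // 2) * p**(n // 2) * (1 - p)**(n - n // 2) if n % 2 == 0 else 0.0
            return win, tie

        for n in (1, 2, 3, 4):
            win, tie = split(n)
            print(f"n={n}: P(majority right)={win:.3f}  P(tie)={tie:.3f}  overall={win + 0.5 * tie:.3f}")
        # n=1: 0.800 / 0.000 / 0.800    n=2: 0.640 / 0.320 / 0.800
        # n=3: 0.896 / 0.000 / 0.896    n=4: 0.819 / 0.154 / 0.896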

  • millipede 1 hour ago
    Why not unconditionally trust Bob?
    • zahlman 1 hour ago
      You can, but trivially that strategy is also no better than unconditionally trusting Alice.
  • Jadiiee 10 minutes ago
    Better to stay away from lying friends, no?