# To decide between two candidates, long live the simple majority!

Heads or tails. Can mathematics shed some light on the twists of the American election? Imagine a population that votes for one of two candidates, and assume each voter picks one or the other by tossing a coin. When the polls close, the ballots are counted and the candidate with the most votes is elected. Now suppose that, during the count, the tellers make a few mistakes (or cheat), say by misreading one ballot in 10,000. What is the probability that these small errors distort the overall result, so that the other candidate is elected? It turns out that this probability is on the order of 6 in 1,000 (for the curious: 2/π times the square root of 1/10,000). Is that an acceptable risk in a democracy?
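As a quick sanity check, here is a minimal Monte Carlo sketch of this setup (coin-toss voters, independent misreadings). The function name and parameter choices are ours, not the article's, and the simulation uses a larger error rate than 1 in 10,000 so that it runs quickly:

```python
import math
import random

def flip_probability(n_voters, error_rate, trials, seed=0):
    """Estimate the probability that counting errors reverse a simple-majority
    result, when each voter votes by a fair coin toss and each ballot is
    independently misread with probability error_rate."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        votes = [rng.random() < 0.5 for _ in range(n_voters)]
        # A misread ballot is counted for the other candidate.
        noisy = [v != (rng.random() < error_rate) for v in votes]
        if (sum(votes) > n_voters / 2) != (sum(noisy) > n_voters / 2):
            flips += 1
    return flips / trials

# The article's approximation: 2/pi times the square root of the error rate.
print((2 / math.pi) * math.sqrt(1 / 10_000))  # about 0.0064, i.e. 6 in 1,000
print(flip_probability(501, 0.02, 3000, seed=1))
```

With an odd number of voters there are no ties, and the simulated frequency stays close to the 2/π·√(error rate) approximation.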


The American election works on two levels. Each state elects its representatives by majority vote, and they in turn elect the president. Assuming again a misreading rate of one in 10,000 (which seems reasonable when you see American ballots), what is the probability of skewing the final result? The existence of this second level makes things much worse: about one election in 20 would be distorted! That is far too much.
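The two quoted numbers compose in a back-of-the-envelope way: a state's majority flips with probability roughly 2/π·√ε, and the national majority of state results then flips with roughly the same formula applied to that new error rate. A sketch of this composition (the function name is ours):

```python
import math

def majority_flip(error_rate):
    """Asymptotic probability (quoted in the text) that a simple-majority
    outcome flips when each input is corrupted with the given error rate."""
    return (2 / math.pi) * math.sqrt(error_rate)

eps = 1 / 10_000
one_level = majority_flip(eps)        # direct popular vote: about 0.0064
two_level = majority_flip(one_level)  # majority of state majorities
print(one_level, two_level)           # the second is about 0.05, i.e. 1 in 20
```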

## “Noise sensitivity”

Of course, all of this rests on unrealistic assumptions and in no way supports Donald Trump’s fraud allegations! Assuming that voters choose by tossing a coin obviously makes no sense, even if one may be struck by the near-equality of the results in Georgia, for example. Still, it illustrates a phenomenon highlighted by mathematicians some twenty years ago: the “noise sensitivity” of various decision-making processes, which go well beyond elections. It concerns computer science, combinatorics, statistical physics, and the social sciences all at once. When a large number of “agents”, which may be human beings or neurons for example, hold “opinions”, which processes allow a stable collective decision to be reached? Stability means that we want the decision to be as insensitive as possible to noise, that is, to small errors we do not control.
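One way to make this notion concrete: measure, for a given decision rule, how often its verdict changes when each opinion is independently flipped with small probability. A hypothetical sketch (names and parameters are ours) comparing majority with parity, a famously noise-sensitive rule:

```python
import random

def noise_sensitivity(decide, n, error_rate, trials, seed=0):
    """Monte Carlo estimate of the probability that the decision changes when
    each of n coin-toss opinions is independently flipped with error_rate."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        x = [rng.random() < 0.5 for _ in range(n)]
        y = [v != (rng.random() < error_rate) for v in x]
        changed += decide(x) != decide(y)
    return changed / trials

def majority(bits):
    return sum(bits) > len(bits) / 2

def parity(bits):
    # A mathematical foil, not an electoral rule: the verdict flips
    # whenever any single opinion flips.
    return sum(bits) % 2 == 1

print(noise_sensitivity(majority, 101, 0.05, 2000, seed=1))
print(noise_sensitivity(parity, 101, 0.05, 2000, seed=2))
```

Under the same noise, the majority verdict changes only rarely, while the parity verdict changes about half the time: in this sense, simple majority is among the stable rules.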

One can imagine many electoral processes. For example, each district could elect a representative, who would then elect the representative of the city, who would elect a representative of the canton, then of the department, and so on. It would be a sort of sporting tournament in successive rounds, a bit like the American election but with many more levels. It turns out that this method is extremely sensitive to noise and should definitely be avoided: the smallest proportion of counting errors would lead to a very high probability of getting the final result wrong. That is unacceptable for a vote, but it is part of the charm of sports tournaments: it is not always the best who wins, and that is fine.
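Under the same coin-toss assumptions, this cascade can be computed exactly for the simplest tournament, a majority of three at every stage (our own formalisation, not the article's). If each sub-result flips with probability δ, a short case analysis over how many of the three inputs flip gives a next-stage flip probability of (3/2)δ(1−δ) + δ³, so small errors roughly multiply by 3/2 at each level:

```python
def next_level_flip(delta):
    """Probability that a majority-of-3 verdict flips when each of its three
    independent, coin-toss sub-results flips with probability delta: one
    pivotal flip, two flips of an agreeing pair, or all three flipped."""
    return 1.5 * delta * (1 - delta) + delta ** 3

delta = 1 / 10_000  # one counting error in 10,000 at the bottom level
for level in range(1, 31):
    delta = next_level_flip(delta)
    if level in (10, 20, 30):
        print(level, round(delta, 3))
```

Even starting from one error in 10,000, the flip probability climbs toward 1/2 as the levels accumulate: with enough rounds, the final outcome is close to a coin toss.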