Bayes craze, neural networks and uncertainty

Story, context and hype

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial/prior belief + new evidence/information = new/improved belief.

P(B|E) = P(B) × P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
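To make the formula concrete, here is a tiny Python sketch with made-up numbers: B is a rare condition, E is a positive screening test, and the prior, hit rate and false-alarm rate are all invented for illustration.

```python
# Toy illustration of Bayes' theorem with made-up numbers:
# B = "patient has a rare condition", E = "a screening test comes back positive".

p_b = 0.01              # P(B): prior probability of the belief/hypothesis
p_e_given_b = 0.95      # P(E|B): probability of the evidence if B is true
p_e_given_not_b = 0.05  # probability of the evidence if B is false

# P(E) via the law of total probability
p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)

# Bayes' theorem: P(B|E) = P(B) * P(E|B) / P(E)
p_b_given_e = p_b * p_e_given_b / p_e

print(f"P(B|E) = {p_b_given_e:.3f}")  # roughly 0.16
```

Even with a test that is right 95% of the time, the posterior P(B|E) stays modest because the prior P(B) is so small, which is exactly the kind of intuition the theorem is meant to formalise.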

In recent years, Bayes’ theorem has become ubiquitous in modern life and is applied in everything from physics to cancer research, psychology to ML spam algorithms. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defences of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that the Bayesian approach can distinguish science from pseudoscience more precisely than falsification, the method popularised by Karl Popper. Some even claim Bayesian machines might be so intelligent that they make humans “obsolete.”

Bayes going into AI/ML

Neural networks are all the rage in AI/ML. They learn tasks by analysing vast amounts of data and power everything from face recognition at Facebook to translation at Microsoft to search at Google. They’re beginning to help chatbots learn the art of conversation. And they’re part of the movement toward driverless cars and other autonomous machines. But because they can’t make sense of the world without help from such large amounts of carefully labelled data, they aren’t suited to everything. Induction is the prevalent approach in these learning methods, and they have difficulties dealing with uncertainty, with the probabilities of future occurrences of different types of data/events, and with “confident error” problems (being wrong with high confidence).

Additionally, AI researchers have limited insight into why neural networks make particular decisions. They are, in many ways, black boxes. This opacity could cause serious problems: What if a self-driving car runs someone down?

Regular/standard neural networks are bad at calculating uncertainty. However, there is a recent trend of bringing Bayes (and other alternative methodologies) into this game too. AI researchers, including those working on Google’s self-driving cars, have started employing Bayesian software to help machines recognise patterns and make decisions.

Gamalon, an AI startup that went live earlier in 2017, touts a new type of AI that requires only small amounts of training data – its secret sauce is Bayesian Program Synthesis.

Rebellion Research, founded by the grandson of baseball great Hank Greenberg, relies upon a form of ML called Bayesian networks, using a handful of machines to predict market trends and pinpoint particular trades.

There are many more examples.

The dark side of Bayesian inference

The most notable pitfall of the Bayesian approach is the calculation of prior probability. In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into calculations. Some prior probabilities are unknown or don’t even exist, as with multiverses, inflation or God. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

In 1997, Microsoft launched its animated MS Office assistant Clippit, which was conceived to work on a Bayesian inference system but failed miserably.

In law courts, Bayesian principles may lead to serious miscarriages of justice (see the prosecutor’s fallacy). In a famous example from the UK, Sally Clark was wrongly convicted in 1999 of murdering her two children. Prosecutors had argued that the probability of two babies dying of natural causes (the prior probability that she is innocent of both charges) was so low – one in 73 million – that she must have murdered them. But they failed to take into account that the probability of a mother killing both of her children (the prior probability that she is guilty of both charges) was also incredibly low.

So the relative prior probabilities that she was totally innocent or a double murderer were more similar than initially argued. Clark was later cleared on appeal with the appeal court judges criticising the use of the statistic in the original trial. Here is another such case.
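A rough numerical sketch of the point: the one-in-73-million figure is the one quoted at trial, while the prior for a double murder is a made-up assumption used only to show the comparison, and the evidence is treated as equally consistent with either explanation.

```python
# Illustrative only: the 1-in-73-million figure was quoted at trial; the
# double-murder prior is an assumed number, used just to show the comparison.
p_two_cot_deaths = 1 / 73_000_000    # prior for two natural (cot) deaths, as argued at trial
p_double_murder = 1 / 100_000_000    # assumed prior for a mother murdering both children

# If the evidence (two dead children) is equally consistent with either
# explanation, the posterior comparison reduces to comparing the two priors.
p_innocent_given_deaths = p_two_cot_deaths / (p_two_cot_deaths + p_double_murder)
print(f"P(innocent | two deaths) ≈ {p_innocent_given_deaths:.2f}")  # ≈ 0.58, not 1 in 73 million
```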

Alternative, complementary approaches

In actual practice, the method of evaluation most scientists/experts use most of the time is a variant of a technique proposed by Ronald Fisher in the early 1900s. In this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed 95% or 99% of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
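As a minimal sketch of that workflow (the data and the 5% threshold here are purely illustrative, using scipy’s one-sample t-test as the significance test):

```python
# Minimal sketch of a Fisher-style significance test using a one-sample t-test.
# The data and the 0.05 threshold are illustrative, not from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
measurements = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical experimental data

# Null hypothesis: the true mean is 0 (i.e. the data could be "random noise").
t_stat, p_value = stats.ttest_1samp(measurements, popmean=0.0)

# The result is deemed "significant" if data this extreme would arise
# less than 5% of the time under the null hypothesis.
print(f"p-value = {p_value:.4f}, significant at 5%: {p_value < 0.05}")
```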

As many AI/ML algorithms automate their optimisation and learning processes, careful Gaussian process modelling, including the choice of kernel and the treatment of its hyper-parameters, can play a crucial role in obtaining a good optimiser that can achieve expert-level performance.
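A minimal sketch of the Gaussian-process side of such an optimiser, using scikit-learn’s GaussianProcessRegressor; the objective function, kernel choice and test points are assumptions for illustration.

```python
# Minimal Gaussian-process regression sketch (scikit-learn), of the kind used
# inside Bayesian optimisers. The objective and kernel choice are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def objective(x):
    # Hypothetical expensive black-box function we want to model.
    return np.sin(3 * x) + 0.5 * x

X_train = np.array([[0.2], [0.9], [1.7], [2.5]])       # few evaluated points
y_train = objective(X_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)   # the kernel choice matters
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Posterior mean and standard deviation: the uncertainty estimate is what
# an acquisition function would use to decide where to evaluate next.
X_test = np.linspace(0, 3, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
for x, m, s in zip(X_test.ravel(), mean, std):
    print(f"x = {x:.2f}: mean = {m:.2f}, std = {s:.2f}")
```

In a full Bayesian optimiser, an acquisition function (e.g. expected improvement) would combine the posterior mean and standard deviation to pick the next point to evaluate.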

Dropout, a technique that has been in use for several years in deep learning to address the over-fitting problem, also enables uncertainty estimates by approximating those of a Gaussian process. The Gaussian process is a powerful tool in statistics that allows us to model distributions over functions and has been applied in both the supervised and unsupervised domains, for both regression and classification tasks. It offers nice properties such as uncertainty estimates over function values, robustness to over-fitting, and principled ways of hyper-parameter tuning.
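A minimal sketch of that idea, often called Monte Carlo dropout, in Keras: the toy data and architecture are assumptions, and the key move is keeping dropout active at prediction time and reading the spread of repeated forward passes as an uncertainty estimate.

```python
# Minimal Monte Carlo dropout sketch in Keras: the architecture and data are
# illustrative; the point is that dropout stays active at prediction time.
import numpy as np
import tensorflow as tf

# Toy regression data.
X = np.linspace(-1, 1, 200).reshape(-1, 1).astype("float32")
y = (X ** 3 + 0.05 * np.random.randn(*X.shape)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

# Monte Carlo dropout: run several stochastic forward passes (training=True keeps
# dropout on) and treat the spread of the predictions as an uncertainty estimate.
x_new = np.array([[0.5]], dtype="float32")
samples = np.stack([model(x_new, training=True).numpy() for _ in range(100)])
print(f"mean = {samples.mean():.3f}, std = {samples.std():.3f}")
```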

Google’s Project Loon uses Gaussian processes (together with reinforcement learning) for its navigation.