Reinforcement learning and its new frontiers

RL’s origins and historic context

RL copies a very simple principle from nature. The psychologist Edward Thorndike documented it more than 100 years ago. Thorndike placed cats inside boxes from which they could escape only by pressing a lever. After a considerable amount of pacing around and meowing, the animals would eventually step on the lever by chance. Once they learned to associate this behaviour with the desired outcome, they escaped with increasing speed.

Some of the earliest AI researchers believed that this process might be usefully reproduced in machines. In 1951, Marvin Minsky, a student at Harvard who would become one of the founding fathers of AI, built a machine that used a simple form of reinforcement learning to mimic a rat learning to navigate a maze. Minsky’s Stochastic Neural Analogy Reinforcement Computer (SNARC) consisted of dozens of tubes, motors, and clutches that simulated the behaviour of 40 neurons and synapses. As a simulated rat made its way out of a virtual maze, the strength of some synaptic connections would increase, thereby reinforcing the underlying behaviour.

There were few successes over the next few decades. In 1992, Gerald Tesauro demonstrated a program that used the technique to play backgammon. It became skilled enough to rival the best human players, a landmark achievement in AI. But RL proved difficult to scale to more complex problems.

In March 2016, however, AlphaGo, a program trained using RL, won against one of the best Go players of all time, South Korea’s Lee Sedol. This milestone reopened the floodgates of RL research. It turns out that the key to strong RL is to combine it with deep learning.

Current usage and major methods of RL

Thanks to current RL research, computers can now learn to play Atari games on their own and beat world champions at Go, simulated quadrupeds are learning to run and leap, and robots are learning to perform complex manipulation tasks that defy explicit programming.

However, while RL’s advances have accelerated, progress has been driven less by new ideas or additional research than by more data, processing power and infrastructure. In general, there are four separate factors that hold back AI:

  1. Processing power (the obvious one: Moore’s Law, GPUs, ASICs),
  2. Data (in a specific form, not just somewhere on the internet – e.g. ImageNet),
  3. Algorithms (research and ideas, e.g. backprop, CNN, LSTM), and
  4. Infrastructure (Linux, TCP/IP, Git, AWS, TensorFlow,..).

The same pattern holds for RL as it did for computer vision: the 2012 AlexNet was mostly a deeper and wider version of 1990s Convolutional Neural Networks (CNNs). Likewise, the Atari Deep Q-Learning work is an implementation of the standard Q-Learning algorithm with function approximation, where the function approximator is a CNN, and AlphaGo uses Policy Gradients combined with Monte Carlo tree search (MCTS).
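To make the “standard Q-Learning with function approximation” recipe concrete, here is a minimal sketch in Python/NumPy. It swaps the CNN used in the Atari work for a simple linear approximator over state features; the feature and action sizes are illustrative, and the environment interface is assumed rather than the DeepMind implementation.

```python
import numpy as np

# Minimal sketch: Q-learning with a linear function approximator.
# The Atari DQN uses a CNN in place of this linear model; the TD target
# and epsilon-greedy exploration follow the same recipe.
n_features, n_actions = 8, 4            # illustrative sizes
W = np.zeros((n_actions, n_features))   # one weight vector per action
alpha, gamma, epsilon = 0.01, 0.99, 0.1

def q_values(state):
    return W @ state                    # Q(s, a) for every action a

def act(state):
    if np.random.rand() < epsilon:      # explore occasionally
        return np.random.randint(n_actions)
    return int(np.argmax(q_values(state)))

def update(state, action, reward, next_state, done):
    target = reward if done else reward + gamma * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    W[action] += alpha * td_error * state   # semi-gradient TD update
```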

RL’s most popular method vs. human learning

Generally, RL approaches can be divided into two core categories. The first focuses on finding optimal mappings that perform well on the problem of interest; genetic algorithms, genetic programming and simulated annealing have commonly been employed in this class of RL approaches. The second estimates the utility function of taking an action in a given state via statistical techniques or dynamic programming methods, such as TD(λ) and Q-learning. To date, RL has been successfully applied in many complex real-world applications, including autonomous helicopter flight, humanoid robotics, autonomous vehicles, etc.

Policy Gradients (PGs), one of RL’s most used methods, have been shown to work better than Q-Learning when tuned well. PGs are preferred because there is an explicit policy and a principled approach that directly optimises the expected reward.
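As a concrete illustration of “directly optimising the expected reward”, below is a minimal REINFORCE-style policy gradient sketch in NumPy on a toy two-armed bandit. The environment and reward probabilities are invented for illustration; real applications (Atari, Pong) use neural network policies and full episode rollouts.

```python
import numpy as np

# REINFORCE on a toy 2-armed bandit: action 1 pays off more often,
# so the softmax policy should learn to prefer it.
rng = np.random.default_rng(0)
theta = np.zeros(2)        # logits of the softmax policy
lr, baseline = 0.1, 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    reward = float(rng.random() < (0.8 if a == 1 else 0.2))  # toy reward signal

    grad_logp = -probs          # gradient of log pi(a) w.r.t. the logits
    grad_logp[a] += 1.0

    baseline = 0.99 * baseline + 0.01 * reward   # running baseline lowers variance
    theta += lr * grad_logp * (reward - baseline)

print(softmax(theta))   # probability mass concentrates on the better action
```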

Before reaching for PGs (the cannon), it is recommended to first try the cross-entropy method (CEM) (the normal gun), a simple stochastic hill-climbing “guess and check” approach inspired loosely by evolution. And if you really need or insist on using PGs for your problem, use a variation called TRPO, which usually works better and more consistently than vanilla PG in practice. The main idea is to avoid parameter updates that change the policy too dramatically, enforced by a constraint on the KL divergence between the distributions predicted by the old and the new policies on the data.
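A minimal sketch of the cross-entropy method mentioned above: sample candidate policy parameters from a Gaussian, keep the best-scoring fraction, refit the Gaussian to those elites, and repeat. The scoring function here is a stand-in for “run the policy in the environment and return the total reward”.

```python
import numpy as np

# Cross-entropy method (CEM): stochastic "guess and check" over policy parameters.
rng = np.random.default_rng(0)

def score(params):
    # Stand-in for the total episode reward of a policy with these parameters.
    return -np.sum((params - np.array([1.0, -2.0, 0.5])) ** 2)

dim, pop_size, elite_frac, iters = 3, 50, 0.2, 30
mean, std = np.zeros(dim), np.ones(dim)
n_elite = int(pop_size * elite_frac)

for _ in range(iters):
    samples = rng.normal(mean, std, size=(pop_size, dim))      # guess
    scores = np.array([score(s) for s in samples])             # check
    elite = samples[np.argsort(scores)[-n_elite:]]             # keep the best
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3   # refit the sampler

print(mean)   # converges towards the best-scoring parameters
```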

PGs, however, have a few disadvantages: they typically converge to a local rather than a global optimum, and they tend to be inefficient and high-variance when evaluating a policy. PGs also require a lot of training samples, take a long time to train, and are hard to debug when they don’t work.

PG is a fancy form of guess-and-check, where the “guess” refers to sampling rollouts from the current policy and encouraging actions that lead to good outcomes. This represents the state of the art in how we currently approach RL problems. But compare that to how a human might learn (e.g. a game of Pong). You show them the game and say something along the lines of “You’re in control of a paddle and you can move it up or down, and your goal is to bounce the ball past the other player”, and you’re set and ready to go. Notice some of the differences:

  • Humans communicate the task/goal in a language (e.g. English), but in a standard RL case, you assume an arbitrary reward function that you have to discover through environment interactions. It can be argued that if a human went into a game without knowing anything about the reward function, the human would have a lot of difficulty learning what to do but PGs would be indifferent, and likely work much better.
  • A human brings in a huge amount of prior knowledge, such as elementary physics (concepts of gravity, constant velocity,..) and intuitive psychology. Humans also understand the concept of being “in control” of a paddle, and that it responds to UP/DOWN key commands. In contrast, our algorithms start from scratch, which is simultaneously impressive (because it works) and depressing (because we lack concrete ideas for how not to).
  • PGs are a brute force solution, where the correct actions are eventually discovered and internalised into a policy. Humans build a rich, abstract model and plan within it.
  • PGs have to actually experience a positive reward, and experience it very often in order to eventually shift the policy parameters towards repeating moves that give high rewards. On the other hand, humans can figure out what is likely to give rewards without ever actually experiencing the rewarding or unrewarding transition.

In games/situations with frequent reward signals that require precise play, fast reflexes, and not much planning, PGs can quite easily beat humans. So once you understand the “trick” by which these algorithms work, you can reason about their strengths and weaknesses.

PGs don’t easily scale to settings where huge amounts of exploration are difficult to obtain. Instead of requiring samples from a stochastic policy and encouraging the ones that get higher scores, deterministic policy gradients use a deterministic policy and get the gradient information directly from a second network (called a critic) that models the score function. This approach can in principle be much more efficient in settings with high-dimensional actions where sampling provides poor coverage, but so far it seems empirically slightly finicky to get working.
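A compact PyTorch sketch of that actor/critic idea, under the assumption that a trained critic is available: the deterministic actor is nudged in the direction that raises the critic’s score Q(s, actor(s)). Network sizes and the batch of states are placeholders, and the replay buffer, target networks and critic training of a full DDPG-style implementation are omitted.

```python
import torch
import torch.nn as nn

# Deterministic policy gradient sketch (illustrative sizes, no replay buffer).
state_dim, action_dim = 8, 2
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

states = torch.randn(32, state_dim)    # placeholder batch of observed states

# The critic's gradient flows back through the chosen action into the actor.
actor_loss = -critic(torch.cat([states, actor(states)], dim=1)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
# In a full algorithm the critic is trained in parallel with TD targets.
```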

There is also a line of work that tries to make the search process less hopeless by adding additional supervision. In many practical cases, for instance, one can obtain expert trajectories from a human. For example, AlphaGo first uses supervised learning to predict human moves from expert Go games, and the resulting human-mimicking policy is later fine-tuned with PGs on the “real” goal of winning the game.

RL’s new frontiers: MAS, PTL, evolution, memetics and eTL

There is another method called Parallel Transfer Learning (PTL), which aims to optimise RL in multi-agent systems (MAS). MAS are computer systems composed of many interacting, autonomous agents within an environment of interest, used for problem solving. MAS have a wide array of applications in industrial and scientific fields, such as resource management and computer games.

In MAS, as agents interact with and learn from one another, the challenge is to identify suitable source tasks from multiple agents that contain mutually useful information to transfer. In conventional MAS (cMAS), which are suited to simple environments, the actions of each agent are pre-defined for the possible states of the environment. Standard RL methodologies have been used as the learning processes of cMAS agents through trial-and-error interactions with a dynamic environment.

In PTL, each agent broadcasts its knowledge to all other agents while deciding whose knowledge to accept based on the rewards received from other agents versus the expected rewards it predicts. Nevertheless, agents in this approach tend to infer incorrect actions in unseen circumstances or complex environments.

However, for more complex or changing environments, it is necessary to endow the agents with intelligence capable of adapting to the environment’s dynamics. A complex environment, almost by definition, implies complex interactions and necessitates learning by the MAS, which current RL methodologies are hard-pressed to deliver. The more recent machine learning paradigm of Transfer Learning (TL) was introduced as an approach for leveraging valuable knowledge from related and well-studied problem domains to enhance the problem-solving abilities of MAS in complex environments. Since then, TL has been successfully used to enhance RL tasks via methodologies such as instance transfer, action-value transfer, feature transfer and advice exchanging (AE).

Most RL systems aim to train a single agent or a cMAS. The Evolutionary Transfer Learning framework (eTL) aims to develop intelligent, social agents capable of adapting to the dynamic environments of MAS and of solving problems more efficiently. It is inspired by Darwin’s theory of evolution (natural selection plus random variation) and by the principles that govern the evolutionary knowledge transfer process. eTL constructs social selection mechanisms modelled after the principles of human evolution. It mimics natural learning and the errors introduced by the physiological limits of agents’ ability to perceive differences, thus generating “growth” and “variation” of the knowledge that agents hold, and thereby exhibiting higher adaptability for complex problem solving. The essential backbone of eTL is the memetic automaton, which includes evolutionary mechanisms such as meme representation, meme expression, etc.

Memetics

 

The term “meme” can be traced back to Dawkins’ “The Selfish Gene”, where he defined it as “a unit of information residing in the brain and is the replicator in human cultural evolution.” For the past few decades, the meme-inspired science of Memetics has attracted increasing attention in fields including anthropology, biology, psychology, sociology and computer science. In particular, one of the most direct and simplest applications in computer science for problem solving has been the memetic algorithm. Further research on meme-inspired computational models resulted in the concept of the memetic automaton, which integrates memes into units of domain information useful for problem solving. Recently, memes have been defined as transformation matrices that can be reused across different problem domains to enhance evolutionary search. Just as genes serve as “instructions for building proteins”, memes carry “behavioural instructions,” constructing models for problem solving.

 

Memetics in eTL

 

Meme representation and meme evolution form the two core aspects of eTL. A meme then undergoes meme expression and meme assimilation. Meme representation concerns what a meme is, meme expression is how an agent expresses its stored memes as behavioural actions, and meme assimilation captures new memes by translating the corresponding behaviours into knowledge that blends into the agent’s mind-universe. The meme evolution processes (i.e. meme internal and meme external evolution) comprise the main behavioural learning aspects of eTL. To be specific, meme internal evolution denotes the process by which agents update their mind-universe via self-learning or personal grooming. In eTL, all agents undergo meme internal evolution by exploring the common environment simultaneously. During meme internal evolution, meme external evolution may also occur, modelling the social interaction among agents mainly via imitation, which takes place when memes are transmitted. Meme external evolution happens whenever the current agent identifies a suitable teacher agent via a meme selection process. Once the teacher agent is selected, meme transmission occurs, governing how the agent imitates others. During this process, meme variation facilitates knowledge transfer among agents. Upon receiving feedback from the environment after performing an action, the agent then updates its mind-universe accordingly.

 

eTL implementation with learning agents

 

There are two implementations of learning agents that take the form of neurally-inspired learning structures, namely FALCON and a BP (backpropagation) multilayer neural network. Specifically, FALCON is a natural extension of self-organising neural models proposed for real-time RL, while BP is a classical multi-layer network that has been widely used in various learning systems.
  1. MASs with TL vs. MAS without TL: Most TL approaches outperform cMAS. This is due to TL endowing agents with capacities to benefit from the knowledge transferred from the better performing agents, thus accelerating the learning rate of the agents in solving the complex task more efficiently and effectively.
  2. eTL vs. PTL and other TL approaches: FALCON and BP agents with eTL outperform PTL and other TL approaches because, when deciding whether to accept information broadcast by others, agents in PTL tend to make incorrect predictions in previously unseen circumstances. Further, eTL also demonstrates superiority in attaining higher success rates than all AE models thanks to eTL’s meme selection operator, which fuses the “imitate-from-elitist” and “like-attracts-like” principles so as to give agents the option of choosing more reliable teacher agents than under the AE model.

Conclusions

While the popularisation of RL can be traced back to Edward Thorndike and Marvin Minsky, it is inspired by nature and has been with us humans since ages long gone. It is how we effectively teach children, and how we now want to teach our computer systems, whether real (neural networks) or simulated (MAS).

RL re-entered public consciousness and rekindled our interest in 2016, when AlphaGo beat Go champion Lee Sedol. Via its currently successful PGs, DQNs and other methodologies, RL has already contributed to, and continues to accelerate, make more intelligent and optimise, humanoid robotics, autonomous vehicles, hedge funds, and other endeavours, industries and aspects of human life.

However, what is it that optimises or accelerates RL itself? Its new frontiers are PTL, Memetics and the holistic eTL methodology inspired by natural evolution and the spreading of memes. This latter evolutionary (and revolutionary!) approach is governed by several meme-inspired evolutionary operators (implemented using FALCON and BP multi-layer neural networks), including the meme evolutions.

In terms of performance, eTL appears to outperform even state-of-the-art MAS TL systems such as PTL.

What future does RL hold? We don’t know. But the amount of research resources, experimentation and imaginative thinking will surely not disappoint us.

Bitcoin, ICOs, Mississippi Bubble and crypto future

Bitcoin bubble

Bitcoin has risen 10x in value so far in 2017, the largest gain of all asset classes, prompting sceptics to declare it a classic speculative bubble that could burst, like the dotcom boom and the US sub-prime housing crash that triggered the global financial crisis. Stocks in the dotcom crash were worth $2.9tn before collapsing in 2000, whereas the market cap of bitcoin currently (as of 03.12.2017) stands at $185bn, which could signal there is more room for the bubble to grow.

 

Many financiers and corporate stars think there is both a bubble and a huge opportunity. One of the biggest bitcoin bulls on Wall Street, Mike Novogratz, thinks cryptocurrencies are in a massive bubble (but anticipates Bitcoin reaching $40,000 by the end of 2018). Ironically (or not), he is launching a $500 million fund, Galaxy Digital Assets Fund, to invest in them, signalling a growing acceptance of cryptocurrencies as legitimate investments. John McAfee has doubled down on his confidence in bitcoin by stating his belief that it will be worth $1 million by the end of 2020.

 

Former Fed Chairman Alan Greenspan has said that “you have to really stretch your imagination to infer what the intrinsic value of bitcoin is,” calling the cryptocurrency a “bubble.” Even financial heavyweights such as CME, the world’s leading derivatives marketplace, are planning to tap into this gold rush by introducing bitcoin derivatives, which will let hedge funds into the market before the end of 2017.

 

The practical applications of cryptocurrencies for facilitating legal commerce appear hampered, at this juncture, by relatively expensive transaction fees and the skyrocketing energy costs associated with mining. On this note, Nobel Prize-winning economist Joseph Stiglitz thinks that bitcoin “ought to be outlawed” because it doesn’t serve any socially useful function and yet consumes enormous resources.

Bitcoin mania has many parallels with Mississippi Bubble

Bitcoin’s boom has gone further than famous market manias of the past like the tulip craze or the South Sea Bubble, and has lasted longer than the dancing epidemic that struck 16th-century France or the recent dot-com bubble that burst in 2000. Like many other such events, the South Sea Bubble was ultimately a scheme: no real-economy trade could reasonably take place, yet the company’s stock kept rising on promotion and the hope of investors.

 

In my view, a more illustrative example, with many parallels to Bitcoin, is the Mississippi Bubble, which started in 1716. Not only was the Mississippi Bubble bigger than the South Sea Bubble, it was more speculative and more successful. It completely wiped out the French government’s debt obligations at the expense of those who fell under the sway of John Law’s economic innovations.

 

Its origins trace back to 1684, when the Compagnie du Mississippi (Mississippi Company) was chartered. In August 1717, Scottish businessman and economist John Law acquired a controlling interest in the then-derelict Mississippi Company and renamed it the Compagnie d’Occident. The company’s initial goal was to trade and do business with the French colonies in North America, which included most of the Mississippi River drainage basin and the French colony of Louisiana. Law was granted a 25-year monopoly by the French government on trade with the West Indies and North America. In 1719, the company acquired many French trading companies and combined them into the Compagnie Perpetuelle des Indes (CPdI). In 1720, it acquired the Banque Royale, which had been founded by John Law himself as the Banque Generale (forerunner of France’s first central bank) in 1716.

 

Law then created speculative interest in CPdI. Reports were skillfully spread of gold and silver mines discovered in these lands. Law exaggerated the wealth of Louisiana with an effective marketing scheme, which led to wild speculation in the shares of the company in 1719. Law had promised Louis XV that he would extinguish the public debt. To keep his word, he required that shares in CPdI be paid for one-fourth in coin and three-fourths in billets d’Etat (public securities), which rapidly rose in value on account of the artificial demand created for them. The speculation was further fed by the huge increase in the money supply (more money was printed to meet the growing demand) introduced by Law, who, as Controller General of Finances, was the equivalent of France’s finance minister, in order to ‘stimulate’ the economy.

 

CPdI’s shares traded at around 300 at the end of 1718 but rose rapidly in 1719, reaching 1,000 by July 1719 and breaking 10,000 in November 1719 – an increase of over 3,000% in less than one year. CPdI shares stayed at the 9,000 level until May 1720, when they fell to around 5,000. By the spring of 1720, more than 2 billion livres of banknotes had been issued, a near doubling of the money supply in less than a year. By then, Law’s system had exploded – the stock-market bubble burst, confidence in banknotes evaporated and the French currency collapsed. The company sought bankruptcy protection in 1721. It was reorganised and reopened for business in 1722. Law, however, was forced into exile in late 1720 and died in 1729. At its height, the capitalisation of CPdI was greater than either the GDP of France or all French government debt.

Why did Law fail? He was over-ambitious and over-hasty (like this Bitcoin pioneer?). He believed that France suffered from a dearth of money and an incumbent financial system (Bitcoin enthusiasts claim it will revolutionise economies, and that countries like India are ideal for it), and that an increase in the money supply would boost economic activity (Bitcoin aims to implement a variant of Milton Friedman’s k-percent rule: a proposal to fix the annual growth rate of the money supply at a constant rate). He believed that printing and distributing more money would lower interest rates, enrich traders, and offer more employment to people. His conceptual flaw was his belief that money and financial assets were freely interchangeable – and that he could set the price of stocks and bonds in terms of money.

Law’s aim was to replace gold and silver with a paper currency (just like how Bitcoiners want to democratise/replace fiat money and eliminate banks). This plan was forced upon the French public – Law decreed that all large financial transactions were to be conducted in banknotes. The holding of bullion was declared illegal – even jewelry was confiscated. He recommended setting up a national bank (Banque Generale in 1716), which could issue notes to buy up the government’s debt, and thus bring about a decline in the interest rate.

During both South Sea and Mississippi bubbles, speculation was rampant and all manner of initial stock offerings were being floated, including:

  • For settling the island of Blanco and Sal Tartagus
  • For the importation of Flanders Lace
  • For trading in hair
  • For breeding horses

Some of these made sense, but many more were absurd.

Economic value and price fluctuations of Bitcoin

Bitcoin is similar to other currencies and commodities such as gold, oil, potatoes or even tulips in that its intrinsic value is difficult – if not impossible – to separate from its price.

A currency has three main functions: store of value, means of exchange and unit of account. Bitcoin’s volatility, seen when it fell 20% within minutes on November 29th 2017 before rebounding, makes it both a nerve-racking store of value and a poor means of exchange. A currency is also a unit of account for debt. As an example, if you had financed your house with a Bitcoin mortgage, in 2017 your debt would have risen 10x. Your salary, paid in dollars, would not have kept pace. Put another way, had Bitcoin been widely used, 2017 might have been massively deflationary.

But why has the price risen so fast? One justification for the existence of Bitcoin is that central banks, via quantitative easing (QE), are debasing fiat money and laying the path to hyperinflation. But this seems a very odd moment for that view to gain adherents. Inflation remains low and the Fed is pushing up interest rates and unwinding QE.

A more likely explanation is that as new and easier ways to trade in Bitcoin become available, more investors are willing to take the plunge. As the supply of Bitcoin is limited by design, that drives up the price.

With fiat currencies, there are governments standing behind them and reliable currency markets for exchange. With commodities, investors have something to hold at the end of the transaction. Bitcoin is more speculative because it is digital ephemera: there is nothing tangible to hold. That isn’t true of all investments. Stockholders are entitled to a share of a company’s assets, earnings and dividends, the value of which can be estimated independently of the stock’s price. The same can be said of a bond’s payments of principal and interest.

This distinction between price and value is what allowed many observers to warn that internet stocks were absurdly priced in the late 1990s, or that mortgage bonds weren’t as safe as investors assumed during the housing bubble. A similar warning about Bitcoin isn’t possible.

What about Initial Coin Offerings (ICOs)? An ICO (in almost all jurisdictions so far) is an unregulated means of raising capital for a new venture, bypassing traditional fund-raising methods. Afraid of missing out on the next big thing, people are willing to hand their money over no matter how thin the premise, very much as in the South Sea or Mississippi Bubbles. ICOs closely resemble penny-stock trading, with pump-and-dump schemes, thin disclosures and hot money pouring in and out of stocks.

ICOs, while an alternative financing scheme for startups, have not so far proven sustainable for business. Despite the fact that more than 200 ICOs have raised more than $3 billion so far in 2017, only 1 in 10 tokens is in use after the ICO. And the killer app for Ethereum, the most popular public blockchain platform and host to an increasing number of ICOs? The first such ecosystem, a game for trading virtual kittens, has been launched and has almost crashed the Ethereum network – this game alone consumes 15% of Ethereum traffic, and even then it is hard to play because of its slowness (thanks Markus for this info bite!).

So overall, Bitcoin (and other cryptocurrencies) exist mainly for the benefit of those who buy, hold and use them while creating an explicit economic programme of counter-economics. In other words, Bitcoin is not so much about money as about power.

How it all may end (or begin)

The South Sea Bubble ended when the English government enacted laws to stop the excessive offerings. The Mississippi Bubble ended when the French currency collapsed and the French government bought back (and ultimately wrote off, via QE-like measures) all of CPdI’s shares and cast out the instigators. The unregulated markets became regulated.

From a legal perspective, most likely the same will happen to cryptocurrencies and ICOs. China has temporarily banned cryptocurrency exchanges until regulations can be introduced. Singapore, Malaysia and other governments plan to introduce regulations by the end of 2017 or early 2018. Disregard, ignorance or flouting of regulatory and other government-imposed rules can be mortal for startups and big businesses alike.

From a technology perspective, a number of factors – hard forks, ledger and wallet hacking, and sheer limitations related to scaling, energy consumption and security – might bring it down. Also, many misconceptions about blockchain/Bitcoin, such as claims that a blockchain is everlasting and indestructible, that miners provide security, or that anonymity is universally a good thing, are exaggerated, only sometimes true or patently untrue.

From a business perspective, startups and companies raising money via ICOs can be subject to fraud – Goldman Sachs’ CEO claims Bitcoin is a suitable means for conducting fraud – and thus to money-laundering, counter-terrorism and other relevant government legislation. From an investor’s perspective, shorting seems the most sure-fire way of investing profitably in cryptocurrencies.

During the dot-com craze, Warren Buffett was asked why he didn’t invest in technology. He famously answered that he didn’t understand tech stocks. But what he meant was that no one understood them, and he was right. Why else would anyone buy the NASDAQ 100 Index when its P/E ratio was more than 500x – a laughably low earnings yield of 0.2% – which is where it traded at the height of the bubble in March 2000.

It’s a social or anthropological phenomenon that’s reminiscent of how different tribes and cultures view the concept of money, from whale’s teeth to abstract social debts. How many other markets have spawned conceptual art about the slaying of a “bearwhale”?

Still, the overall excitement around Bitcoin shows that it has tapped into a speculative urge, one that isn’t looking to be reassured by dividends, business plans, cash flows, or use cases. Highlighting a big, round number like $10,000 only speaks to our emotional reaction to big, round numbers. But it doesn’t explain away the risk of this one day falling to the biggest, roundest number of all – zero.

How AI systems learn: approaches and concepts

As you know, the goal of AI learning is generalisation, but one major issue is that data alone is never enough, no matter how much of it is available. AI systems need data, and they need to learn from that data, in order to generalise.

So let’s look at how AI systems learn. But before we do that, what are the prevalent AI approaches?

Neural networks model a brain learning by example―given a set of right answers, a neural network learns the general patterns. Reinforcement Learning models a brain learning by experience―given some set of actions and an eventual reward or punishment, it learns which actions are ‘good’ or ‘bad,’ as relevant in context. Genetic Algorithms model evolution by natural selection―given some set of agents, let the better ones live and the worse ones die.

Usually, genetic algorithms do not allow agents to learn during their lifetimes, while neural networks allow agents to learn only during their lifetimes. Reinforcement learning allows agents to learn during their lifetimes and share knowledge with other agents.

Consider learning a Boolean function of (say) 100 variables from a million examples. There are 2^100 - 10^6 examples whose classes you don’t know. How do you figure out what those classes are? In the absence of further information, there is no way to do this that beats flipping a coin. This observation was first made (in somewhat different form) by David Hume over 200 years ago, but even today many mistakes in ML stem from failing to appreciate it. Every learner must embody some knowledge or assumptions beyond the data it is given in order to generalise beyond it.
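A quick back-of-the-envelope calculation makes the point: a million labelled examples barely scratch the space of 2^100 possible inputs.

```python
# A million examples versus the 2**100 possible inputs of a 100-variable Boolean function.
total = 2 ** 100
seen = 10 ** 6
print(total - seen)   # examples whose classes remain unknown
print(seen / total)   # fraction of the input space covered: roughly 8e-25
```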

This seems like rather depressing news. How then can we ever hope to learn anything? Luckily, the functions we want to learn in the real world are not drawn uniformly from the set of all mathematically possible functions. In fact, very general assumptions—like similar examples having similar classes, limited dependences, or limited complexity—are often enough to do quite well, and this is a large part of why ML has been so successful to date.

AI systems use induction, deduction, abduction and other methodologies to collect, analyse and learn from data, allowing generalisation to happen.

Like deduction, induction (what learners do) is a knowledge lever: it turns a small amount of input knowledge into a large amount of output knowledge. Induction (despite its limitations) is a more powerful lever than deduction, requiring much less input knowledge to produce useful results, but it still needs more than zero input knowledge to work.

Abduction is sometimes used to identify faults and revise knowledge based on empirical data. For each individual positive example that is not derivable from the current theory, abduction is applied to determine a set of assumptions that would allow it to be proven. These assumptions can then be used to make suggestions for modifying the theory. One potential repair is to learn a new rule for the assumed proposition so that it could be inferred from other known facts about the example. Another potential repair is to remove the assumed proposition from the list of antecedents of the rule in which it appears in the abductive explanation of the example – parsimonious covering theory (PCT). Abductive reasoning is useful in inductively revising existing knowledge bases to improve their accuracy. Inductive learning can be used to acquire accurate abductive theories.

One key concept in AI is the classifier. Generally, AI systems can be divided into two types: classifiers (“if shiny and yellow then gold”) and controllers (“if shiny and yellow then pick up”). Controllers, however, also classify conditions before inferring actions. Classifiers are functions that use pattern matching to determine the closest match. They can be tuned according to examples, known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is made, it is classified based on previous experience.

Classifier performance depends greatly on the characteristics of the data to be classified. The most widely used classifiers use kernel methods to be trained (i.e. to learn). There is no single classifier that works best on all given problems – “no free lunch”. Determining an optimal classifier for a given problem is still more an art than a science.
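A minimal sketch of a classifier in the sense described above: given observations with class labels, a new observation is assigned the label of its closest previous example (nearest neighbour). The data here is an invented toy set.

```python
import numpy as np

# Minimal nearest-neighbour classifier: classify a new observation
# based on previously seen labelled observations.
observations = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.5, 4.8]])  # toy features
labels = np.array(["gold", "gold", "not gold", "not gold"])                # toy classes

def classify(x):
    distances = np.linalg.norm(observations - x, axis=1)
    return labels[np.argmin(distances)]   # label of the closest known example

print(classify(np.array([1.1, 0.9])))   # -> "gold"
print(classify(np.array([5.2, 5.1])))   # -> "not gold"
```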

The following formula sums up the process of AI learning.

LEARNING = REPRESENTATION + EVALUATION + OPTIMISATION

Representation. A classifier must be represented in some formal language that the computer can handle. Conversely, choosing a representation for a learner is tantamount to choosing the set of classifiers that it can possibly learn. This set is called the hypothesis space of the learner. If a classifier is not in the hypothesis space, it cannot be learned. A related question is how to represent the input, i.e., what features to use.

Evaluation. An evaluation function is needed to distinguish good classifiers from bad ones. The evaluation function used internally by the algorithm may differ from the external one that we want the classifier to optimise, for ease of optimisation (see below) and due to the issues discussed in the next section.

Optimisation. We need a method to search among the classifiers in the language for the highest-scoring one. The choice of optimisation technique is key to the efficiency of the learner, and also helps determine the classifier produced if the evaluation function has more than one optimum. It is common for new learners to start out using off-the-shelf optimisers.
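A toy sketch of the three components working together: the representation is a one-dimensional threshold rule, the evaluation function is accuracy on labelled data, and the optimisation is a simple search over candidate thresholds. All data and names are illustrative.

```python
import numpy as np

# LEARNING = REPRESENTATION + EVALUATION + OPTIMISATION, in miniature.
x = np.array([0.1, 0.35, 0.4, 0.75, 0.8, 0.9])   # toy feature values
y = np.array([0, 0, 0, 1, 1, 1])                  # toy class labels

# Representation: the hypothesis space of rules "predict 1 if x > threshold".
def predict(threshold, x):
    return (x > threshold).astype(int)

# Evaluation: accuracy on the labelled data.
def accuracy(threshold):
    return np.mean(predict(threshold, x) == y)

# Optimisation: search the hypothesis space for the highest-scoring classifier.
candidates = np.linspace(0.0, 1.0, 101)
best = max(candidates, key=accuracy)
print(best, accuracy(best))   # a threshold between 0.4 and 0.75 scores 1.0
```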

The key criterion for choosing a representation is which kinds of knowledge are easily expressed in it. For example, if we have knowledge about probabilistic dependencies, graphical models are a good fit. And if we have knowledge about what kinds of preconditions are required by each class, “IF . . . THEN . . .” rules may be the best option. The most useful learners in this regard are those that don’t just have assumptions hard-wired into them, but allow us to state them explicitly, vary them widely, and incorporate them dynamically into the learning.

What if the knowledge and data we have are not sufficient to completely determine the correct classifier? Then we run the risk of just inventing a classifier (or parts of it) that is not grounded in reality, and is simply encoding random quirks in the data. This problem is called overfitting, and is the bugbear of ML. When a learner outputs a classifier that is 100% accurate on the training data but only 50% accurate on real data, when in fact it could have output one that is 75% accurate on both, it has overfit.

One way to understand overfitting is by decomposing generalisation error into bias and variance. Bias is a learner’s tendency to consistently learn the same wrong thing. Variance is the tendency to learn random things irrespective of the real signal. Cross-validation can help to combat overfitting, but it is no panacea, since if we use it to make too many parameter choices it can itself start to overfit. Besides cross-validation, there are many methods to combat overfitting, the most popular being the addition of a regularisation term to the evaluation function. Another option is to perform a statistical significance test like chi-square before adding new structure, to decide whether the distribution of the class really is different with and without this structure.
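A sketch of the regularisation idea mentioned above: the evaluation function trades off fit against complexity via an L2 penalty on the weights, which discourages the learner from encoding random quirks of the training data. The data and the penalty strength are illustrative.

```python
import numpy as np

# Regularised evaluation: training error plus an L2 penalty on the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(scale=0.1, size=50)   # toy regression data

def regularised_loss(w, lam=0.1):
    fit = np.mean((X @ w - y) ** 2)   # how well we fit the training data
    penalty = lam * np.sum(w ** 2)    # complexity term that combats overfitting
    return fit + penalty

# Ridge regression: the closed-form minimiser of the regularised loss above.
lam = 0.1
w_hat = np.linalg.solve(X.T @ X + lam * len(y) * np.eye(5), X.T @ y)
print(w_hat)                    # spurious weights are shrunk towards zero
print(regularised_loss(w_hat))
```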

 


Limits of deep learning and way ahead

Artificial intelligence has reached peak hype. News outlets report that companies have replaced workers with IBM Watson and algorithms are beating doctors at diagnoses. New AI startups pop up every day – especially in China – and claim to solve all your personal and business problems with machine learning.

Ordinary objects like juicers and wifi routers suddenly advertise themselves as “powered by AI”. Not only can smart standing desks remember your height settings, they can also order you lunch.

Much of the AI hubbub is generated by reporters with little or only superficial knowledge of the subject matter, and by startups hoping to be acqui-hired for engineering talent despite not solving any real business problems. No wonder there are so many misconceptions about what AI can and cannot do.

Deep learning will shape the future ahead

Neural networks were invented in the 60s, but recent boosts in big data and computational power made them actually useful. The results are undeniably incredible. Computers can now recognize objects in images and video and transcribe speech to text better than humans can. Google replaced Google Translate’s architecture with neural networks and now machine translation is also closing in on human performance.

The practical applications are mind-blowing. Computers can predict crop yield better than the USDA and indeed diagnose cancer more accurately than expert physicians.

DARPA, the creator of the Internet and many other modern technologies, sees three waves of AI:

  1. Handcrafted knowledge, or expert systems like IBM’s DeepBlue or IBM Watson;
  2. Statistical learning, which includes machine learning and deep learning;
  3. Contextual adaptation, which involves constructing reliable, explanatory models for real-world phenomena using sparse data, like humans do.

As part of the current second wave of AI, deep learning algorithms work well because of what the report calls the “manifold hypothesis.” This refers to how different types of high-dimensional natural data tend to clump and be shaped differently when visualised in lower dimensions.

[Figure: DARPA illustration of the manifold hypothesis – high-dimensional data forming clumps]

By mathematically manipulating and separating data clumps, deep neural networks can distinguish different data types. While neural networks can achieve nuanced classification and prediction capabilities, they are essentially what has been called “spreadsheets on steroids.”

[Figure: DARPA illustration of separating data manifolds]

Deep learning algorithms have deep learning problems

At the recent AI By The Bay conference, Francois Chollet, inventor of the widely used deep learning library Keras, argued that deep learning is simply more powerful pattern recognition than previous statistical and machine learning methods, and that the most important problems for AI today are abstraction and reasoning. Current supervised perception and reinforcement learning algorithms require lots of training, are terrible at planning, and are only doing straightforward pattern recognition.

By contrast, humans “learn from very few examples, can do very long-term planning, and are capable of forming abstract models of a situation and manipulate these models to achieve extreme generalisation.”

Even simple human behaviours are laborious to teach to a deep learning algorithm. Let’s examine the task of not being hit by a car as you walk down the road.

Humans only need to be told once to avoid cars. We’re equipped with the ability to generalise from just a few examples and are capable of imagining (i.e. modelling) the dire consequences of being run over. Without losing life or limb, most of us quickly learn to avoid being run over by motor vehicles.

Let’s now see how this works out if we train a computer. If you go the supervised learning route, you need big data sets of car situations with clearly labeled actions to take, such as “stop” or “move”. Then you’d need to train a neural network to learn the mapping between the situation and the appropriate action. If you go the reinforcement learning route, where you give an algorithm a goal and let it independently determine the ideal actions to take, the computer will “die” many times before learning to avoid cars in different situations.

While neural networks achieve statistically impressive results across large sample sizes, they are “individually unreliable” and often make mistakes humans would never make, such as classifying a toothbrush as a baseball bat.

[Figure: DARPA example of an image misclassification]

Your results are only as good as your data

Neural networks fed inaccurate or incomplete data will simply produce the wrong results. The outcomes can be both embarrassing and damaging. In two major PR debacles, Google Images incorrectly classified African Americans as gorillas, while Microsoft’s Tay learned to spew racist, misogynistic hate speech after only hours of training on Twitter.

Undesirable biases may even be implicit in our input data. Google’s massive Word2Vec embeddings are built off of 3 million words from Google News.  The data set makes associations such as “father is to doctor as mother is to nurse” which reflect gender bias in our language.

Researchers have, for example, turned to human ratings on Mechanical Turk to perform “hard de-biasing” and undo such associations. Such tactics are essential since word embeddings not only reflect stereotypes but can also amplify them. If the term “doctor” is more associated with men than women, then an algorithm might prioritise male job applicants over female job applicants for open physician positions.
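To see how such associations surface, here is a toy sketch of the embedding arithmetic behind “father is to doctor as mother is to X”: the answer is the nearest neighbour of doctor - father + mother. The 2-D vectors are invented purely for illustration (real Word2Vec vectors are 300-dimensional and learned from text).

```python
import numpy as np

# Invented toy word vectors (real Word2Vec embeddings are 300-D, learned from news text).
vectors = {
    "father": np.array([1.0, 0.0]),
    "mother": np.array([-1.0, 0.0]),
    "doctor": np.array([1.0, 1.0]),
    "nurse":  np.array([-1.0, 1.0]),
}

def nearest(query, exclude):
    best, best_sim = None, -np.inf
    for word, v in vectors.items():
        if word in exclude:
            continue
        sim = v @ query / (np.linalg.norm(v) * np.linalg.norm(query))  # cosine similarity
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "father is to doctor as mother is to ...?"
query = vectors["doctor"] - vectors["father"] + vectors["mother"]
print(nearest(query, exclude={"father", "mother", "doctor"}))   # -> "nurse"
```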

Neural networks can be tricked or exploited

Ian Goodfellow, inventor of GANs, showed that neural networks can be deliberately tricked with adversarial examples. By mathematically manipulating an image in a way that is undetectable to the human eye, sophisticated attackers can trick neural networks into grossly misclassifying objects.

[Figure: Ian Goodfellow’s illustration of an adversarial attack]

The dangers such adversarial attacks pose to AI systems are alarming, especially since adversarial images and original images seem identical to us. Self-driving cars could be hijacked with seemingly innocuous signage and secure systems could be compromised by data that initially appears normal.

Potential solutions

How can we overcome the limitations of deep learning and proceed towards general artificial intelligence? Chollet’s initial plan is using “super-human pattern recognition like deep learning to augment explicit search and formal systems”, starting with the field of mathematical proofs. Automated Theorem Provers (ATPs) typically use brute force search and quickly hit combinatorial explosions in practical use. In the DeepMath project, Chollet and his colleagues used deep learning to assist the proof search process, simulating a mathematician’s intuitions about what lemmas might be relevant.

Another approach is to develop more explainable models. In handwriting recognition, neural nets currently need to be trained on many thousand examples to perform decent classification. Instead of looking at just pixels, generative models can be taught the strokes behind any given character and use this physical construction information to disambiguate between similar numbers, such as a 9 or a 4.

Yann LeCun, AI boss of Facebook, proposes “energy-based models” as a method of overcoming limits in deep learning. Typically, a neural network is trained to produce a single output, such as an image label or sentence translation. LeCun’s energy-based models instead give an entire set of possible outputs, such as the many ways a sentence could be translated, along with scores for each configuration.

Geoffrey Hinton, called the “father of deep learning”, wants to replace the neurons in neural networks with “capsules”, which he believes more accurately reflect the cortical structure of the human mind. Evolution must have found an efficient way to adapt features that are early in a sensory pathway so that they are more helpful to features that are several stages later in the pathway. He thinks capsule-based neural network architectures will be more resistant to adversarial attacks.

Perhaps all of these approaches to overcoming the limits of deep learning have value. Perhaps none of them do. Only time and continued investment in AI will tell. But one thing seems quite certain: we are unlikely to achieve general intelligence simply by scaling up today’s deep learning techniques.

Survival of blockchain and Ethereum vs. alternatives

As outlined in my previous post, blockchain faces a number of fundamental technological, cultural and business issues before it becomes mainstream. However, the potential of blockchain, especially if coupled with AI, cannot be ignored. The potent combination of blockchain and AI could revolutionise healthcare, science, government, autonomous driving, financial services, and a number of other key industries.

Discussions continue about blockchain’s ability to lift people out of poverty through mobile transactions, improve accounting for tourism in second-world countries, and make governance transparent with electronic voting. But, just like the complementary – and equally hyped – technologies of AI, IoT and big data, blockchain technology is emerging and as yet unproven at scale. Additional socio-political and economic roadblocks remain to blockchain’s widespread adoption and application:

1. Disparity of computer power and electricity distribution

Bitcoin transactions on blockchain require “half the energy consumption of Ireland”. This surge of electricity use is simply impossible in developing countries where the resource is scarce and expensive. Even if richer countries assist and invest in poorer ones, the UN is concerned that elite, external ownership of critical infrastructure may lead to a digital form of neo-colonialism.

2. No mainstream trust for blockchain

Bitcoin inspired the explosive attention on blockchain, but there isn’t currently much trust in the technology outside of digital currencies, as it is relatively new, unproven, and has technical problems and limitations. With the technology still in its infancy, blockchain companies are slow to deliver on promises. This turtle pace does not satisfy investors seeking quick ROI. Perhaps the largest challenge to blockchain adoption is the massive transformation in architectural, regulatory and business management practices required to deploy the technology at scale. Even if such large-scale changes are pulled off, society may experience a culture shock from switching to decentralised, automated systems after a history of only centralised ones.

3. Misleading and misguided ‘investments’

Like the Internet, blockchain technology is most powerful when everyone is on the same network. The Internet grew in fits and starts, but was ultimately driven by the killer app of email. While Bitcoin and digital currencies are the “killer app” of blockchain, we’ve already seen aggressive investments in derivative cryptocurrencies peter out.

Many technologies also call themselves “blockchain” to capitalise on hype and capture investment, but are not actual blockchain implementations. But, even legitimate blockchain technologies suffer from the challenge of timing, often launching in a premature ecosystem unable to support adoption and growth.

4. Cybersecurity risks and flaws

The operational risks of cybersecurity threats to blockchain technology make early adopters hesitate to engage. Additionally, bugs in the technology are challenging to detect, yet can cause outsized damage. Getting the code right is critical, but this requires time and talent.

While the better-known PoW-based blockchain systems, Bitcoin and Ethereum, get the limelight and PR, there are a number of alternative blockchain protocols and approaches that are scalable and solve many of the fundamental challenges the incumbents face.

PoW and Ethereum alternatives

Disclaimer: I neither condone, engage in nor promote any of the below alternatives, but simply provide information as found on the websites, articles and social media of the relevant entities; I am therefore not responsible for whether the information thus provided is accurate and realistic.

1. BitShares, SteemIt (based on Steem) and EOS, whose white papers are all based on Delegated Proof of Stake (DPOS). DPOS enables BitShares to process 180k transactions per second, more than 5x NASDAQ’s transactions/s. Steem and BitShares process more transactions per day than the top 20 blockchains combined.

In DPOS, a new block is created every 2 seconds (Bitcoin’s PoW generates a new block roughly every 10 minutes) by witnesses: stakeholders can elect any number of witnesses to generate blocks – currently 21 in Steem and 25 in BitShares. DPOS uses pipelining to increase scalability: the witnesses generate their blocks in a specified order that holds for a few rounds (hence the pipelining), after which the order is changed. DPOS confirms transactions with 99.9% certainty in an average of just 1.5 seconds while degrading in a graceful, detectable manner that is trivial to recover from. It is easy to increase the scalability of this scheme by introducing additional witnesses, either by increasing the pipeline length or by using sharding to generate, in a deterministic/verifiable way, several blocks during the same epoch.
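A minimal sketch of the witness rotation just described: elected witnesses take turns producing blocks in a fixed order that holds for a few rounds, after which the order is reshuffled. The block interval, witness count and shuffle rule are illustrative rather than the exact Steem/BitShares implementation.

```python
import random

# Sketch of DPOS block production: elected witnesses take turns in a fixed
# order for several rounds, then the schedule is reshuffled.
witnesses = [f"witness_{i}" for i in range(21)]   # e.g. 21 elected witnesses (Steem)
BLOCK_INTERVAL = 2                                # seconds per block
ROUNDS_PER_SHUFFLE = 3                            # illustrative shuffle cadence

def produce_blocks(n_rounds, seed=42):
    rng = random.Random(seed)
    schedule, height = list(witnesses), 0
    for round_no in range(n_rounds):
        if round_no and round_no % ROUNDS_PER_SHUFFLE == 0:
            rng.shuffle(schedule)                 # new deterministic order every few rounds
        for producer in schedule:                 # pipelined, known-in-advance order
            height += 1
            print(f"block {height} at t={height * BLOCK_INTERVAL}s produced by {producer}")

produce_blocks(n_rounds=2)
```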

2. IOTA (originally designed to be a financial system for IoT) is a new blockless distributed ledger that is scalable, lightweight and fee-less. It is based on a DAG (directed acyclic graph), and its performance increases as the network gets bigger.

3. Ardor solves the bloat problem common to all blockchains by relying on an innovative parent/child chain architecture and pruning of child-chain transactions. It shares some similarities with plasma.io, is based on NXT blockchain technology and is already running on testnet.

4. LTCP uses State Channels by stripping 90% of the transaction data from the blockchain. LTCP combined with RSK’s Lumino network or Ethereum’s Raiden network can serve 1 billion users in both retail and online payments.

5. Stellar runs on the Stellar Consensus Protocol (SCP) and is scalable and robust, has a distributed exchange and is easy to use. SCP implements “Federated Byzantine Agreement,” a new approach to achieving consensus in a real-world network that includes faulty “Byzantine” nodes with technical errors or malicious intent. To tolerate Byzantine failures, SCP is designed not to require unanimous consent from the complete set of nodes for the system to reach agreement, and to tolerate nodes that lie or send incorrect messages. In SCP, individual nodes decide which other participants they trust for information, and partially validate transactions based on individual “quorum slices.” The system-wide quorums for valid transactions result from the quorum decisions made by individual nodes.

6. A thin client is a program that connects to the Bitcoin network but does not fully validate transactions or blocks, i.e. it is a client to the full nodes on the network. Most thin clients use the Simplified Payment Verification (SPV) method to verify that confirmed transactions are part of a block. To do this, they connect to a full node on the blockchain network and send it a filter (a Bloom filter) that will match any transactions affecting the client’s wallet. When a new block is created, the client requests a special lightweight version of that block: a Merkle block, which includes a block header, a relatively small number of hashes, a list of one-bit flags, and a transaction count. Using this information – often less than 1 KB of data – the client can build a partial Merkle tree up to the block header. If the hash of the root node of the partial Merkle tree equals the Merkle root in the block header, the SPV client has cryptographic proof that the transaction was included in that block. If that block then gets 6 confirmations at the current network difficulty, the client has extremely strong proof that the transaction was valid and is accepted by the entire network.

The only major downside of the SPV method is that full nodes can simply not tell the thin clients about transactions, making it look like the client hasn’t received bitcoins or that a transaction the client broadcast earlier hasn’t confirmed.
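Below is a simplified sketch of the Merkle-proof check an SPV client performs: hash the transaction up the partial Merkle tree using the sibling hashes supplied in the Merkle block and compare the result with the Merkle root in the block header. Bitcoin’s real wire format has extra details (little-endian byte order, flag-bit encoding of the partial tree) that are glossed over here.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes with double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(tx_hash, siblings, sides, merkle_root) -> bool:
    """Climb from the transaction to the root using the sibling hashes;
    sides[i] says whether the i-th sibling sits to the left or the right.
    (Endianness and flag-bit details of real Merkle blocks are ignored.)"""
    node = tx_hash
    for sibling, side in zip(siblings, sides):
        pair = sibling + node if side == "left" else node + sibling
        node = sha256d(pair)
    return node == merkle_root

# Tiny self-contained example with a two-transaction block:
tx_a, tx_b = sha256d(b"tx_a"), sha256d(b"tx_b")
root = sha256d(tx_a + tx_b)
print(verify_merkle_proof(tx_a, [tx_b], ["right"], root))   # True
```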

7. Mimir proposes a network of Proof of Authority micro-channels for use in generating a trustless, auditable and secure bridge between Ethereum and the Internet. This system aims to establish Proof of Authority for individual validators via a Proof-of-Stake contract registry located on Ethereum itself. This Proof-of-Stake contract takes stake in the form of Mimir B2i Tokens. These tokens serve as collateral that may be repossessed in the event of malicious actions. In exchange for serving requests against the Ethereum blockchain, validators get paid in Ether.

8. Ripple’s XRP ledger already handles 1,500 transactions/second on-chain, which keeps on being improved (was 1,000 transactions/sec at the beginning of 2017).

9. QTUM, a hybrid blockchain platform whose technology combines a fork of bitcoin core, an Account Abstraction Layer allowing for multiple Virtual Machines including the Ethereum Virtual Machine (EVM) and Proof-of-Stake consensus aimed at tackling industry use cases.

10. Blocko, which has enterprise- and consumer-grade layers and has already successfully piloted/launched products (dApps) with/for Korea Exchange, LotteCard and Hyundai.

11. Algorand uses “cryptographic sortition” to select players to create and verify blocks. It scales on demand and is more secure and faster than traditional PoW and PoS systems. While most PoS systems rely on some type of randomness, Algorand is different in that you self-select by running the lottery on your own computer (not on a cloud or public chain). The lottery is based on information in the previous block, while the selection is automatic (involving no message exchange) and completely random. Thanks David Deputy for pointing out this platform!

12. NEO, also called “Ethereum of China,”  is a non-profit community-based blockchain project that utilizes blockchain technology and digital identity to digitize assets, to automate the management of digital assets using smart contracts, and to realize a “smart economy” with a distributed network.

Bitcoin and blockchain demystified: basics and challenges

Bitcoin, blockchain, Ethereum, gas, …

A new breed of snake oil purveyors are peddling “blockchain” as the magic sauce that will power all the world’s financial transactions and unlock the great decentralised database in the sky. But what exactly are bitcoin and blockchain?

Bitcoin is a system for electronic transactions that doesn’t rely on a centralised or trusted third party (a bank or financial institution). Its creation was motivated by the fact that a digital currency made of digital signatures, while providing strong ownership control, was a viable but incomplete solution, unable to prevent double-spending. Bitcoin’s proposed solution was a peer-to-peer network using proof-of-work (to deter network attacks) to record a public history of transactions that is computationally impractical for an attacker to change as long as honest nodes control a majority of CPU power. The network is unstructured, and its nodes work with little coordination and don’t need to be identified. Truth (i.e. consensus on the longest chain) is achieved by CPU voting: the network’s CPUs express their acceptance of valid blocks (of transactions) by working on extending them, and reject invalid blocks by refusing to work on them.
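A toy sketch of the proof-of-work idea described above: keep trying nonces until the block hash falls below a difficulty target, which is what makes rewriting history computationally impractical for an attacker without a majority of the hashing power. The difficulty and block format here are purely illustrative.

```python
import hashlib

# Toy proof-of-work: find a nonce whose hash starts with a given number of zero hex digits.
DIFFICULTY = 5   # illustrative; Bitcoin's real difficulty target is vastly harder

def mine(block_data: str, difficulty: int = DIFFICULTY):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest          # valid proof of work found
        nonce += 1

nonce, digest = mine("prev_block_hash|tx1,tx2,tx3")
print(nonce, digest)
# Verification is cheap: anyone can recompute one hash and check the zero prefix.
```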

Satoshi Nakamoto’s seminal paper “Bitcoin: A Peer-To-Peer Electronic Cash System” has references to a “proof-of-work chain”,“coin as a chain,” “chain of ownership”, but no “blockchain” or “block chain” ever make an appearance in it.

Blockchain (which powers Bitcoin, Ethereum and other such systems) is a way for one Internet user to transfer a unique piece of digital asset (Bitcoins, Ether or other crypto assets) to another Internet user, such that the transfer is guaranteed to be safe and secure, everyone knows the transfer has taken place, and nobody can challenge the legitimacy of the transfer. Blockchains are essentially distributed ledgers and have three main characteristics: a) decentralisation, b) immutability and c) availability of some sort of digital assets/token in the network.

While decentralised consensus mechanisms offer critical benefits, such as fault tolerance, security by design and political neutrality, they come at the cost of scalability. The number of transactions the blockchain can process can never exceed that of a single node participating in the network. In fact, blockchain actually gets weaker (for transacting, at least) as more nodes are added to its network, because inter-node latency grows logarithmically with every additional node.

All public blockchain consensus protocols trade transaction throughput off against decentralisation. As the size of the blockchain grows, the storage, bandwidth, and computing power required to participate fully in the network increase. At some point it becomes unwieldy enough that only a few nodes can feasibly process a block, which creates a risk of centralisation.

Currently, the main challenges for blockchain (and with it Bitcoin, Ethereum and others) are:

  1. Since not every node can validate every transaction, nodes need statistical and economic means to ensure that the blocks they are not personally validating are secure.
  2. Scalability is one of the main challenges. Bitcoin, despite having a theoretical limit of 4,000 transactions per second (TPS), currently has a hard cap of about 7 transactions per second for small transactions and 3 per second for more complex ones. An Ethereum node’s maximum theoretical processing capacity is over 1,000 TPS, yet actual throughput is only about 5–15 TPS, largely because of Ethereum’s “gas limit”, currently around 6.7 million gas on average per block (a back-of-the-envelope throughput estimate is sketched after this list). Gas is the computation cost within Ethereum, which users pay in order to issue transactions or perform other actions; a higher gas limit means that more actions can be performed per block. In order to scale, blockchain protocols must find a mechanism to limit the number of participating nodes needed to validate each transaction, without losing the network’s trust that each transaction is valid.
  3. There must be a way to guarantee data availability: even if a block looks valid from the perspective of a node not directly validating it, making the data for that block unavailable means no other validator in the network can validate transactions or produce new blocks, and we end up stuck in the current state (a node may be offline because of a malicious attack, a power loss, etc.).
  4. Transactions need to be processed by different nodes in parallel in order to achieve scalability (one solution is similar to database sharding, i.e. distributing data and processing it in parallel). However, blockchain state transitions have several non-parallelizable (serial) parts, so we face restrictions on how we can transition state on the blockchain while balancing parallelizability and utility.
  5. End-users and organisations (such as banks) have a hard time using blockchain or don’t want to (even though many have used, or are using, distributed ledgers). Doing a simple Bitcoin transaction typically requires a prior KYC check (with quite a few exceptions) just to sign up on one of the many crypto trading or exchange platforms. “The Rare Pepe Game is built on a blockchain with virtual goods and characters and more,” explains Fred Wilson of USV. “And it shows how clunky this stuff is for the average person to use.”
  6. There is a lot of hype around blockchain, which sets wrong expectations, misleads investment and causes many mistakes. Bloomberg reports that Nasdaq is seeking to show progress using the much-hyped blockchain. The Washington Post lists Bitcoin and the blockchain as one of six inventions of a magnitude we haven’t seen since the printing press. Bank of America is allegedly trying to load up on “blockchain” patents. Also, because of volatility and uncertain legal status (can it be considered legal tender like normal fiat money, or is it a security?), holding crypto assets remains unstable and risky.
  7. Contrary to common belief, the savings from disintermediating financial institutions – the reasoning being that multiple parties can then conduct transactions seamlessly without paying a commission – may be dubious: according to one study, moving cash equity markets to a blockchain infrastructure would drive a significant increase in overall transaction costs. Trading on a blockchain system would also be slower (at least in the foreseeable future) than traders would tolerate, and mistakes might be irreversible, potentially bringing huge losses.
  8. To drive the massive adoption that would induce further technological advancement, a killer app on blockchain or Ethereum is a must. Despite significant resources and effort invested globally, so far there doesn’t seem to be one, though there arguably is potential in a few areas such as digital gold, payments and tokenization.
  9. Blockchain’s immutability might pose a problem for specific types of data. The EU “right to be forgotten” requires the complete removal of information, which might be impossible on a blockchain. There are other privacy-related concerns: people may want information removed or forgotten, such as a previous insolvency, negative rankings, and other personal details that need to change.
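To make the gas-limit point in item 2 concrete, here is a rough back-of-the-envelope throughput estimate. The figures are assumptions for the period discussed (a ~6.7M block gas limit, 21,000 gas for a simple transfer, ~15-second blocks), not exact protocol constants:

```python
# Back-of-the-envelope Ethereum throughput from the block gas limit.
# All constants are approximate, period-specific assumptions.
BLOCK_GAS_LIMIT = 6_700_000    # average block gas limit cited above
SIMPLE_TX_GAS = 21_000         # gas consumed by a plain ETH transfer
BLOCK_TIME_SECONDS = 15        # typical block interval at the time

txs_per_block = BLOCK_GAS_LIMIT // SIMPLE_TX_GAS
tps = txs_per_block / BLOCK_TIME_SECONDS
print(f"~{txs_per_block} simple transfers per block, ~{tps:.0f} TPS")
# -> ~319 transfers per block, ~21 TPS; real contract calls use far more gas,
#    which is why observed throughput sits in the 5-15 TPS range.
```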

To conclude, I think Ethereum is furthest along among PoW-based public blockchains. Ethereum is still orders of magnitude off (250x off being able to run a 10-million-user app and 25,000x off being able to run Facebook on chain) from being able to support applications with millions of users. If current efforts are well executed, Ethereum could be ready for a 1–10 million-user app by the end of 2018.

However, there are less-known alternative models that are much more scalable. Once scalability issues are solved, everything will become tokenized and connected by blockchain.

Blockchain + AI = ?

What happens when two major technological trends see a synergy or overlap in usage or co-development?

We have blockchain’s promise of near-frictionless value exchange and AI’s ability to conduct analysis of massive amounts of data. The joining of the two could mark the beginning of an entirely new paradigm. We can maximize security while remaining immutable by employing AI agents that govern the chain. With more companies and institutions adopting blockchain-based solutions, and more complex, potentially critical data stored in distributed ledgers, there’s a growing need for sophisticated analysis methods, which AI technology can provide.

The combination of AI and blockchain is fueling the onset of the “Fourth Industrial Revolution“ by reinventing economics and information exchange.

1. Precision medicine

Google DeepMind is developing an “auditing system for healthcare data”. Blockchain will enable the system to remain secure and shareable, while AI will allow medical staff to obtain analytics on medical predictions drawn from patient profiles.

2. Wealth and investment management

State Street is issuing blockchain-based indices: data is stored and secured using blockchain and analyzed using AI. It reports that 64% of wealth and asset managers polled expected their firms to adopt blockchain in the next five years, and 49% of firms said they expect to employ AI. As of January 2017, State Street had 10 blockchain POCs in the works.

3. Smart urbanity

On the energy-supply side, distributed blockchain technology enables transparent and cost-effective transactions between producers and consumers, while machine learning algorithms can home in on transactions to estimate pricing. Green-friendly AI and blockchain help reduce energy waste and optimize energy trade. For example, an AI system governing a building can oversee energy use by taking into account factors like the presence and number of residents, seasons, and traffic information.

4. Legal diamonds

IBM Watson is developing Everledger using blockchain technology to tackle fraud in the diamond industry, and deploying cognitive analytics to heavily “cross-check” regulations, records, supply-chain, and IoT data in the blockchain environment.

5. More efficient science

The “file-drawer problem” in academia arises when researchers don’t publish “non-result” experiments. Duplicate experiments and a lack of knowledge follow, hampering scientific discourse. To address this, experimental data can be stored in a publicly accessible blockchain. Data analytics could also help identify things like how many times the same experiment has been run or what the probable outcome of a certain experiment is.

There are forecasts that AI will play a big role in science once “smart contracts” transacted by blockchain require smarter “nodes” that function in a semi-autonomous way. Smart contracts (essentially, pieces of software) simulate, enforce and manage contractual agreements and can have wide-ranging applications when academics embrace the blockchain for knowledge transfer and development.

6. IP rights management

Digitalization has introduced complicated digital rights into IP management, and once AI learns the rules of the game, it can identify actors who break IP laws. As for IP contract management, in the music (and other content) industries blockchain enables immediate payment to artists and authors. One artist recently suggested the blockchain could help musicians simplify creative collaboration and make money. Ujo Music is making use of the Ethereum blockchain platform for song distribution.

7. Computational finance

Smart contracts could take center stage where transparent information is crucial for trust in financial services. Financial transactions may no longer rely on a human “clearing agent” as they become automated, performing better and faster. But since confidence in transactions remains dependent on people, AI can help monitor human emotions and predict the optimal trading environment. Thus, “algotrading” can be powered by algorithms that trade based on investment patterns correlated with emotions.

8. Data and IoT management

Organizations are increasingly looking to adopt blockchain technologies for alternative data storage. And with heaps of data distributed across blockchain ledgers, the need for AI-driven data analytics is growing. IBM Watson merged blockchain with AI via the Watson IoT group: an artificially intelligent blockchain lets multiple parties collectively agree on the state of a device and decide what to do based on language coded into a smart contract. Using blockchain tech, artificially intelligent software solutions are deployed autonomously. Risk management and self-diagnosis are other use cases being explored.

9. Blockchain-As-A-Service software

Microsoft is integrating “BaaS modules” (based on the public Ethereum) into Azure so that users can create test environments. Blockchains become cheaper to create and test, and in Azure they come with reusable templates and artifacts.

10. Governance 3.0

Blockchain and AI could contribute to the development of direct democracy. They can transfer large volumes of data globally, tracing e-voting procedures and displaying them publicly so that citizens can engage in real time. Democracy Earth Foundation aspires to “hack democracy” by advocating open-source software, peer-to-peer networks, and smart contracts. The organization also aims to fight fake identities and reclaim individual accountability in the political sphere. IPDB is a planetary-scale blockchain database built on BigchainDB. It’s a ready-to-use public network with a focus on strong governance.

Is self-play the future of (most) AI?

Go is a game whose number of legal board positions – about 10^170, vastly more than chess – is greater than the number of atoms in the universe.

AlphaGo, the predecessor to AlphaGo Zero, crushed 18-time world champion Lee Sedol and the reigning world number one player, Ke Jie. After beating Jie earlier this year, DeepMind announced AlphaGo was retiring from future competitions.

Now an even stronger competitor, AlphaGo Zero, could beat the version of AlphaGo that faced Lee Sedol after training for just 36 hours, and after 72 hours it beat its predecessor by a score of 100–0. Interestingly, AlphaGo Zero didn’t learn from observing humans playing against each other – unlike AlphaGo – but instead its neural network relies on an old technique in reinforcement learning: self-play. Self-play means agents can learn behaviours that are not hand-coded on any reinforcement learning task, but the sophistication of the learned behaviour is limited by the sophistication of the environment. In order for an agent to learn intelligent behaviour in a particular environment, the environment has to be challenging, but not too challenging.

Essentially, self-play means that AlphaGo Zero plays against itself. During training, it sits on each side of the table: two instances of the same software face off against each other. A match starts with the game’s black and white stones scattered on the board, placed following a random set of moves from their starting positions. The two computer players are given the list of moves that led to the positions of the stones, and then are each told to come up with multiple chains of next moves along with estimates of the probability they will win by following through each chain. The next move from the best possible chain is then played, and the computer players repeat the above steps, coming up with chains of moves ranked by strength. This repeats over and over, with the software feeling its way through the game and internalizing which strategies turn out to be the strongest.
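As a rough illustration of that loop, here is a minimal sketch of self-play data generation. The helpers (`mcts_search`, `initial_state`, `game_winner`, `state.play`) are hypothetical stand-ins, not AlphaGo Zero’s actual code, and the real system’s parallelism and network updates are omitted:

```python
import random

# Hypothetical stand-ins (not AlphaGo Zero's real code): `mcts_search(net, s)`
# returns {move: probability} guided by the network's policy/value estimates;
# `initial_state()`, `game_winner(s)` and `s.play(move)` define the game.
def self_play_game(net, mcts_search, initial_state, game_winner):
    """One game in which the same network plays both sides, recording
    (position, search probabilities, final winner) as training data."""
    state, history = initial_state(), []
    while game_winner(state) is None:
        move_probs = mcts_search(net, state)          # ranked candidate moves
        history.append((state, move_probs))
        moves, probs = zip(*move_probs.items())
        state = state.play(random.choices(moves, weights=probs)[0])
    winner = game_winner(state)
    # Every recorded position is labelled with the eventual outcome,
    # so the network gets a training signal without any human games.
    return [(s, p, winner) for s, p in history]
```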

AlphaGo Zero did start from scratch, with no experts guiding it. And it is much more efficient: it uses only a single computer and four of Google’s custom TPU1 chips to play matches, compared to AlphaGo’s several machines and 48 TPUs. Since Zero didn’t rely on human gameplay and played fewer matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined the value and policy neural networks into one, and was trained on 64 GPUs and 19 CPUs by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.

AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex. This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these. The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.

But to survive in the world, animals need not only to recognise sensory information, but also to act on it. Generations of scientists have studied how animals learn to take a series of actions that maximise their reward. This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves actions by maximising the expectation of future reward. This is how, among other things, AlphaGo Zero discovered for itself, without human intervention, classic Go concepts such as fuseki opening tactics and life and death.
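For reference, here is a minimal sketch of tabular TD(0) value learning, the simplest flavour of temporal-difference learning; the transition `(state, reward, next_state)` is assumed to come from whatever environment the agent is acting in:

```python
from collections import defaultdict

V = defaultdict(float)  # value estimates, default 0 for unseen states

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update: nudge V(state) toward the bootstrapped
    target reward + gamma * V(next_state), i.e. the expected future reward."""
    td_target = reward + gamma * V[next_state]
    td_error = td_target - V[state]
    V[state] += alpha * td_error
    return td_error

# After each observed transition (s, r, s_next) in the environment:
# td0_update(V, s, r, s_next)
```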

So are there problems to which the current algorithms can be fairly immediately applied?

One example may be optimisation in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimising cost. As long as the possibilities can be accurately simulated, self-play-based algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans.

Researchers at OpenAI have already experimented with the same technique to train bots to play Dota 2, and have published a paper on competitive self-play. There are other experiments, such as this one, showing how self-taught AI is better at predicting heart attacks.

AlphaGo Zero’s success bodes well for AI’s mastery of games. But it would be a mistake to believe that we’ve learned something general about thinking and about learning for general intelligence. This approach won’t work in more ill-structured problems like natural-language understanding or robotics, where the state space is more complex and there isn’t a clear objective function.

Unsupervised training is the key to ultimately creating AI that can think for itself, but more research is needed outside of the confines of board games and predefined objective functions before computers can really begin to think outside the box.

DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, quantum chemistry and material design.

Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. Unfortunately, as yet there is no known way to interrogate the network to directly read out what these key ideas are. If a system learns the game of Go purely through supervised learning, the best it could hope for is to be as good as the human it is imitating. Through self-play (and thus unsupervised learning), it could learn something completely novel and create or catalyse emergence.

DeepMind’s self-play approach is not the only way to push the boundaries of AI. Gary Marcus, a neuroscientist at NYU, has co-founded Geometric Intelligence (acquired by Uber), to explore learning techniques that extrapolate from a small number of examples, inspired by how children learn. He claimed to outperform both Google’s and Microsoft’s deep-learning algorithms.

Top 13 challenges AI is facing in 2017

AI and ML feed on data, and companies that center their business around the technology are growing a penchant for collecting user data, with or without the latter’s consent, in order to make their services more targeted and efficient. Implementations of AI/ML already make it possible to impersonate people by imitating their handwriting, voice and conversation style – an unprecedented power that can come in handy in a number of dark scenarios. However, despite large amounts of previously collected data, early AI pilots have had trouble producing the dramatic results that technology enthusiasts predicted. For example, early efforts of companies developing chatbots for Facebook’s Messenger platform saw 70% failure rates in handling user requests.

One of the main challenges of AI goes beyond data: flawed outputs, from false positives to biased results. For example, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.

Another challenge that caused much controversy in the past year is the “filter bubble” phenomenon seen on Facebook and other social media, which tailor content to the biases and preferences of users, effectively shutting them off from other viewpoints and realities.

Additionally, as we give more control and decision-making power to AI algorithms, not only technological but also moral and philosophical considerations become important – for example, when a self-driving car has to choose between the life of a passenger and that of a pedestrian.

To sum up, the following are the challenges that AI still faces, despite creating and processing increasing amounts of data and unprecedented amounts of other resources (people working on algorithms, CPUs, storage, better algorithms, etc.):

  1. Unsupervised Learning: Deep neural networks have afforded huge leaps in performance across a variety of image, sound and text problems. Most noticeably, in 2015 the application of RNNs to text problems (NLP, language translation, etc.) exploded. A major bottleneck is the acquisition of labeled data: humans are known to learn about objects and navigation from relatively little labeled “training” data. How is this done, and how can it be efficiently implemented in machines?
  2. Select an Induction vs. Deduction vs. Abduction Based Approach: Induction is almost always the default choice when it comes to building an AI model for data analysis. However, it – as well as deduction, abduction and transduction – has limitations which need serious consideration.
  3. Model Building: TensorFlow has opened the door for conversations about building scalable ML platforms. There are plenty of companies working on data-science-in-the-cloud (H2O, Dato, MetaMind, …), but the question remains: what is the best way to build ML pipelines? This includes ETL, data storage and optimisation algorithms.
  4. Smart Search: How can deep learning create better vector spaces and algorithms than TF-IDF? What are some better alternative candidates? (A minimal TF-IDF sketch follows this list.)
  5. Optimise Reinforcement Learning: This approach avoids the problem of obtaining labelled data – the system needs to gather data, learn from it and improve. While AlphaGo used RL to win against the Go champion, RL isn’t without its own issues; there are critiques at both a more lightweight, conceptual level and a more technical one.
  6. Build Domain Expertise: How can we build and sustain domain knowledge for industries and problems that involve reasoning over a complex body of knowledge (legal, financial, etc.), and then formulate a process whereby machines can simulate an expert in the field?
  7. Grow Domain Knowledge: How can AI tackle problems that involve extending a complex body of knowledge by suggesting new insights for the domain itself – for example, new drugs to cure diseases?
  8. Complex Task Analyser and Planner: How can AI tackle complex tasks requiring data analysis, planning and execution? Many logistics and scheduling tasks can be done by current (non-AI) algorithms. A good example is the use of AI techniques in IoT for sparse datasets: AI helps here because the datasets are large and complex, and machines can easily detect patterns that human beings cannot.
  9. Better Communication: While the proliferation of smart chatbots and AI-powered communication tools has been a trend for several years, these tools are still far from smart and may at times fail to understand even simple human language.
  10. Better Perception and Understanding: While companies such as Alibaba and Face++ create facial recognition software, visual perception and labelling are still generally problematic. There are a few striking examples, like a Russian face recognition app that is good enough to be considered a potential tool for oppressive regimes seeking to identify and crack down on dissidents, and another algorithm that proved effective at peeking behind masked and blurred images.
  11. Anticipate Second-Order (and Higher) Consequences: AI and deep learning have improved computer vision to the point that autonomous vehicles (cars and trucks) are viable (Otto, Waymo). But what will their impact be on the economy and society? What is worrying is that as AI and related technologies advance, we may understand less about how AI analyses data and reaches decisions. Starting in 2012, Google used LSTMs to power the speech recognition system in Android, and in December 2016 Microsoft reported that its system reached a word error rate of 5.9% – a figure roughly equal to human performance for the first time in history. The goal-posts continue to move rapidly: for example, loom.ai is building an avatar that can capture your personality. Preempting what’s to come, starting in the summer of 2018 the EU is considering requiring that companies be able to give users an explanation for decisions their automated systems reach.
  12. Evolution of Expert Systems: Expert systems have been around for a long time. Much of their vision could be implemented in AI/deep learning algorithms in the near future. The architecture of IBM Watson is an indicative example.
  13. Better Sentiment Analysis: ML-based sentiment analysis is catching up with, but still far from matching, lexicon-based models, and it remains a nascent, uncharted space for most AI applications. There are some small steps in this regard, including OpenAI’s use of an mLSTM methodology to conduct sentiment analysis of text. The main issue is that there are many conceptual and contextual rules (rooted in the particulars of an individual’s culture, society, upbringing, etc.) that govern sentiment, and even more cues (possibly unlimited) that can convey these concepts.
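To make item 4 concrete, here is a minimal TF-IDF computation in plain Python (normalisation and smoothing choices vary between libraries); the idea of “smart search” is to replace such hand-crafted weights with learned vector spaces:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each term by (term frequency in the document) * log(N / document
    frequency), so words common to every document are down-weighted."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["deep", "learning", "search"], ["keyword", "search", "engine"]]
print(tf_idf(docs))  # "search" appears in both docs, so its weight is 0
```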

Thoughts/comments?

Reinforcement Learning vs. Evolutionary Strategy: combine, aggregate, multiply

A bird’s-eye view of the main ML algorithms

In statistics, we have descriptive and inferential statistics. ML deals with the same problems, and claims any problem where the solution isn’t programmed directly but is instead learned by the program. ML generally works by numerically minimising something: a cost function or error.
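As a tiny illustration of “numerically minimising something”, here is gradient descent on a one-parameter squared-error cost (a toy example, not tied to any particular model):

```python
# Minimise cost(w) = (w - 3)^2 by repeatedly stepping down the gradient.
def cost(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)   # derivative of the cost

w, learning_rate = 0.0, 0.1
for _ in range(100):
    w -= learning_rate * grad(w)
print(round(w, 4), round(cost(w), 8))  # w converges towards 3.0, cost towards 0
```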

Supervised learning – You have labeled data: a sample of ground truth with features and labels. You estimate a model that predicts the labels using the features. Alternative terminology: predictor variables and target variables. You predict the values of the target using the predictors.

  • Regression. The target variable is numeric. Example: you want to predict the crop yield based on remote sensing data. Recurrent neural networks result in a “regression” since they usually output a number (a sequence or a vector) instead of a class (e.g. sentence generation, curve plotting). Algorithms: linear regression, polynomial regression, generalised linear models.
  • Classification. The target variable is categorical. Example: you want to detect the crop type that was planted using remote sensing data. Or Silicon Valley’s “Not Hot Dog” application. Algorithms: Naïve Bayes, logistic regression, discriminant analysis, decision trees, random forests, support vector machines, neural networks (NN) of many variations: feed-forward NNs, convolutional NNs, recurrent NNs.
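A minimal NumPy sketch of the supervised setting above – fitting a linear regression from features to a numeric target by least squares (the data here is synthetic, standing in for something like remote-sensing features and crop yield):

```python
import numpy as np

# Toy supervised regression: features X (think remote-sensing bands),
# numeric target y (think crop yield). The data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Least-squares fit: append an intercept column and solve for the weights.
X1 = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(w)  # close to [2.0, -1.0, 0.5, 0.0]: the learned feature-to-target map
```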

Unsupervised learning – You have a sample with unlabeled information. No single variable is the specific target of prediction. You want to learn interesting features of the data:

  • Clustering. Which of these things are similar? Example: group consumers into relevant psychographics. Algorithms – k-means, hierarchical clustering.
  • Anomaly detection. Which of these things are different? Example: credit card fraud detection. Algorithms: k-nearest-neighbor.
  • Dimensionality reduction. How can you summarise the data in a high-dimensional data set using a lower-dimensional dataset which captures as much of the useful information as possible (possibly for further modelling with supervised or unsupervised algorithms)? Example: image compression. Algorithms: principal component analysis (PCA), neural network auto-encoders.
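For the clustering bullet, here is a bare-bones k-means in NumPy – no labels are used; the algorithm simply groups similar points (a sketch, without the initialisation and convergence refinements of production implementations):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate between assigning each point to its nearest
    centroid and recomputing centroids as the mean of their assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids
```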

Reinforcement Learning (Policy Gradients, DQN, A3C, …) – You are presented with a game/environment that responds sequentially or continuously to your inputs, and you learn to maximise an objective through trial and error.
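A minimal sketch of that trial-and-error loop, using tabular Q-learning; the environment interface (`reset`, `step` returning `(next_state, reward, done)`) is a simplified stand-in rather than any specific library’s API:

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Learn action values Q(s, a) by trial and error, choosing mostly the
    best-known action but exploring randomly with probability eps."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in actions)
            # Move Q(s, a) toward the reward plus discounted best future value.
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```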

Evolutionary Strategy – This approach consists of maintaining a distribution over network weight values and having a large number of agents act in parallel using parameters sampled from this distribution. Each agent receives a fitness score for how well it performs; with these scores, the parameter distribution can be moved toward that of the more successful agents and away from that of the unsuccessful ones. By repeating this approach millions of times, with hundreds of agents, the weight distribution moves to a space that provides the agents with a good policy for solving the task at hand.
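Here is a simplified sketch of that update in the spirit of OpenAI-style ES: perturb the parameters with Gaussian noise, score each perturbed copy, and nudge the mean toward the better-scoring ones. The `evaluate` fitness function is a stand-in for running an agent in its environment:

```python
import numpy as np

def evolution_strategy(evaluate, dim, iters=200, pop=100, sigma=0.1, lr=0.02):
    """Keep a Gaussian over weights (mean `theta`); score noisy copies of it
    and shift the mean toward the perturbations that scored best."""
    theta = np.zeros(dim)
    for _ in range(iters):
        noise = np.random.randn(pop, dim)
        scores = np.array([evaluate(theta + sigma * n) for n in noise])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        # Only fitness scores are needed here -- no gradients are exchanged.
        theta += lr / (pop * sigma) * noise.T @ scores
    return theta

# Toy fitness: how close the weights are to a fixed target vector.
target = np.array([0.5, -0.3, 0.8])
print(evolution_strategy(lambda w: -np.sum((w - target) ** 2), dim=3))
```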

All the complex tasks in ML, from self-driving cars to machine translation, are solved by combining these building blocks into complex stacks.

Pros/cons of RL and ES

One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behaviour.

RL is known to be unstable or even to diverge when a nonlinear function approximator such as a NN is used to represent the action-value (also known as Q) function. This instability has several causes: the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and therefore change the data distribution, and the correlations between the action-values and the target values.
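The standard mitigations for these instabilities, popularised by DQN, are an experience-replay buffer (to break the correlations in the observation sequence) and a separate, slowly-updated target network (to decouple the action values from the targets). A framework-agnostic sketch, where `target_net(s)` returning per-action values is an assumed stand-in:

```python
import random
from collections import deque

# Assumed stand-ins: `target_net(s)` (and an online net trained elsewhere)
# return a list of action values for state s; how they are built is
# framework-specific and omitted here.
replay = deque(maxlen=100_000)          # experience replay buffer

def remember(s, a, r, s_next, done):
    """Store transitions; sampling them at random later breaks the
    correlations present in the sequence of observations."""
    replay.append((s, a, r, s_next, done))

def td_targets(batch, target_net, gamma=0.99):
    """Targets use a separate, slowly-updated target network, so small updates
    to the online Q-network do not immediately shift the targets themselves."""
    return [r + (0.0 if done else gamma * max(target_net(s_next)))
            for (_, _, r, s_next, done) in batch]

# Sketch of a training step:
#   batch = random.sample(replay, 32)
#   fit the online net so Q(s, a) moves toward td_targets(batch, target_net)
#   every N steps, copy the online weights into target_net
```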

RL’s other challenge is generalisation. In typical deep RL methods, this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable.

Whereas RL methods such as A3C need to communicate gradients back and forth between workers and a parameter server, ES only requires fitness scores and high-level parameter-distribution information to be communicated. It is this simplicity that allows the technique to scale up in ways current RL methods cannot. In situations with richer feedback signals, however, things don’t go so well for ES.

Contextualising and combining RL and ES

Appealing to nature for inspiration in AI can sometimes be seen as a problematic approach. Nature, after all, is working under constraints that computer scientists simply don’t have. If we look at intelligent behaviour in mammals, we find that it comes from a complex interplay of two ultimately intertwined processes: inter-life learning and intra-life learning. Roughly speaking, these two processes in nature can be compared to two approaches in neural network optimisation. ES, for which no gradient information is used to update the organism, is related to inter-life learning. Likewise, gradient-based methods (RL), for which specific experiences change the agent in specific ways, can be compared to intra-life learning.

The techniques employed in RL are in many ways inspired directly by the psychological literature on operant conditioning that came out of animal psychology. (In fact, Richard Sutton, one of the two founders of RL, received his Bachelor’s degree in psychology.) In operant conditioning, animals learn to associate rewarding or punishing outcomes with specific behaviour patterns. Animal trainers and researchers can manipulate this reward association in order to get animals to demonstrate their intelligence or behave in certain ways.

The central role of prediction in intra-life learning changes the dynamics quite a bit. What was before a somewhat sparse signal (occasional reward) becomes an extremely dense signal. At each moment, mammalian brains are predicting the results of the complex flux of sensory stimuli and actions in which the animal is immersed. The outcome of the animal’s behaviour then provides a dense signal to guide the change in predictions and behaviour going forward. All of these signals are put to use in the brain to improve predictions (and consequently the quality of actions). If we apply this way of thinking to learning in artificial agents, we find that RL isn’t somehow fundamentally flawed; rather, the signal being used isn’t nearly as rich as it could (or should) be. In cases where the signal can’t be made richer (perhaps because it is inherently sparse, or concerns low-level reactivity), it is likely better to learn through a highly parallelizable method such as ES.

Combining many

It is clear that for many reactive policies, or situations with extremely sparse rewards, ES is a strong candidate, especially if you have access to the computational resources that allow for massively parallel training. On the other hand, gradient-based methods using RL or supervision are going to be useful when a rich feedback signal is available and we need to learn quickly with less data.

An extreme example is combining more than just ES and RL: Microsoft’s Maluuba is an illustrative case, having used many algorithms together to beat the game Ms. Pac-Man. When the agent (Ms. Pac-Man) starts to learn, it moves randomly; it knows nothing about the game board. As it discovers new rewards (the little pellets and fruit Ms. Pac-Man eats), it begins placing little algorithms in those spots, which continuously learn how best to avoid ghosts and get more points based on Ms. Pac-Man’s interactions, according to the Maluuba research paper.

As the 163 potential algorithms are mapped, each continually sends the move it thinks would generate the highest reward to the agent, which averages the inputs and moves Ms. Pac-Man (see the toy sketch below). Each time the agent dies, all the algorithms process what generated rewards. These helper algorithms were, however, carefully crafted by humans to understand how to learn.
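A toy sketch of that aggregation step – many small helpers each score the candidate moves and the agent plays the move with the highest average score. The `helper.value` interface is purely illustrative, not Maluuba’s actual architecture:

```python
from collections import defaultdict

def choose_move(helpers, state, moves):
    """Each small helper learner scores every candidate move; the agent
    averages the scores and plays the move with the highest average."""
    totals = defaultdict(float)
    for helper in helpers:
        for move in moves:
            totals[move] += helper.value(state, move)   # helper's estimate
    return max(moves, key=lambda m: totals[m] / len(helpers))
```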

Instead of having one algorithm learn one complex problem, the AI distributes learning over many smaller algorithms, each tackling simpler problems, Maluuba says in a video. This research could be applied to other highly complex problems, like financial trading, according to the company.

But it’s worth noting that since more than 100 algorithms are being used to tell Ms. Pac-Man where to move and win the game, this technique is likely to be extremely computationally intensive.