How GANs can turn AI into a massive force


 

Deep learning models can already achieve state-of-the-art results in some applications, but their capabilities are still limited. Unlike humans, deep learning models struggle to generalize beyond even minor changes in their inputs, and so they can only be applied to specific, narrowly defined tasks.

Consider this exchange, from what might be the most sophisticated negotiation software on the planet, between two AI agents developed at Facebook:

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

At first, they were speaking in plain old English, but the researchers realized they had forgotten to include a reward for sticking to the language. So the AI agents began to diverge, eventually rearranging legible words into seemingly nonsensical (but, from their perspective, highly efficient) sentences. They invented their own codewords, abbreviations, and structures.

This phenomenon is observed again and again and again.

A vanguard AI technology that can learn, recognize, and generate information at a nearly human level doesn’t exist yet, but we have taken steps in that direction.

What are generative adversarial networks (GANs)?

Generally intelligent systems must be able to generalize from limited data and learn causal relationships. In 2014, Ian Goodfellow, now a research scientist at Google Brain, proposed generative adversarial networks (GANs) as an alternative unsupervised machine learning method, aimed at addressing many of the pain points of existing methods.

GANs consist of two deep neural networks: a generator and a discriminator. The generator’s goal is to create data samples that are indistinguishable from real ones. The discriminator’s goal is to identify which data samples are real and which are the generator’s fakes.

These two networks compete against each other in a zero-sum game (i.e. one’s loss is the other’s gain). Through this competition, both networks become stronger in a relatively short period of time.

(Image: the GAN setup, with the generator and discriminator pitted against each other)

Backpropagation is used to update the model parameters and train the neural networks. Over time, the networks learn many features of the provided data. To create realistic forged samples, the generator needs to learn the data’s features and patterns, while the discriminator does the same to correctly distinguish between real and fake samples.
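To make this concrete, here is a minimal sketch of that adversarial training loop in PyTorch on a toy one-dimensional data distribution. The framework choice, network sizes, hyperparameters, and toy data are illustrative assumptions, not a production GAN.

```python
# Minimal GAN training sketch (illustrative hyperparameters, toy 1-D data).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 1

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=64):
    # Stand-in for real data: samples from a Gaussian around 3.0.
    return 3.0 + 0.5 * torch.randn(n, data_dim)

for step in range(5000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The same two-step loop (update the discriminator, then the generator) underlies the larger image-generating GANs discussed below; only the data and network architectures change.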

GANs are thus able to overcome the above weaknesses by training (i.e. playing) neural networks against each other. The networks learn from each other, which requires less data, and they eventually perform better across a broader range of problems.

Applications of GANs

There are several types of GANs, and some of their most obvious applications include high-resolution or interactive image generation/blending, image inpainting, image-to-image translation, abstract reasoning, semantic segmentation, video generation, and text-to-image synthesis, among others.

The video game industry is the first area of entertainment to start seriously experimenting with AI-generated raw content. There is a huge cost incentive to invest in video game development automation, given the US$300 million+ budgets of modern AAA video games.

GANs have also been used for text, with less success: a bot developed to speak like Friedrich Nietzsche started to speak in a manner similar to the philosopher, but the sentences did not make sense. GANs for voice applications can render a given text string in a life-like voice from approximately 20 minutes of voice samples, as in these popular impersonations of American presidents Donald Trump and Barack Obama. In the near future, videos could likely be generated just by providing a script.

Goodfellow and his colleagues used GANs for image generation, recognition, and classification by teaching one of the networks to create images of handwritten digits that humans were not able to distinguish from real ones. They also trained a neural network to create images of objects, which humans could only differentiate from real ones 78.7 percent of the time. Below are some sample images of faces created entirely by deep convolutional GANs.
(Image: face samples generated by deep convolutional GANs)

Despite all the above achievements, GANs still have weaknesses:

  • Instability (the generator and discriminator losses keep oscillating) and non-convergence of the objective function to an optimum
  • Mode collapse (this happens when the generator doesn’t produce diverse images or information)
  • The possibility that either the generator or the discriminator becomes too strong compared to the other during training
  • The possibility that either the generator or the discriminator never learns beyond a certain point

An existential threat

Do GANs and AI in general pose an existential threat to humanity? Elon Musk thinks so. Since 2014, he has been advocating the adoption of AI regulation by authorities around the world. Recently, he reiterated the urgent need to be proactive about regulation.

“AI is a fundamental risk to the existence of human civilization,” Musk recently told US politicians.

His concerns stem from the rapid developments related to GANs, which might push humanity toward the inception of artificial general intelligence. While AI regulations may serve as safeguards, AI is still far from the fictitious depictions seen frequently in Hollywood sci-fi movies.

(By the way, Facebook ultimately opted to require its negotiation bots to speak in plain old English.)


This article originally appeared on Tech in Asia.

Reinforcement learning and its new frontiers

RL’s origins and historical context

RL copies a very simple principle from nature. The psychologist Edward Thorndike documented it more than 100 years ago. Thorndike placed cats inside boxes from which they could escape only by pressing a lever. After a considerable amount of pacing around and meowing, the animals would eventually step on the lever by chance. Once they learned to associate this behaviour with the desired outcome, they escaped with increasing speed.

Some of the earliest AI researchers believed that this process might be usefully reproduced in machines. In 1951, Marvin Minsky, a student at Harvard who would become one of the founding fathers of AI, built a machine that used a simple form of reinforcement learning to mimic a rat learning to navigate a maze. Minsky’s Stochastic Neural Analogy Reinforcement Computer (SNARC) consisted of dozens of tubes, motors, and clutches that simulated the behaviour of 40 neurons and synapses. As a simulated rat made its way out of a virtual maze, the strength of some synaptic connections would increase, thereby reinforcing the underlying behaviour.

There were few successes over the next few decades. In 1992, Gerald Tesauro demonstrated a program that used the technique to play backgammon. It became skilled enough to rival the best human players, a landmark achievement in AI. But RL proved difficult to scale to more complex problems.

In March 2016, however, AlphaGo, a program trained using RL, won against one of the best Go players of all time, South Korea’s Lee Sedol. This milestone event reopened the Pandora’s box of RL research. It turns out the key to strong RL is to combine it with deep learning.

Current usage and major methods of RL

Thanks to current RL research, computers can now automatically learn to play ATARI games and beat world champions at Go, simulated quadrupeds are learning to run and leap, and robots are learning to perform complex manipulation tasks that defy explicit programming.

However, while RL’s advances have accelerated, progress has not been driven as much by new ideas or additional research as simply by more data, processing power, and infrastructure. In general, there are four separate factors that hold back AI:

  1. Processing power (the obvious one: Moore’s Law, GPUs, ASICs),
  2. Data (in a specific form, not just somewhere on the internet – e.g. ImageNet),
  3. Algorithms (research and ideas, e.g. backprop, CNN, LSTM), and
  4. Infrastructure (Linux, TCP/IP, Git, AWS, TensorFlow, ...).

Similarly for RL: just as in computer vision the 2012 AlexNet was essentially a deeper and wider version of the convolutional neural networks (CNNs) of the 1990s, ATARI’s Deep Q-Learning is an implementation of the standard Q-Learning algorithm with function approximation, where the function approximator is a CNN. AlphaGo uses Policy Gradients with Monte Carlo Tree Search (MCTS).
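For reference, the tabular Q-Learning update that Deep Q-Learning approximates with a CNN looks roughly like the sketch below; the environment interface (states, actions, rewards) is assumed, and the hyperparameters are illustrative.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def act(state, actions):
    # Epsilon-greedy action selection over the current value estimates.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # Standard Q-Learning target: r + gamma * max_a' Q(s', a').
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Deep Q-Learning keeps this same target but replaces the lookup table with a CNN that maps raw pixels to action values.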

RL’s most effective methods vs. human learning

Generally, RL approaches can be divided into two core categories. The first focuses on searching for optimal mappings (policies) that perform well on the problem of interest; genetic algorithms, genetic programming, and simulated annealing have commonly been employed in this class of RL approaches. The second is to estimate the utility of taking an action in a given state via statistical techniques or dynamic programming methods, such as TD(λ) and Q-learning. To date, RL has been successfully applied in many complex real-world applications, including autonomous helicopters, humanoid robotics, autonomous vehicles, etc.

Policy Gradients (PGs), one of RL’s most widely used methods, have been shown to work better than Q-Learning when tuned well. PGs are preferred because there is an explicit policy and a principled approach that directly optimises the expected reward.
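As a rough illustration of what “directly optimising the expected reward” looks like, here is a minimal REINFORCE-style policy gradient sketch on a toy three-armed bandit. The softmax policy, reward distribution, baseline, and hyperparameters are illustrative assumptions, not a full RL setup.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hypothetical mean reward of each arm
theta = np.zeros(3)                      # policy parameters: one logit per arm
lr, baseline = 0.1, 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)               # sample an action from the policy
    r = rng.normal(true_means[a], 0.1)       # observe a noisy reward
    # REINFORCE: grad of log pi(a) w.r.t. theta is one_hot(a) - probs;
    # scale it by (reward - baseline) and ascend the expected reward.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += lr * (r - baseline) * grad_log_pi
    baseline += 0.01 * (r - baseline)        # running baseline to reduce variance

print(softmax(theta))  # should concentrate most probability on the best arm
```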

Before trying PGs (the cannon), it is recommended to first try the cross-entropy method (CEM) (the regular gun), a simple stochastic hill-climbing “guess and check” approach inspired loosely by evolution. And if you really need to or insist on using PGs for your problem, use a variation called TRPO, which usually works better and more consistently than vanilla PG in practice. The main idea is to avoid parameter updates that change the policy too dramatically, which is enforced by a constraint on the KL divergence between the distributions predicted by the old and new policies on the data.
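For comparison, here is a compact sketch of CEM’s “guess and check” loop. The `evaluate` function stands in for running an episode with a given parameter vector and returning its score; the population size, elite fraction, and toy objective are illustrative assumptions.

```python
import numpy as np

def cross_entropy_method(evaluate, dim, iters=50, pop=50, elite_frac=0.2, seed=0):
    """Sample parameters, keep the elite, refit the sampling distribution."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))    # "guess"
        scores = np.array([evaluate(s) for s in samples])   # "check"
        elite = samples[np.argsort(scores)[-n_elite:]]      # keep the best performers
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mean

# Toy usage: maximise a simple score (a stand-in for an episode return).
best = cross_entropy_method(lambda th: -np.sum((th - 2.0) ** 2), dim=5)
```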

PGs, however, have a few disadvantages: they typically converge to a local rather than a global optimum, and evaluating a policy is inefficient and high-variance. PGs also require a lot of training samples, take a long time to train, and are hard to debug when they don’t work.

PG is a fancy form of guess-and-check, where the “guess” refers to sampling rollouts from the current policy and encouraging actions that lead to good outcomes. This represents the state of the art in how we currently approach RL problems. But compare that to how a human might learn a game such as Pong. You show them the game and say something along the lines of “You’re in control of a paddle and you can move it up or down, and your goal is to bounce the ball past the other player”, and you’re set and ready to go. Notice some of the differences:

  • Humans communicate the task/goal in a language (e.g. English), but in a standard RL case, you assume an arbitrary reward function that you have to discover through environment interactions. It can be argued that if a human went into a game without knowing anything about the reward function, the human would have a lot of difficulty learning what to do, but PGs would be indifferent and would likely work much better.
  • A human brings in a huge amount of prior knowledge, such as elementary physics (concepts of gravity, constant velocity, etc.) and intuitive psychology. They also understand the concept of being “in control” of a paddle, and that it responds to UP/DOWN key commands. In contrast, algorithms start from scratch, which is simultaneously impressive (because it works) and depressing (because we lack concrete ideas for how not to).
  • PGs are a brute force solution, where the correct actions are eventually discovered and internalised into a policy. Humans build a rich, abstract model and plan within it.
  • PGs have to actually experience a positive reward, and experience it very often in order to eventually shift the policy parameters towards repeating moves that give high rewards. On the other hand, humans can figure out what is likely to give rewards without ever actually experiencing the rewarding or unrewarding transition.

In games or situations with frequent reward signals that require precise play, fast reflexes, and not much planning, PGs can quite easily beat humans. So once you understand the “trick” by which these algorithms work, you can reason through their strengths and weaknesses.

PGs don’t easily scale to settings where the huge amounts of exploration they require are difficult to obtain. Instead of requiring samples from a stochastic policy and encouraging the ones that get higher scores, deterministic policy gradients use a deterministic policy and get the gradient information directly from a second network (called a critic) that models the score function. This approach can in principle be much more efficient in settings with high-dimensional actions, where sampling actions provides poor coverage, but so far it seems empirically slightly finicky to get working.
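A compressed, DDPG-style sketch of that idea is shown below: the actor’s gradient is obtained directly by pushing its action through the critic. Replay buffers, target networks, and exploration noise are omitted, and the dimensions, architectures, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 4, 2   # illustrative dimensions

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(obs, act, reward, next_obs, done, gamma=0.99):
    """All arguments are float tensors: obs/next_obs (B, obs_dim), act (B, act_dim),
    reward/done (B, 1), with done encoded as 0.0 or 1.0."""
    # Critic: regress Q(s, a) toward the one-step bootstrapped target.
    with torch.no_grad():
        next_q = critic(torch.cat([next_obs, actor(next_obs)], dim=-1))
        target = reward + gamma * (1 - done) * next_q
    q = critic(torch.cat([obs, act], dim=-1))
    critic_loss = ((q - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient. The gradient flows from the critic
    # through the actor's chosen action, pushing actions toward higher Q-values.
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```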

There is also a line of work that tries to make the search process less hopeless by adding additional supervision. In many practical cases, for instance, one can obtain expert trajectories from a human. For example, AlphaGo first uses supervised learning to predict human moves from expert Go games, and the resulting human-mimicking policy is later fine-tuned with PGs on the “real” goal of winning the game.

RL’s new frontiers: MAS, PTL, evolution, memetics and eTL

There is another method called Parallel Transfer Learning (PTL), which aims to optimize RL in multi-agent systems (MAS). MAS are computer systems composed of many interacting, autonomous agents operating within an environment of interest for problem-solving. MAS have a wide array of applications in industrial and scientific fields, such as resource management and computer games.

In MAS, as agents interact with and learn from one another, the challenge is to identify suitable source tasks from multiple agents that contain mutually useful information to transfer. In conventional MAS (cMAS), which are suited to simple environments, the actions of each agent are pre-defined for the possible states in the environment. Normal RL methodologies have been used as the learning processes of cMAS agents through trial-and-error interactions in a dynamic environment.

In PTL, each agent broadcasts its knowledge to all other agents while deciding whose knowledge to accept based on the rewards reported by other agents versus the rewards it expects itself. Nevertheless, agents in this approach tend to infer incorrect actions in unseen circumstances or complex environments.
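A toy illustration of that accept-or-reject rule (not the published PTL algorithm): an agent adopts a peer’s broadcast action values only when the peer’s reported reward beats what the agent itself expects for that state. The data structures here are hypothetical stand-ins for an agent’s knowledge.

```python
def merge_broadcasts(own_q, own_expected_reward, broadcasts):
    """own_q: dict mapping (state, action) -> value for this agent.
    own_expected_reward: dict mapping state -> the reward this agent expects there.
    broadcasts: list of (peer_q, peer_reported_reward) pairs received from other agents."""
    for peer_q, peer_reward in broadcasts:
        for (state, action), value in peer_q.items():
            # Accept the peer's knowledge only if its reported reward
            # exceeds what this agent expects for that state.
            if peer_reward > own_expected_reward.get(state, 0.0):
                own_q[(state, action)] = max(own_q.get((state, action), 0.0), value)
    return own_q
```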

However, for more complex or changing environments, it is necessary to endow the agents with intelligence capable of adapting to the environment’s dynamics. A complex environment, almost by definition, implies complex interactions and necessitates a kind of learning in MAS that current RL methodologies are hard-pressed to deliver. The more recent machine learning paradigm of Transfer Learning (TL) was introduced as an approach for leveraging valuable knowledge from related and well-studied problem domains to enhance the problem-solving abilities of MAS in complex environments. Since then, TL has been successfully used to enhance RL tasks via methodologies such as instance transfer, action-value transfer, feature transfer, and advice exchanging (AE).

Most RL systems aim to train a single agent or a cMAS. The Evolutionary Transfer Learning framework (eTL) aims to develop intelligent and social agents capable of adapting to the dynamic environment of MAS and of more efficient problem solving. It is inspired by Darwin’s theory of evolution (natural selection plus random variation) and by the principles that govern the evolutionary knowledge transfer process. eTL constructs social selection mechanisms modelled after the principles of human evolution. It mimics natural learning and the errors introduced by the physiological limits of the agents’ ability to perceive differences, thereby generating “growth” and “variation” of the knowledge that agents hold, and thus exhibiting higher adaptability for complex problem solving. The essential backbone of eTL is the memetic automaton, which includes evolutionary mechanisms such as meme representation, meme expression, etc.

Memetics

 

The term “meme” can be traced back to Dawkins’ “The Selfish Gene”, where he defined it as “a unit of information residing in the brain and is the replicator in human cultural evolution.” Over the past few decades, the meme-inspired science of memetics has attracted increasing attention in fields including anthropology, biology, psychology, sociology, and computer science. In particular, one of the most direct and simplest applications in computer science for problem solving is the memetic algorithm. Further research into meme-inspired computational models resulted in the concept of the memetic automaton, which integrates memes into units of domain information useful for problem-solving. Recently, memes have been defined as transformation matrices that can be reused across different problem domains for enhanced evolutionary search. Just as genes serve as “instructions for building proteins”, memes carry “behavioural instructions”, constructing models for problem solving.

 

Memetics in eTL

 

Meme representation and meme evolution form the two core aspects of eTL; a meme then undergoes meme expression and meme assimilation. Meme representation concerns what a meme is, meme expression is how an agent expresses its stored memes as behavioural actions, and meme assimilation captures new memes by translating the corresponding behaviours into knowledge that blends into the agent’s mind-universe. The meme evolution processes (i.e. meme internal and meme external evolution) comprise the main behavioural learning aspects of eTL. Specifically, meme internal evolution denotes the process by which agents update their mind-universe via self-learning or personal grooming. In eTL, all agents undergo meme internal evolution by exploring the common environment simultaneously. During meme internal evolution, meme external evolution may occur to model the social interaction among agents, mainly via imitation, which takes place when memes are transmitted. Meme external evolution happens whenever the current agent identifies a suitable teacher agent via a meme selection process. Once the teacher agent is selected, meme transmission occurs to dictate how the agent imitates others. During this process, meme variation facilitates knowledge transfer among agents. Upon receiving feedback from the environment after performing an action, the agent then updates its mind-universe accordingly.

 

eTL implementation with learning agents

 

There are two implementations of learning agents that take the form of neurally inspired learning structures, namely FALCON and a backpropagation (BP) multilayer neural network. Specifically, FALCON is a natural extension of self-organizing neural models proposed for real-time RL, while BP is a classical multi-layer network that has been widely used in various learning systems.
  1. MAS with TL vs. MAS without TL: Most TL approaches outperform cMAS. This is because TL endows agents with the capacity to benefit from the knowledge transferred from better-performing agents, thus accelerating the agents’ learning rate and letting them solve the complex task more efficiently and effectively.
  2. eTL vs. PTL and other TL approaches: FALCON and BP agents with eTL outperform PTL and other TL approaches because, when deciding whether to accept information broadcast by others, agents in PTL tend to make incorrect predictions in previously unseen circumstances. Further, eTL also attains higher success rates than all AE models thanks to eTL’s meme selection operator, which fuses the “imitate-from-elitist” and “like-attracts-like” principles so as to give agents the option of choosing more reliable teacher agents than under the AE model.

Conclusions

While the popularisation of RL can be traced back to Edward Thorndike and Marvin Minsky, the principle itself is inspired by nature and has been with us humans since ages long gone. It is how we effectively teach children, and how we now want to teach our computer systems, whether real (neural networks) or simulated (MAS).

RL re-entered public consciousness and rekindled our interest in 2016, when AlphaGo beat Go champion Lee Sedol. Via its currently successful methodologies such as PGs and DQNs, RL has already contributed to, and continues to accelerate, make more intelligent, and optimise, humanoid robotics, autonomous vehicles, hedge funds, and other endeavours, industries, and aspects of human life.

However, what is it that optimises or accelerates RL itself? Its new frontiers are PTL, memetics, and a holistic eTL methodology inspired by natural evolution and the spreading of memes. This latter evolutionary (and revolutionary!) approach is governed by several meme-inspired evolutionary operators (implemented using FALCON and BP multi-layer neural networks), including the meme evolutions.

In terms of performance, eTL appears to outperform even state-of-the-art MAS TL systems such as PTL.

What future does RL hold? We don’t know. But the amount of research resources, experimentation and imaginative thinking will surely not disappoint us.

Reinforcement Learning vs. Evolutionary Strategy: combine, aggregate, multiply

A bird’s-eye view of the main ML algorithms

In statistics, we have descriptive and inferential statistics. ML deals with the same problems, and claims any problem where the solution isn’t programmed directly but is instead learned by the program. ML generally works by numerically minimising something: a cost function or error.

Supervised learning – You have labeled data: a sample of ground truth with features and labels. You estimate a model that predicts the labels using the features. Alternative terminology: predictor variables and target variables. You predict the values of the target using the predictors.

  • Regression. The target variable is numeric. Example: you want to predict the crop yield based on remote sensing data. Recurrent neural networks result in a “regression” since they usually output a number (a sequence or a vector) instead of a class (e.g. sentence generation, curve plotting). Algorithms: linear regression, polynomial regression, generalised linear models.
  • Classification. The target variable is categorical. Example: you want to detect the crop type that was planted using remote sensing data. Or Silicon Valley’s “Not Hot Dog” application. Algorithms: Naïve Bayes, logistic regression, discriminant analysis, decision trees, random forests, support vector machines, neural networks (NN) of many variations: feed-forward NNs, convolutional NNs, recurrent NNs.

Unsupervised learning – You have a sample with unlabeled information. No single variable is the specific target of prediction. You want to learn interesting features of the data (a short code sketch of a few of these building blocks follows the list below):

  • Clustering. Which of these things are similar? Example: group consumers into relevant psychographics. Algorithms: k-means, hierarchical clustering.
  • Anomaly detection. Which of these things are different? Example: credit card fraud detection. Algorithms: k-nearest-neighbor.
  • Dimensionality reduction. How can you summarise the data in a high-dimensional data set using a lower-dimensional dataset which captures as much of the useful information as possible (possibly for further modelling with supervised or unsupervised algorithms)? Example: image compression. Algorithms: principal component analysis (PCA), neural network auto-encoders.
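To make the taxonomy concrete, here is a small scikit-learn sketch that exercises one supervised and two unsupervised building blocks from the lists above on synthetic data; the datasets and model choices are illustrative.

```python
from sklearn.datasets import make_classification, make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Classification (supervised): predict a categorical target from features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))

# Clustering (unsupervised): group unlabeled points into three clusters.
Xc, _ = make_blobs(n_samples=500, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xc)

# Dimensionality reduction (unsupervised): keep two components that retain most variance.
X2 = PCA(n_components=2).fit_transform(X)
print("reduced shape:", X2.shape)
```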

Reinforcement Learning (Policy Gradients, DQN, A3C, ...) – You are presented with a game/environment that responds sequentially or continuously to your inputs, and you learn to maximise an objective through trial and error.

Evolutionary Strategy – This approach consists of maintaining a distribution over network weight values and having a large number of agents act in parallel using parameters sampled from this distribution. Each agent receives a score based on how well it performs; with these scores, the parameter distribution can be moved toward that of the more successful agents and away from that of the unsuccessful ones. By repeating this approach millions of times, with hundreds of agents, the weight distribution moves to a space that provides the agents with a good policy for solving the task at hand.
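A minimal numpy sketch of that ES loop: sample parameter perturbations, score them, and shift the distribution toward the better performers. The toy fitness function stands in for an agent’s episode return, and the population size, noise scale, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Stand-in for "run an episode with these network weights and return the score".
    return -np.sum((theta - 3.0) ** 2)

dim, pop, sigma, lr = 10, 200, 0.1, 0.02
theta = np.zeros(dim)                    # mean of the parameter distribution

for generation in range(300):
    noise = rng.standard_normal((pop, dim))                      # one perturbation per agent
    scores = np.array([fitness(theta + sigma * n) for n in noise])
    ranked = (scores - scores.mean()) / (scores.std() + 1e-8)    # normalise the scores
    # Move the distribution toward perturbations that scored well, away from the rest.
    theta += lr / (pop * sigma) * noise.T @ ranked

print(theta.round(2))   # should end up close to the optimum at 3.0
```

Note that only scores (not gradients) are exchanged, which is exactly the communication pattern that makes ES easy to parallelise.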

All the complex tasks in ML, from self-driving cars to machine translation, are solved by combining these building blocks into complex stacks.

Pros and cons of RL and ES

One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behaviour.

RL is known to be unstable or even to diverge when a nonlinear function approximator such as a NN is used to represent the action-value (also known as Q) function. This instability has several causes: the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and therefore change the data distribution, and the correlations between the action-values and the target values.
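The standard mitigations, as popularised by DQN, are experience replay (which breaks the correlations in the observation sequence) and a separate, slowly updated target network (which decouples the action-values from their targets). A minimal sketch of both ideas follows; the dimensions, hyperparameters, and buffer contents are illustrative.

```python
import copy
import random
from collections import deque

import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)    # frozen copy: targets change slowly
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=100_000)       # stores (obs, act, rew, next_obs, done) float tensors,
                                     # with done encoded as 0.0 or 1.0

def train_step(batch_size=32, gamma=0.99):
    if len(replay) < batch_size:
        return
    # Sampling uniformly from the buffer breaks the temporal correlations
    # present in the raw sequence of observations.
    batch = random.sample(replay, batch_size)
    obs, act, rew, next_obs, done = (torch.stack(x) for x in zip(*batch))
    with torch.no_grad():
        # Targets come from the frozen network, decoupling them from the values being updated.
        target = rew + gamma * (1 - done) * target_net(next_obs).max(dim=1).values
    q = q_net(obs).gather(1, act.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()

def sync_target():
    # Call only every few thousand steps so the targets move slowly.
    target_net.load_state_dict(q_net.state_dict())
```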

RL’s other challenge is generalisation. In typical deep RL methods, this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable.

Whereas RL methods such as A3C need to communicate gradients back and forth between workers and a parameter server, ES only requires fitness scores and high-level parameter distribution information to be communicated. It is this simplicity that allows the technique to scale up in ways current RL methods cannot. However, in situations with richer feedback signals, things don’t go so well for ES.

Contextualising and combining RL and ES

Appealing to nature for inspiration in AI can sometimes be seen as a problematic approach. Nature, after all, is working under constraints that computer scientists simply don’t have. If we look at intelligent behaviour in mammals, we find that it comes from a complex interplay of two ultimately intertwined processes: inter-life learning and intra-life learning. Roughly speaking, these two approaches in nature can be compared to the two approaches in neural network optimisation. ES, for which no gradient information is used to update the organism, is related to inter-life learning. Likewise, gradient-based methods (RL), for which specific experiences change the agent in specific ways, can be compared to intra-life learning.

The techniques employed in RL are in many ways inspired directly by the psychological literature on operant conditioning that came out of animal psychology. (In fact, Richard Sutton, one of the two founders of RL, received his Bachelor’s degree in psychology.) In operant conditioning, animals learn to associate rewarding or punishing outcomes with specific behaviour patterns. Animal trainers and researchers can manipulate this reward association in order to get animals to demonstrate their intelligence or behave in certain ways.

The central role of prediction in intra-life learning changes the dynamics quite a bit. What was before a somewhat sparse signal (occasional reward) becomes an extremely dense signal. At each moment, mammalian brains are predicting the results of the complex flux of sensory stimuli and actions in which the animal is immersed. The outcome of the animal’s behaviour then provides a dense signal to guide the change in predictions and behaviour going forward. All of these signals are put to use in the brain to improve predictions (and consequently the quality of actions). If we apply this way of thinking to learning in artificial agents, we find that RL isn’t somehow fundamentally flawed; rather, the signal being used isn’t nearly as rich as it could (or should) be. In cases where the signal can’t be made richer (perhaps because it is inherently sparse, or has to do with low-level reactivity), it is likely that learning through a highly parallelizable method such as ES is the better option.

Combining many

It is clear that for many reactive policies, or situations with extremely sparse rewards, ES is a strong candidate, especially if you have access to the computational resources that allow for massively parallel training. On the other hand, gradient-based methods using RL or supervision are going to be useful when a rich feedback signal is available and we need to learn quickly with less data.

An extreme example combines more than just ES and RL: Microsoft’s Maluuba is an illustrative case, having used many algorithms to beat the game Ms. Pac-Man. When the agent (Ms. Pac-Man) starts to learn, it moves randomly; it knows nothing about the game board. As it discovers new rewards (the little pellets and fruit Ms. Pac-Man eats), it begins placing little algorithms in those spots, which continuously learn how best to avoid ghosts and get more points based on Ms. Pac-Man’s interactions, according to the Maluuba research paper.

As the 163 potential algorithms are mapped, each continually sends the movement it thinks would generate the highest reward to the agent, which averages the inputs and moves Ms. Pac-Man. Each time the agent dies, all the algorithms process what generated rewards. These helper algorithms were, however, carefully crafted by humans to understand how to learn.
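A toy sketch of that aggregation step (not Maluuba’s actual implementation): each helper reports its estimated reward for each move, and the top-level agent averages the estimates and takes the best move. The move set and numbers are hypothetical.

```python
import numpy as np

MOVES = ["up", "down", "left", "right"]

def choose_move(helper_preferences):
    """helper_preferences: one array per helper algorithm,
    giving that helper's estimated reward for each of the four moves."""
    avg = np.mean(helper_preferences, axis=0)   # aggregate by averaging the estimates
    return MOVES[int(np.argmax(avg))]

# Toy usage with three hypothetical helpers:
prefs = [np.array([0.1, 0.0, 0.5, 0.2]),
         np.array([0.3, 0.1, 0.4, 0.0]),
         np.array([0.0, 0.2, 0.6, 0.1])]
print(choose_move(prefs))   # -> "left"
```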

Instead of having one algorithm learn one complex problem, the AI distributes learning over many smaller algorithms, each tackling simpler problems, Maluuba says in a video. This research could be applied to other highly complex problems, like financial trading, according to the company.

But it’s worth noting that since more than 100 algorithms are being used to tell Ms. Pac-Man where to move and win the game, this technique is likely to be extremely computationally intensive.