Consciousness, quantum physics and Buddhism

What is consciousness?

And how do I really know you are conscious? This is the problem of solipsism. I know your brain is very similar to mine: you look like a human, sound like one and give every impression of having a brain like other humans. By inference from analogy, then, it is perfectly reasonable to conclude that you too are conscious.

Some 10,000 laboratories worldwide are pursuing distinct questions about the brain and consciousness, across a myriad of scales and in a dizzying variety of animals and behaviours. According to many computer scientists, consciousness is a characteristic that emerges from technological development. Some believe that consciousness involves accepting new information, storing and retrieving old information, and cognitively processing it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness: they’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.

Consciousness could be explained by “integrated information theory,” which asserts that consciousness is a product of structures, such as a brain, that can store large amounts of information, have a critical density of interconnections and thus enable many informational feedback loops. The theory provides a means to assess degrees of consciousness in people, in animals (to a lesser degree than humans) and even in machines and programs (for example, IBM Watson and Google’s self-taught visual system). It proposes a way to measure consciousness as a single value called Φ (phi), and helps explain why certain relatively complicated neural structures don’t seem critical for consciousness. For example, the cerebellum, which encodes information about motor movements, contains a huge number of neurons but doesn’t appear to integrate the diverse range of internal states the way the prefrontal cortex does.

The more distinctive the information in a system, and the more specialised and integrated the system is, the higher its Φ – and anything with Φ > 0 possesses at least a shred of consciousness. Over the past few years, this theory has become increasingly influential and is championed by the eminent neuroscientist Christof Koch. The problem is that even though Φ promises to be precise, it has so far been impossible to use for practical calculations on human or animal brains, because an unthinkably large number of possibilities would have to be evaluated.
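As a crude illustration of the “integration” idea – emphatically not the real IIT Φ calculus, which is far more involved – one can measure how much more information a whole system carries than its parts taken separately (the “total correlation”). In this hypothetical toy, two of three binary units are perfectly correlated and the third is independent, so the whole exceeds the sum of its parts by exactly the one bit of correlation:

```python
import itertools
import math

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {state: prob}."""
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

def marginal(p, idx):
    """Marginal distribution over the units listed in idx."""
    m = {}
    for state, prob in p.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + prob
    return m

# Joint distribution over three binary units:
# a and b are perfectly correlated; c is an independent fair coin.
joint = {}
for a, b, c in itertools.product([0, 1], repeat=3):
    p_ab = 0.5 if a == b else 0.0
    joint[(a, b, c)] = p_ab * 0.5

# "Integration" as total correlation: part entropies minus whole entropy.
parts = sum(entropy(marginal(joint, [i])) for i in range(3))
whole = entropy(joint)
print(parts - whole)  # prints 1.0 -- the one bit shared by a and b
```

The independent unit c contributes nothing to the measure, which is the intuition behind why the loosely coupled cerebellum scores low despite its huge neuron count.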

Accordingly, consciousness is a property of complex systems that have particular “cause-effect” connections. If you were to build a computer with the same circuitry as the brain, that computer would also have consciousness associated with it: it would feel like something to be this computer, just as it feels like something to be a human. Hofstadter and Dennett’s The Mind’s I collects essays about the mind (an emergent property of brain function) and about how feedback loops are essential for this emergence.

Another viewpoint on consciousness comes from quantum theory, our most profound and thorough theory about the nature of things. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. The Copenhagen view takes consciousness as a given and makes no attempt to derive it from physics: consciousness exists by itself but requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from dissipative systems, according to physicist Jeremy England. It agrees with the neuroscientists’ view that the processes of the mind are identical to states/processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation.

Modern quantum physics views of consciousness have parallels in ancient philosophy. For example, the Copenhagen view is similar to the theory of mind in Vedanta, in which consciousness is the fundamental basis of reality, on par with the physical universe. On the other hand, England’s theory resembles Buddhism, as Buddhists hold that mind and consciousness arise out of emptiness or nothingness.

Strong evidence in favour of the Copenhagen view, some argue, is the life of Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof, across different areas of mathematics that were well ahead of their time. Furthermore, the methods by which he found these formulas remain elusive. He claimed they were revealed to him by a goddess while he was asleep.

Thinking more deeply about consciousness leads to the question of how matter and mind influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change probabilities in the evolution of quantum processes, and thus of life? The act of observation can freeze and even influence atoms’ movements, as experiments showed in 2015. This may very well be a clue to how matter and mind interact.


Creepy or cool? Some AI breakthroughs and the formula of life

You keep hearing that AI is bad and that once AGI arrives, it will kill us off – the paperclip-maximizer thought experiment is a telling example. Check out the tidbits below pushing the envelope.


Historically, people have lined up to attend concerts of famous artists. Now there is AI that generates pure-gold jazz, or what sounds like a mix of jazz and classical. Would you line up to hear these pieces? Would you still line up if you didn’t know whether it was an algorithm or a human?


Do you like Harry Potter? What about this Harry Potter? This algorithm learnt from the first few chapters of J.K. Rowling’s Harry Potter and created a novel of its own. Forget about J.K. Rowling, move on.


TV series are great. Here is a script for Silicon Valley, generated by AI. Or a credible-looking video generated from a few dozen words (and some prior video training). Hollywood took heed.

Human behaviour

MIT researchers created an AI system that predicts human behaviour by approximating human “intuition” from myriads of data, and pitted it against human teams in data science competitions. The algorithm didn’t get the top score, but it beat 615 of the 906 human teams competing. In two of the competitions, it created models that were 94% and 96% as accurate as the winning teams’. Whereas the teams of humans required months to build their prediction algorithms, this algorithm trained for 2–12 hours.


Once virtual Adam and Eve (AI bots) were done with apples, they ate Stan, an innocent bystander (another AI bot) that happened to look like an apple.

Formula of life

OK, all of the above is creepy, cool or scary, depending on your knowledge, interests and approach to life. But could these AI concepts eventually yield actual or natural life forms?

Even the Artificial Life community acknowledges that the definition of “life” is contentious.

What Darwin’s theory says, and what we believe, is that there is a clear difference between living organisms (in how they come to be and evolve) and everything else (from water vortexes to AI systems to the coastlines of England). Popular hypotheses credit a primordial soup, a big bang and a colossal stroke of luck for the creation of life. Erwin Schrödinger framed life merely as physical processes in his treatise “What is Life?”.

But until now we have had a hard time explaining how open thermodynamic systems like our universe, and even Earth, evolved, and how lifeforms evolved in them – we only have good answers for closed and weakly open systems. Until now.

Jeremy England of MIT has given this a thermodynamic framing: it’s all about entropy (life maintains local order by dissipating ever more energy into its surroundings). Carbon is not God. In his view, there is one essential difference between living things and inanimate chunks of carbon atoms: the former tend to be much better at capturing energy from their environment and dissipating that energy as heat. He has a mathematical result indicating that when a group of atoms is driven by an external source of energy (like the sun) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself to dissipate increasingly more energy. This implies that under certain conditions, matter may acquire the key physical attributes associated with life.
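For reference, the central bound of England’s 2013 paper on the statistical physics of self-replication takes roughly the following form (stated here from memory – consult the paper for the precise formulation). For a driven transition between macrostates I and II:

```latex
% beta: inverse temperature of the surrounding bath
% <Delta Q>: average heat released into the bath during I -> II
% pi(...): transition probabilities between the macrostates
% Delta S_int: change in the system's internal entropy
\beta \,\langle \Delta Q \rangle_{I \to II}
  \;+\; \ln \frac{\pi(II \to I)}{\pi(I \to II)}
  \;+\; \Delta S_{\mathrm{int}} \;\ge\; 0
```

The more heat a transition dissipates, the more irreversible it is allowed to be – which is exactly the license a self-replicating structure needs to keep building local order.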

Now back to the AI craze above. If we could engineer AI systems that locally decrease entropy as per Jeremy England’s prescriptions, the near future could see a new Cambrian explosion of artificially constructed forms of life – songs, movies, fiction… and perhaps new and better beings!

Here are more creepy/cool AI applications or here. Enjoy!

P.S. Ralph Merkle thinks of Bitcoin as life:

Bitcoin is the first example of a new form of life. It lives and breathes on the internet. It lives because it can pay people to keep it alive. It lives because it performs a useful service that people will pay it to perform. … It can’t be stopped. It can’t even be interrupted. If nuclear war destroyed half of our planet, it would continue to live, uncorrupted.

Blockchain + AI = ?

What happens when two major technological trends find a synergy or overlap in usage or co-development?

We have blockchain’s promise of near-frictionless value exchange and AI’s ability to analyze massive amounts of data. Joining the two could mark the beginning of an entirely new paradigm: AI agents that govern the chain could maximize security while keeping the ledger immutable. With more companies and institutions adopting blockchain-based solutions, and more complex, potentially critical data stored in distributed ledgers, there is a growing need for sophisticated analysis methods, which AI can provide.

The combination of AI and blockchain is fueling the onset of the “Fourth Industrial Revolution” by reinventing economics and information exchange.

1. Precision medicine

Google DeepMind is developing an “auditing system for healthcare data”. Blockchain will enable the system to remain secure and shareable, while AI will allow medical staff to obtain analytics on medical predictions drawn from patient profiles.

2. Wealth and investment management

State Street is issuing blockchain-based indices: data is stored and secured using blockchain and analyzed using AI. It reports that 64% of the wealth and asset managers it polled expect their firms to adopt blockchain in the next five years, and 49% expect to employ AI. As of January 2017, State Street had 10 blockchain proofs of concept in the works.

3. Smart urbanity

To supply energy, distributed blockchain technology can provide transparent and cost-effective transactions between producers and consumers, while machine learning algorithms home in on transactions to estimate pricing. Green-friendly AI and blockchain help reduce energy waste and optimize energy trade. For example, an AI system governing a building can oversee energy use by factoring in the presence and number of residents, the season and traffic information.

4. Legal diamonds

Everledger uses blockchain technology to tackle fraud in the diamond industry, and is deploying IBM Watson’s cognitive analytics to heavily “cross-check” regulations, records, supply-chain and IoT data in the blockchain environment.

5. More efficient science

The “file-drawer problem” in academia is that researchers don’t publish “non-result” experiments. Duplicated experiments and gaps in knowledge follow, hampering scientific discourse. To resolve this, experimental data can be stored in a publicly accessible blockchain. Data analytics could also help identify things like how many times the same experiment has already been run, or the probable outcome of a proposed experiment.

There are forecasts that AI will play a big role in science once “smart contracts” transacted over blockchains require smarter “nodes” that function in a semi-autonomous way. Smart contracts (essentially, pieces of software) simulate, enforce and manage contractual agreements, and can have wide-ranging applications as academics embrace the blockchain for knowledge transfer and development.
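A smart contract really is just a piece of software whose rules execute themselves. A minimal sketch of the idea in plain Python – not a real blockchain VM, and the class and method names here are invented purely for illustration:

```python
# Toy escrow "smart contract": funds held by code are released
# automatically once the agreed conditions are met -- no human
# clearing agent involved.

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False
        self.balances = {buyer: 0, seller: 0}

    def deposit(self, party, amount):
        # Only the buyer can fund the contract, only with the agreed sum.
        if party == self.buyer and amount == self.amount:
            self.funded = True

    def confirm_delivery(self, party):
        # Only the buyer can confirm that the goods arrived.
        if party == self.buyer:
            self.delivered = True
        self._settle()

    def _settle(self):
        # Enforcement logic: money moves only when both conditions hold.
        if self.funded and self.delivered:
            self.balances[self.seller] += self.amount

c = EscrowContract("alice", "bob", 100)
c.deposit("alice", 100)
c.confirm_delivery("alice")
print(c.balances["bob"])  # prints 100
```

On a real chain the same logic would run on every node, so no single party could tamper with the settlement rules.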

6. IP rights management

Digitalization has brought complicated digital rights into IP management, and once AI learns the rules of the game, it can identify actors who break IP laws. As for IP contract management, in the music (and other content) industries blockchain enables immediate payment to artists and authors. One artist recently suggested the blockchain could help musicians simplify creative collaboration and monetization. Ujo Music is using the Ethereum blockchain platform for song distribution.

7. Computational finance

Smart contracts could take center stage where transparent information is crucial for trust in financial services. Financial transactions may no longer rely on a human “clearing agent”, as automated ones perform better and faster. But since confidence in transactions remains dependent on people, AI can help monitor human emotions and predict the optimal trading environment. Thus “algotrading” can be powered by algorithms that trade based on investment patterns correlated with emotions.

8. Data and IoT management

Organizations are increasingly looking to adopt blockchain technologies for alternative data storage. And with heaps of data distributed across blockchain ledgers, the need for AI-driven data analytics is growing. IBM Watson merged blockchain with AI via the Watson IoT group: an artificially intelligent blockchain lets the parties involved collectively agree on the state of a device and make decisions based on logic coded into a smart contract. Using blockchain tech, artificially intelligent software solutions can be deployed autonomously. Risk management and self-diagnosis are other use cases being explored.

9. Blockchain-As-A-Service software

Microsoft is integrating “BaaS modules” (based on the public Ethereum blockchain) into Azure, so that users can create test environments. Blockchains are cheaper to create and test there, and in Azure they come with reusable templates and artifacts.

10. Governance 3.0

Blockchain and AI could contribute to the development of direct democracy. They can transfer large troves of data globally, tracing e-voting procedures and displaying them publicly so that citizens can engage in real time. Democracy Earth Foundation aspires to “hack democracy” by advocating open-source software, peer-to-peer networks and smart contracts. The organization also aims to fight fake identities and reclaim individual accountability in the political sphere. IPDB is a planetary-scale blockchain database built on BigchainDB: a ready-to-use public network with a focus on strong governance.

Is self-play the future of (most) AI?

Go is a game whose number of possible positions – about 10^170, far more than chess – is greater than the number of atoms in the universe.

AlphaGo, the predecessor to AlphaGo Zero, crushed 18-time world champion Lee Sedol and the reigning world number one player, Ke Jie. After beating Jie earlier this year, DeepMind announced AlphaGo was retiring from future competitions.

Now an even stronger competitor, AlphaGo Zero, could beat the version of AlphaGo that faced Lee Sedol after training for just 36 hours, and beat its predecessor by a score of 100–0 after 72 hours. Interestingly, AlphaGo Zero didn’t learn from observing humans playing against each other – unlike AlphaGo – but instead relies on an old technique in reinforcement learning: self-play. Self-play means agents can learn behaviours that are not hand-coded on any reinforcement learning task, but the sophistication of the learned behaviour is limited by the sophistication of the environment: for an agent to learn intelligent behaviour, the environment has to be challenging, but not too challenging.

Essentially, self-play means that AlphaGo Zero plays against itself. During training, it sits on each side of the table: two instances of the same software face off against each other. A match starts with the game’s black and white stones scattered on the board, placed following a random set of moves from their starting positions. The two computer players are given the list of moves that led to the positions of the stones, and then are each told to come up with multiple chains of next moves along with estimates of the probability they will win by following through each chain. The next move from the best possible chain is then played, and the computer players repeat the above steps, coming up with chains of moves ranked by strength. This repeats over and over, with the software feeling its way through the game and internalizing which strategies turn out to be the strongest.
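The loop described above can be sketched in miniature. Below is a toy analogue – nowhere near AlphaGo Zero’s neural-network-guided tree search – in which two copies of the same tabular agent play the subtraction game “Nim-21” against each other and learn only from the final result of each game:

```python
import random

random.seed(0)

# Toy self-play: players alternately remove 1-3 stones from a pile of 21;
# whoever takes the last stone wins. One shared value table plays both
# sides, so the agent literally trains against itself.

N, ACTIONS = 21, (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in ACTIONS if a <= s}
visits = {k: 0 for k in Q}
EPS = 0.1  # exploration rate

def best(s):
    """Greedy move in state s according to the learned values."""
    return max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])

for episode in range(20000):
    s, trace = N, []
    while s > 0:
        legal = [a for a in ACTIONS if a <= s]
        a = random.choice(legal) if random.random() < EPS else best(s)
        trace.append((s, a))
        s -= a
    ret = 1.0  # the side that just moved took the last stone and won
    for (s, a) in reversed(trace):
        visits[(s, a)] += 1
        Q[(s, a)] += (ret - Q[(s, a)]) / visits[(s, a)]  # running mean return
        ret = -ret  # zero-sum game: flip perspective at every ply

print(best(5), best(6), best(7))
```

With enough games the greedy policy rediscovers, entirely on its own, the classic winning strategy of always leaving the opponent a multiple of four stones (take 1 from 5, take 2 from 6, take 3 from 7) – a tiny echo of Zero internalizing fuseki with no strategy hand-coded.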

AlphaGo Zero started from scratch, with no experts guiding it, and it is much more efficient: it uses a single computer and four of Google’s custom TPU chips to play matches, compared with AlphaGo’s several machines and 48 TPUs. Because Zero didn’t rely on human gameplay, and played a smaller number of matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined the value and policy neural networks into one, and was trained on 64 GPUs and 19 CPUs by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.

AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex. This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these. The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.
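To make the “many layers of simple neuron-like units” concrete, here is a deliberately tiny two-layer network learning XOR – a toy illustration of hierarchical feature extraction (the hidden layer builds local features, the output layer combines them globally), nothing like AlphaGo’s scale:

```python
import numpy as np

np.random.seed(0)

# XOR is not linearly separable, so it needs at least one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = np.random.randn(2, 8); b1 = np.zeros(8)   # layer 1: local features
W2 = np.random.randn(8, 1); b2 = np.zeros(1)   # layer 2: global combination

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = np.tanh(X @ W1 + b1)                   # hidden representation
    out = sigmoid(h @ W2 + b2)                 # network prediction
    d_out = out - y                            # cross-entropy gradient
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)        # backprop through tanh
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.3 * grad                    # gradient descent step

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())
```

Stack many more such layers and you get the “deep” in deep learning.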

But to survive in the world, animals need not only to recognise sensory information but also to act on it. Generations of scientists have studied how animals learn to take a series of actions that maximise their reward. This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves an agent’s actions by maximising its expectation of future reward. It is thus that AlphaGo Zero, without human intervention, even discovered for itself classic Go moves such as fuseki opening tactics and life-and-death sequences.
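A minimal concrete instance of temporal difference learning is TD(0) estimating state values on the textbook five-state random walk (a standard teaching example, not DeepMind’s code): each update nudges a state’s value toward the reward received plus the estimated value of the next state.

```python
import random

random.seed(1)

# Five non-terminal states (0..4) between two terminals; each step moves
# left or right with equal probability. Falling off the right end pays
# reward +1, the left end pays 0. True values are 1/6, 2/6, ..., 5/6.

V = [0.5] * 5      # initial value estimate for each non-terminal state
ALPHA = 0.05       # learning rate

for episode in range(5000):
    s = 2          # every episode starts in the middle state
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0:                        # left terminal: reward 0
            V[s] += ALPHA * (0.0 - V[s])
            break
        if s2 > 4:                        # right terminal: reward +1
            V[s] += ALPHA * (1.0 - V[s])
            break
        # TD(0): bootstrap from the current estimate of the next state
        V[s] += ALPHA * (V[s2] - V[s])
        s = s2

print([round(v, 2) for v in V])  # approaches [0.17, 0.33, 0.5, 0.67, 0.83]
```

The agent never waits for the episode to end before learning – each step’s “temporal difference” is a training signal in itself, which is what makes the method scale to long games like Go.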

So are there problems to which the current algorithms can be fairly immediately applied?

One example may be optimisation in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimising cost. As long as the possibilities can be accurately simulated, self-play-based algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans.

Researchers at OpenAI have already experimented with the same technique to train bots to play Dota 2, and have published a paper on competitive self-play. There are other experiments, such as this one, showing that self-taught AI is better at predicting heart attacks.

AlphaGo Zero’s success bodes well for AI’s mastery of games. But it would be a mistake to believe that we’ve learned something general about thinking and about learning for general intelligence. This approach won’t work in more ill-structured problems like natural-language understanding or robotics, where the state space is more complex and there isn’t a clear objective function.

Unsupervised training is the key to ultimately creating AI that can think for itself, but more research is needed outside of the confines of board games and predefined objective functions before computers can really begin to think outside the box.

DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, quantum chemistry and material design.

Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. Unfortunately, there is as yet no known way to interrogate the networks to read out what these key ideas are. If we learned the game of Go purely through supervised learning, the best we could hope for is to be as good as the human we imitate. Through self-play (and thus unsupervised learning), one can learn something completely novel and create or catalyse emergence.

DeepMind’s self-play approach is not the only way to push the boundaries of AI. Gary Marcus, a neuroscientist at NYU, co-founded Geometric Intelligence (since acquired by Uber) to explore learning techniques that extrapolate from a small number of examples, inspired by how children learn. He claims these techniques outperform both Google’s and Microsoft’s deep-learning algorithms.

why do we get old?

There are many theories of aging (hormonal, wear-and-tear, etc.), but only the overmineralization theory (video) explains why humans age at three different speeds:

  1. no biological aging during childhood, characterized by the use of calcium, iron and copper to make new bones, red blood cells and collagen;
  2. accumulation of minerals once childhood growth ceases, and progressive aging, as evidenced by the buildup of lipofuscin;
  3. slight decline in the rate of aging in late life, which has been correlated with reaching a steady state of minerals.

Harvard research postulates that the “root cause of aging” lies in sirtuins.

high dose of “magic mushrooms” increases personality openness

A single high dose of the hallucinogen psilocybin, the active ingredient of “magic mushrooms,” was enough to cause a measurable personality change lasting at least a year in nearly 60% of the 51 participants, according to a new study.

Personality was measured on a scientifically validated scale covering the five factors psychologists consider the constituents of personality: openness, extroversion, agreeableness, neuroticism and conscientiousness.

Lasting change was found only in “openness,” which includes traits related to imagination, aesthetics, feelings, abstract ideas and general broad-mindedness.

Researchers will now explore the possibility of using psilocybin to help cancer patients handle depression, and long-time cigarette smokers overcome their addiction.

hip-hop, creativity and brain functionality

Hip-hop, an artistic expression and culture formed during the 1970s in the Bronx, combines two terms: “hip,” used in African-American vernacular English since 1898 to mean current or in the know, and “hop,” from “to hop.”

Hip-hop was the creative coalescence of the then-popular funk music, self-appointed disk-scratching DJs, break-dancing MCs, improv lyricist-rappers and complementary street art (graffiti), which visualized a culture tinged with social bias, racism and ethnic rebellion. It went mainstream in 1979 with “Rapper’s Delight.”

Neuroscience is now drawing on creativity in the street (hip-hop) and classic (jazz) musical traditions to explore brain performance during creative processes.

future of human life and biomimicry

4.5 billion years of evolution have taught nature what works and what lasts.

We have been increasingly distancing ourselves from nature: the agricultural revolution – grow stock and abandon hunting/gathering; the scientific revolution – “torture nature for her secrets;” the industrial revolution – machines replace muscles.

Biomimicry is the study of nature in search of solutions to our problems. With 96% of our bodies built from carbon, hydrogen, oxygen and nitrogen, nature can teach us how to:

  • use only the energy needed
  • fit form to function
  • recycle everything
  • curb excesses from within
  • tap the power of limits
  • devise systems that can face unknown situations
  • update ourselves by feedback loops

fungi can save the world

Humans are more closely related to fungi than to other kingdoms, and we share the same pathogens with them. Fungi fight bacterial rot, which is why our best antibiotics come from fungi. Some fungi don’t even need light, using radiation as an energy source.

Fungi were among the first organisms on land, about 1.3 billion years ago; plants followed a few hundred million years later.

Mycelium produces oxalic acids and enzymes that pockmark rock, forming calcium oxalates from minerals and CO2 – the first step in soil creation. Mycelium can also convert cellulose into fungal sugars and ethanol.

Agarikon is essential for human health: it’s highly effective against pox viruses and flu viruses. Entomopathogenic fungi kill insects such as ants and termites.