Self-Organizing Neural Networks


Inspired by Biology

Living neural networks in the brain perform an array of computational and information-processing tasks: processing sensory input, storing and retrieving memories, making decisions, and, more globally, generating the phenomenon of “intelligence”. Beyond these remarkable information-processing feats, brains are unique because they are computational devices that self-organize their own intelligence. In fact, during development, the entire brain ultimately grows from just a few single cells!

Artificial neural networks have already demonstrated human-like performance on some tasks, but they fall short of their biological counterparts in several key ways. Artificial neural networks are:

  1. Built by hand, requiring significant engineering investments
  2. Highly memory/power inefficient
  3. Not equipped for real-time learning
  4. Susceptible to catastrophic forgetting

Our research pursues biologically inspired solutions to these problems by developing artificial neural networks that self-organize themselves, growing from single computational cells, just like the brain.


Growing Artificial Neural Networks

We designed a developmental algorithm that exploits spontaneous activity waves to grow and self-organize a convolutional neural network (CNN), one of the main architectures used in modern machine learning. The algorithm adapts to a wide range of input-layer geometries, is robust to malfunctioning units in the first layer, and can grow pooling architectures of different shapes and sizes, allowing it to counter the key challenges that accompany growth. Because CNNs represent a model class of deep networks, we believe this developmental strategy can be broadly applied to the self-organization of intelligent systems.
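To give a flavor of the idea, here is a minimal, hypothetical sketch (not the lab's actual algorithm) of how spontaneous activity waves could organize a pooling layer: traveling waves sweep across a simulated input grid, and input units whose activity is highly correlated across many waves are grouped into the same pooling unit. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def activity_wave(grid_size, origin, t, speed=1.0, width=1.5):
    """Response of each input unit as a circular wave front passes.

    The front is modeled as a Gaussian bump of activity at distance
    speed * t from the wave's origin (an illustrative choice).
    """
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    dist = np.hypot(ys - origin[0], xs - origin[1])
    return np.exp(-((dist - speed * t) ** 2) / (2 * width ** 2))

def grow_pooling_layer(grid_size=8, n_waves=30, n_steps=12, pool=2, seed=0):
    """Group input units into pooling units using wave correlations.

    Units that a wave front reaches at similar times are co-active,
    so their pairwise correlation encodes spatial proximity without
    any explicit coordinates -- the self-organizing ingredient.
    """
    rng = np.random.default_rng(seed)
    responses = []
    for _ in range(n_waves):
        origin = rng.uniform(0, grid_size, size=2)  # random wave origin
        for t in range(n_steps):
            responses.append(activity_wave(grid_size, origin, t).ravel())
    R = np.array(responses)        # (samples, units)
    C = np.corrcoef(R.T)           # unit-by-unit activity correlation

    # Greedy grouping: each ungrouped unit recruits its most correlated
    # still-ungrouped peers into a pooling unit of pool * pool members.
    unassigned = set(range(grid_size * grid_size))
    groups = []
    while unassigned:
        seed_unit = min(unassigned)
        order = np.argsort(-C[seed_unit])
        members = [u for u in order if u in unassigned][: pool * pool]
        groups.append(members)
        unassigned -= set(members)
    return groups

groups = grow_pooling_layer()
```

Because the grouping relies only on correlations in spontaneous activity, the same rule would apply unchanged to a different input-layer geometry, which is the kind of adaptability described above.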

Watch this video to learn how we grow an artificial neural network from a single node.
