How do neural networks optimize their performance? The answer may lie in the free-energy principle, according to a study by the RIKEN Center for Brain Science (CBS) in Japan. The finding could help explain how neural networks optimize their behavior, offering insight into how an impaired brain functions and informing the design and applications of artificial intelligence in the future.
Every living organism follows the principle of biological optimization, which holds that any organism or system will tend to act in the most efficient way available to it. If a person can either walk or run to a place, for example, they will choose whichever option gets them there with the least expenditure of energy.
How much energy is expended at any given moment, then, depends on a variety of factors, regardless of the organism or system involved. This is where the free-energy principle comes into play. The free-energy principle rests on Bayesian inference, under which a system continually updates its internal model, and therefore its behavior, by combining new incoming data with past decisions, experiences, and knowledge.
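As a minimal illustration of the Bayesian updating idea (toy numbers only, not the study's model), consider an agent that holds beliefs over two competing hypotheses and revises them with each new observation via Bayes' rule:

```python
# Illustrative sketch of Bayesian belief updating (not the study's equations).
# An agent combines its prior belief with the likelihood of new data.

def update_belief(prior, likelihoods):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Two hypotheses about the world, initially equally plausible.
belief = [0.5, 0.5]

# Each incoming observation is more likely under hypothesis 0 (0.8 vs 0.3),
# so repeated updates shift the belief toward hypothesis 0.
for _ in range(3):
    belief = update_belief(belief, [0.8, 0.3])

print(belief)  # belief in hypothesis 0 now exceeds 0.9
```

Each update folds new evidence into what was already believed, which is the sense in which the system "takes its past into account."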
This means that any organism or system, including neural networks, will constantly change and adapt to ensure peak performance at all times.
The team's analysis showed that neural networks appear to follow the free-energy principle, just like any other system. “We were able to demonstrate that standard neural networks, which feature delayed modulation of Hebbian plasticity, perform planning and adaptive behavioral control by taking their previous ‘decisions’ into account,” says lead author Takuya Isomura in a statement.
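To give a rough sense of what "delayed modulation of Hebbian plasticity" means (a toy sketch with made-up numbers, not the study's learning rule): a Hebbian update strengthens a connection when pre- and postsynaptic activity coincide, and a modulatory signal arriving later, such as the outcome of a decision, scales how much of that change is kept.

```python
# Toy sketch only: Hebbian weight change gated by a delayed modulatory signal.
# With modulator=1.0 this reduces to a plain Hebbian rule; a smaller (or
# negative) modulator, delivered after the outcome is known, damps or
# reverses the change.

def modulated_hebbian_update(w, pre, post, modulator, eta=0.1):
    """dw[i][j] = eta * modulator * post[i] * pre[j]."""
    return [[w[i][j] + eta * modulator * pre[j] * post[i]
             for j in range(len(pre))]
            for i in range(len(post))]

w = [[0.0, 0.0]]                 # one output unit, two input units
pre, post = [1.0, 0.5], [1.0]    # coincident pre/post activity

# The outcome arrives later and turns out to be "good": keep the change.
w = modulated_hebbian_update(w, pre, post, modulator=1.0)
print(w)  # [[0.1, 0.05]]
```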
With this finding, the team was then able to demonstrate a proof-of-concept that further clarified the link between the free-energy principle and neural networks.
The team used simulated neural networks to solve mazes. The networks demonstrated the ability to adapt and grow, refining their strategies as they learned through trial and error. This illustrates that neural networks, like any other system, act in a manner that minimizes energy expenditure while maximizing their efficiency.
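The trial-and-error dynamic can be illustrated with a generic learner on a toy maze. To be clear, the study's networks used delayed modulation of Hebbian plasticity; the sketch below substitutes tabular Q-learning, a standard textbook method, purely to show behavior improving with experience on a five-cell corridor "maze":

```python
# Rough analogy only: the study used Hebbian plasticity, not Q-learning.
# A trial-and-error learner on a 5-cell corridor: start at cell 0,
# goal at cell 4; each step moves left (-1) or right (+1).
import random

random.seed(0)
N, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N)]  # value of (left, right) in each cell

def run_episode(epsilon, alpha=0.5, gamma=0.9):
    """One pass through the maze; returns the number of steps taken."""
    state, steps = 0, 0
    while state != GOAL and steps < 50:
        # Explore with probability epsilon, otherwise act greedily.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = q[state].index(max(q[state]))
        nxt = max(0, min(N - 1, state + (1 if a == 1 else -1)))
        reward = 1.0 if nxt == GOAL else -0.01  # small cost per step
        q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
        state, steps = nxt, steps + 1
    return steps

early = run_episode(epsilon=1.0)   # pure random exploration
for _ in range(200):               # learn by trial and error
    run_episode(epsilon=0.2)
late = run_episode(epsilon=0.0)    # exploit what was learned

print(early, late)  # steps before vs. after learning
```

After training, the greedy policy reaches the goal in the minimum four steps, whereas the untrained random walk wanders.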
Additionally, a set of universal mathematical principles may exist that dictate how neural networks self-optimize, the team concludes. While the specifics remain unclear, the fact that a universally applicable equation may exist to explain how neural networks make decisions is certainly intriguing.
These findings could also lead to further advancement in treatment of brain disorders, as well as the development of enhanced artificial intelligence. “Our theory can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks, which will be important for a next-generation artificial intelligence,” says Isomura.
Article written by Adam Swierk