Machine Learning in Spectral Domain (Nature Communications)

Leo Migdal

Nature Communications volume 12, Article number: 1330 (2021). Deep neural networks are usually trained in the space of the nodes, by adjusting the weights of existing links via suitable optimization protocols. We here propose a radically new approach which anchors the learning process to reciprocal space. Specifically, the training acts on the spectral domain and seeks to modify the eigenvalues and eigenvectors of transfer operators in direct space. The proposed method is ductile and can be tailored to return either linear or non-linear classifiers. Adjusting the eigenvalues, while freezing the eigenvectors' entries, yields performances that are superior to those attained with standard methods restricted to operate with an identical number of free parameters.
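As a concrete illustration of this idea, the sketch below parametrises a dense layer through a spectral factorisation A = Φ Λ Φ⁻¹ of the transfer operator acting on the nodes of two consecutive layers, freezes the eigenvector entries, and exposes only the eigenvalues as trainable parameters. This is not the authors' code: the class name `SpectralLinear`, the layer sizes, and the random initialisation are illustrative assumptions made here for the example.

```python
# Minimal sketch (assumptions noted above) of a "spectral" dense layer:
# the transfer operator over the n_in + n_out nodes of two consecutive
# layers is rebuilt from fixed eigenvectors and trainable eigenvalues.

import torch
import torch.nn as nn


class SpectralLinear(nn.Module):
    """Hypothetical spectral parametrisation of a dense layer (n_in -> n_out)."""

    def __init__(self, n_in, n_out):
        super().__init__()
        n = n_in + n_out
        # Eigenvector matrix: identity plus a frozen block under the diagonal
        # (only mimicking, not reproducing, the nested indentation of the paper).
        phi = torch.eye(n)
        phi[n_in:, :n_in] = 0.1 * torch.randn(n_out, n_in)
        self.register_buffer("phi", phi)                    # frozen eigenvectors
        self.register_buffer("phi_inv", torch.linalg.inv(phi))
        self.eigvals = nn.Parameter(torch.randn(n))         # trainable eigenvalues
        self.n_in, self.n_out = n_in, n_out

    def forward(self, x):
        # Transfer operator in direct space, recovered from its spectral factors.
        A = self.phi @ torch.diag(self.eigvals) @ self.phi_inv
        W = A[self.n_in:, :self.n_in]      # block mapping input nodes to output nodes
        return x @ W.T


layer = SpectralLinear(784, 10)
out = layer(torch.randn(32, 784))          # e.g. a batch of flattened 28x28 images
print(out.shape)                           # torch.Size([32, 10])
```

With this parametrisation the layer carries only n_in + n_out trainable numbers (the eigenvalues), rather than the n_in × n_out weights of a standard dense layer, which is the sense in which the spectral training operates with far fewer free parameters.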

To recover a feed-forward architecture in direct space, we have postulated a nested indentation of the eigenvectors. Different non-orthogonal bases could be employed to export the spectral learning to other frameworks, such as reservoir computing. Machine learning (ML)1,2,3,4 refers to a broad field of study, with multifaceted applications of cross-disciplinary breadth. ML is a subset of artificial intelligence (AI) which ultimately aims at developing computer algorithms that improve automatically through experience. The core idea is that systems can learn from data, so as to identify distinctive patterns and make consequent decisions, with minimal human intervention.

The range of applications of ML methodologies is extremely vast5,6,7,8, and still growing at a steady pace due to the pressing need to efficiently handle big data9. Biomimetic approaches to sub-symbolic AI10 inspired the design of powerful algorithms. The latter sought to reproduce the unconscious processes underlying fast perception, the neurological paths for rapid decision making, as employed, e.g., in the recognition of faces11 or spoken words12. An early example of a sub-symbolic brain-inspired AI was the perceptron13, the influential ancestor of deep neural networks (NN)14,15. The perceptron is indeed an algorithm for supervised learning of binary classifiers.

It is a linear classifier, meaning that its forecasts are based on a linear prediction function which combines a set of weights with the feature vector. Analogous to neurons, the perceptron adds up its inputs: if the resulting sum is above a given threshold the perceptron fires (returns the output value one), otherwise it does not (and the output equals zero). Modern multilayer perceptrons account for multiple hidden layers with non-linear activation functions. The learning is achieved via minimising the classification error. Single or multilayered perceptrons should be trained by examples14,16,17. Supervised learning requires indeed a large set of positive and negative examples, the training set, labelled with their reference category.
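To make the threshold rule above concrete, here is a minimal sketch of a single perceptron together with the classic error-driven update of its weights and threshold; the toy data and function names are illustrative choices for this page, not taken from the paper.

```python
# Single perceptron: weighted sum of inputs, fire (output 1) above threshold.

import numpy as np


def perceptron_output(x, w, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    return 1 if np.dot(w, x) > threshold else 0


def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights and threshold after each mistake."""
    w = np.zeros(X.shape[1])
    threshold = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            error = y_i - perceptron_output(x_i, w, threshold)
            w += lr * error * x_i          # strengthen or weaken the weights
            threshold -= lr * error        # move the firing threshold
    return w, threshold


# Toy linearly separable data: fire only when both inputs are on.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([perceptron_output(x, w, b) for x in X])   # [0, 0, 0, 1] once converged
```

On linearly separable data this update rule stops making mistakes after finitely many corrections (the perceptron convergence theorem); multilayer perceptrons stack many such units, replace the hard threshold with smooth non-linear activations, and minimise the classification error over all weights at once.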

The perceptrons' acquired ability to perform classification is eventually stored in a finite collection of numbers, the weights and thresholds that were learned during the successive epochs of the supervised training. To date, it is not clear how such a huge collection of numbers (hundreds of millions of weights in state-of-the-art ML applications) is synergistically interlaced for the deep networks to execute the assigned tasks.


Received 5 June 2020; Accepted 21 January 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Subject terms: Information technology, Information theory and computation, Complex networks.

Theoretical aspects of automated learning from data with deep neural networks still present open questions. Here Giambagli et al. show that training neural networks in the spectral domain of the network coupling matrices can reduce the number of learning parameters and improve the pre-training process.


The authors declare no competing interests.

Figure captions (truncated in the source): "Each image of the training set is mapped into…"; "The non-linear version of the training scheme returns a multilayered…"; "Distribution of the weights of a perceptron. The red…".
