Multi-Time Evolution Models in Deep Learning Dynamics / Ferrara, M. - In: THE JOURNAL OF THE INDIAN ACADEMY OF MATHEMATICS. - ISSN 0970-5120. - 47:2(2025), pp. 282-294.

Multi-Time Evolution Models in Deep Learning Dynamics

Ferrara, M.
Conceptualization
2025-01-01

Abstract

This paper extends multi-time optimal control theory to deep learning frameworks, developing a mathematical foundation for understanding neural network training as a process evolving across multiple time scales. We formulate neural network optimization as a multi-time evolution problem in which different components of the network evolve at different rates. This perspective yields two novel theorems: the first establishes conditions for path-independent convergence in multi-time gradient descent, while the second provides a framework for analyzing the interplay between feature extraction and classification layers in deep networks. Experimental validation on synthetic data demonstrates the practical implications of our theoretical framework, showing improved convergence properties and enabling adaptive learning rate scheduling based on multi-time principles. Our approach opens new avenues for understanding the dynamics of deep learning systems and suggests practical improvements to optimization algorithms.
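The core idea of the abstract — network components evolving at different rates — can be illustrated with a minimal sketch of two-time-scale gradient descent. This is a hypothetical toy example on a separable quadratic loss, not the paper's MTAG algorithm: the parameter names, rates, and loss are illustrative assumptions.

```python
import numpy as np

def loss(w_feat, w_clf):
    # Toy separable quadratic loss with minimum at w_feat = 1, w_clf = -2.
    return 0.5 * np.sum((w_feat - 1.0) ** 2) + 0.5 * np.sum((w_clf + 2.0) ** 2)

def multi_time_gd(steps=500, lr_feat=0.01, lr_clf=0.1):
    # Two parameter groups evolve on different "times": the feature-extraction
    # weights use a smaller learning rate (slower time scale) than the
    # classifier weights. Both rates here are illustrative choices.
    w_feat = np.zeros(3)
    w_clf = np.zeros(2)
    for _ in range(steps):
        grad_feat = w_feat - 1.0   # d loss / d w_feat
        grad_clf = w_clf + 2.0     # d loss / d w_clf
        w_feat -= lr_feat * grad_feat
        w_clf -= lr_clf * grad_clf
    return w_feat, w_clf

w_feat, w_clf = multi_time_gd()
print(loss(w_feat, w_clf))  # approaches 0 as both groups converge
```

In this sketch the classifier group reaches its optimum much sooner than the slowly evolving feature group, which is the qualitative separation of time scales the multi-time formulation makes precise.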
2025
Multi-time dynamics; Deep learning optimization; Multi-time Learning Algorithm (MTAG)
Files in this item:
Ferrara_2025_JIAM_Multi-time_editor.pdf

Open access

Description: Article
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 4.04 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12318/160726