# Improving Spiking Network Training using Dynamical Systems

## Goal

The goal is to overcome a key obstacle in training brain-inspired **Spiking Neural Networks (SNNs)**. While SNNs are promising, their all-or-nothing "spikes" are **non-differentiable**, which clashes with standard gradient-based training algorithms. The common workaround, **surrogate gradients**, often fails on temporal tasks because its error signals are **unstable**: they either **explode or vanish**.

This project reframes the problem using **dynamical systems theory**. You will evaluate a novel technique called **surrogate gradient flossing**, which measures the stability of the training process by computing **surrogate Lyapunov exponents**. By actively tuning the network to control these exponents, you will stabilize learning and enable SNNs to tackle challenging **temporal credit assignment** problems. (Minimal code sketches of these ingredients appear under **Code Sketches** at the end of this document.)

---

## Project Rationale

The **surrogate gradient flossing** idea has been implemented and has shown initial promise, but it requires **rigorous evaluation** and a **deeper theoretical understanding** of its mechanics. The central question is whether **explicitly controlling the stability of the gradient flow** is a viable and robust strategy for improving learning in SNNs. This project offers a chance to explore the **fundamental link between system dynamics and learning capacity**.

---

## Key Outcomes

- ✅ **Train a simple SNN** with surrogate gradient flossing and perform gradient analysis.
- 🚀 **(Stretch)**: Compare performance on **benchmark temporal tasks** such as *spiking spoken-digit classification* and analyze the resulting network dynamics.
- 🧠 **(Stretch)**: Implement a **multi-layer SNN** and quantify gradients and performance **with and without** surrogate gradient flossing.
- 🔧 **(Stretch)**: **Optimize the gradient flossing schedule** to improve stability and convergence.

---

## Project Profile

| Category                   | Intensity |
|----------------------------|-----------|
| Analytical Intensity       | ⭐⭐⭐    |
| Coding Intensity           | ⭐⭐⭐    |
| Computational Neuroscience | ⭐⭐⭐    |
| Machine Learning           | ⭐⭐⭐    |

**Starter Toolkit:** Reference example implementations are available in **Julia** and **Python**.

---

## Literature

1. **Neftci, E. O., Mostafa, H., & Zenke, F. (2019).** *Surrogate gradient learning in spiking neural networks.* _IEEE Signal Processing Magazine, 36(6), 51–63._
2. **Engelken, R. (2023).** *Gradient flossing: Improving gradient descent through dynamic control of Jacobians.* _NeurIPS._ [https://doi.org/10.48550/arXiv.2312.17306](https://doi.org/10.48550/arXiv.2312.17306)
3. **Gygax, J., & Zenke, F. (2025).** *Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks.* _Neural Computation, 37(5), 886–925._
4. **Rossbroich, J., Gygax, J., & Zenke, F. (2022).** *Fluctuation-driven initialization for spiking neural network training.* _Neuromorphic Computing and Engineering, 2(4), 044016._
5. **Yik, J., et al. (2025).** *The NeuroBench framework for benchmarking neuromorphic computing algorithms and systems.* _Nature Communications, 16(1), 1545._
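---

## Code Sketches

The snippets below are minimal, illustrative sketches of the techniques named above, not the project's reference implementation; all class, function, and parameter names (`SurrGradSpike`, `surrogate_lyapunov_exponents`, `train_with_flossing`, and every hyperparameter value) are hypothetical choices for illustration.

First, the surrogate-gradient workaround for non-differentiable spikes: the forward pass keeps the all-or-nothing Heaviside spike, while the backward pass substitutes a smooth surrogate derivative. The fast-sigmoid surrogate used here is one common choice from the surrogate-gradient literature (cf. Neftci et al., 2019).

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate in the backward pass."""

    scale = 10.0  # surrogate steepness (hypothetical value)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()  # all-or-nothing spike: non-differentiable

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pretend d(spike)/dv = 1 / (scale * |v| + 1)^2 instead of a Dirac delta.
        return grad_output / (SurrGradSpike.scale * v.abs() + 1.0) ** 2

spike_fn = SurrGradSpike.apply  # drop-in replacement for a hard threshold in a LIF update
```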
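Second, the stability measurement. Gradient flossing quantifies the stability of the gradient flow via the Lyapunov exponents of the per-step state Jacobians; in an SNN these Jacobians are built from the surrogate derivative, hence **surrogate** Lyapunov exponents. A standard way to estimate the leading exponents is the QR (Benettin-style) recursion sketched below; for a simplified LIF layer `v_{t+1} = beta * v_t + W s_t` with surrogate derivative `sigma'`, the per-step Jacobian is approximately `D_t = beta * I + W @ diag(sigma'(v_t))` (ignoring reset terms).

```python
import torch

def surrogate_lyapunov_exponents(jacobians, k=None):
    """Estimate the k leading Lyapunov exponents from a sequence of per-step
    state Jacobians D_t = dh_{t+1}/dh_t, using the standard QR recursion:
        Q_{t+1} R_{t+1} = D_t Q_t,    lambda_i = (1/T) * sum_t log |R_t[i, i]|.
    Because QR is differentiable in PyTorch, the exponents can also serve
    as a training signal."""
    n = jacobians[0].shape[0]
    k = k if k is not None else n
    Q = torch.eye(n, dtype=jacobians[0].dtype)[:, :k]  # orthonormal basis being evolved
    log_r = torch.zeros(k, dtype=jacobians[0].dtype)
    for D in jacobians:
        Q, R = torch.linalg.qr(D @ Q)                  # re-orthonormalize each step
        log_r = log_r + torch.diagonal(R).abs().log()  # accumulate local expansion rates
    return log_r / len(jacobians)
```

A strongly positive exponent indicates exploding gradient directions; a strongly negative one indicates vanishing directions. Flossing aims to push both toward zero.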
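Finally, one hedged reading of the "flossing schedule" stretch goal, loosely following the idea in Engelken (2023) of applying flossing before and/or during training: interleave ordinary task updates with updates that penalize the squared surrogate Lyapunov exponents. `collect_jacobians` (which would extract the per-step Jacobians for a batch), `floss_every`, and `floss_weight` are placeholders, not part of any reference implementation.

```python
import torch
import torch.nn.functional as F

def train_with_flossing(model, loader, collect_jacobians,
                        floss_every=50, floss_weight=0.1, lr=1e-3):
    """Interleave task updates with flossing updates (hypothetical schedule)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(loader):
        loss = F.cross_entropy(model(x), y)
        if step % floss_every == 0:
            # Flossing penalty: drive the surrogate Lyapunov exponents toward
            # zero, keeping the gradient flow near the edge of stability.
            lam = surrogate_lyapunov_exponents(collect_jacobians(model, x))
            loss = loss + floss_weight * (lam ** 2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Comparing settings of `floss_every` and `floss_weight`, or flossing only before training versus throughout, would be one concrete way to approach the schedule-optimization outcome.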