# Improving Spiking Network Training using Dynamical Systems
## Goal
The goal is to overcome a key obstacle in training brain-inspired **Spiking Neural Networks (SNNs)**.
While SNNs are promising, their all-or-nothing "spikes" are **non-differentiable**, clashing with standard training algorithms.
The common workaround, **surrogate gradients**, often fails on temporal tasks due to **unstable error signals** that either **explode or vanish**.
This project reframes the problem using **dynamical systems theory**.
You will evaluate a novel technique called **surrogate gradient flossing**, which measures the stability of the training process by calculating **surrogate Lyapunov exponents**.
By actively tuning the network to control these exponents, you will stabilize learning and enable SNNs to tackle challenging **temporal credit assignment** problems.
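For concreteness, the sketch below illustrates how a surrogate gradient sidesteps the undefined derivative of a spike: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth fast-sigmoid derivative (one common choice, see Neftci et al., 2019). The PyTorch setup, the threshold of 1.0, and the slope constants are illustrative assumptions, not the project's reference code.

```python
# Minimal surrogate-gradient LIF sketch (assumption: PyTorch; constants are illustrative).
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    beta = 10.0  # slope of the surrogate derivative (tunable hyperparameter)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                      # all-or-nothing spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative of the Heaviside step
        surrogate = 1.0 / (SurrGradSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply

def lif_step(v, syn, x, w, alpha=0.9, beta_mem=0.95):
    """One Euler step of a leaky integrate-and-fire layer (illustrative constants)."""
    syn = alpha * syn + x @ w                       # filtered synaptic input
    s = spike_fn(v - 1.0)                           # spike if membrane exceeds threshold 1.0
    v = beta_mem * v * (1.0 - s) + syn              # leak, integrate, reset on spike
    return v, syn, s
```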
---
## Project Rationale
The **surrogate gradient flossing** idea has been implemented and showed initial promise, but it requires **rigorous evaluation** and a **deeper theoretical understanding** of its mechanics.
The central question is whether **explicitly controlling the stability of the gradient flow** is a viable and robust strategy for improving learning in SNNs.
This project offers a chance to explore the **fundamental link between system dynamics and learning capacity**.
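As a rough illustration of what "controlling the stability of the gradient flow" can mean in practice, the sketch below estimates surrogate Lyapunov exponents from the per-step Jacobians of the hidden-state dynamics (built with the surrogate derivative so the chain of Jacobians stays differentiable) via QR re-orthonormalisation, and defines a penalty that pushes the leading exponents toward zero, in the spirit of gradient flossing (Engelken, 2023). The function names, the choice of `k`, and the squared penalty are assumptions for illustration, not the project's reference implementation.

```python
# Sketch of surrogate Lyapunov exponents and a flossing-style penalty.
# Assumption: `jacobians` is a list of per-step Jacobians dh_{t+1}/dh_t of the
# hidden state, computed with the surrogate derivative and tracked by autograd.
import torch

def surrogate_lyapunov_exponents(jacobians, k=5):
    """Estimate the k largest Lyapunov exponents by propagating an orthonormal
    tangent basis along the trajectory with QR re-orthonormalisation."""
    n = jacobians[0].shape[0]
    Q = torch.eye(n, dtype=jacobians[0].dtype, device=jacobians[0].device)[:, :k]
    log_growth = torch.zeros(k, dtype=jacobians[0].dtype, device=jacobians[0].device)
    for J in jacobians:
        Q, R = torch.linalg.qr(J @ Q)                       # re-orthonormalise the basis
        log_growth = log_growth + torch.log(torch.abs(torch.diagonal(R)) + 1e-12)
    return log_growth / len(jacobians)                      # exponents per time step

def flossing_penalty(jacobians, k=5):
    """Push the leading exponents toward zero so gradients neither explode nor vanish."""
    lyap = surrogate_lyapunov_exponents(jacobians, k)
    return (lyap ** 2).sum()

# Training-loop usage (schematic):
#   loss = task_loss + flossing_strength * flossing_penalty(jacobians)
#   loss.backward()
```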
---
## Key Outcomes
- ✅ **Train a simple SNN** with surrogate gradient flossing and perform gradient analysis (see the sketch after this list).
- 🚀 **(Stretch)**: Compare performance on **benchmark temporal tasks** such as *spiking spoken digit classification* and analyze the resulting network dynamics.
- 🧠 **(Stretch)**: Implement a **multi-layer SNN** and quantify gradients and performance **with and without** surrogate gradient flossing.
- 🔧 **(Stretch)**: **Optimize the gradient flossing schedule** to improve stability and convergence.
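To make the gradient analysis in the first outcome concrete, one simple diagnostic is to record how the norm of the loss gradient with respect to the hidden state changes across time steps: roughly flat norms indicate stable temporal credit assignment, while steep decay or growth signals vanishing or exploding surrogate gradients. The sketch assumes a PyTorch setup in which hidden states are retained during the forward pass; the names are illustrative.

```python
# Sketch of a per-time-step gradient analysis (assumption: PyTorch; hidden
# states h_t were stored and retain_grad() was called before backward()).
import torch

def gradient_norms_over_time(hidden_states):
    """Return ||dL/dh_t|| for each recorded time step."""
    return [h.grad.norm().item() for h in hidden_states if h.grad is not None]

# Usage (schematic):
#   for h in hidden_states: h.retain_grad()
#   loss.backward()
#   norms = gradient_norms_over_time(hidden_states)   # compare with vs. without flossing
```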
---
## Project Profile
| Category | Intensity |
|--------------------|------------|
| Analytical Intensity | ●●● |
| Coding Intensity | ●●● |
| Computational Neuroscience | ●●● |
| Machine Learning | ●●● |
**Starter Toolkit:** Reference example implementations are available in **Julia** and **Python**.
---
## Literature
1. **Neftci, E. O., Mostafa, H., & Zenke, F. (2019).**
*Surrogate gradient learning in spiking neural networks.*
   _IEEE Signal Processing Magazine, 36(6), 51–63._
2. **Engelken, R. (2023).**
*Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians.*
_NeurIPS._
[https://doi.org/10.48550/arXiv.2312.17306](https://doi.org/10.48550/arXiv.2312.17306)
3. **Gygax, J., & Zenke, F. (2025).**
*Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks.*
   _Neural Computation, 37(5), 886–925._
4. **Rossbroich, J., Gygax, J., & Zenke, F. (2022).**
*Fluctuation-driven initialization for spiking neural network training.*
_Neuromorphic Computing and Engineering, 2(4), 044016._
5. **Yik, J., et al. (2025).**
*The Neurobench framework for benchmarking neuromorphic computing algorithms and systems.*
_Nature Communications, 16(1), 1545._