| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:50:59 -0600 |
|---|---|---|
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:50:59 -0600 |
| commit | 00cf667cee7ffacb144d5805fc7e0ef443f3583a (patch) | |
| tree | 77d20a3adaecf96bf3aff0612bdd3b5fa1a7dc7e /docs/description-from-rainer.md | |
| parent | c53c04aa1d6ff75cb478a9498c370baa929c74b6 (diff) | |
| parent | cd99d6b874d9d09b3bb87b8485cc787885af71f1 (diff) | |
Merge master into main
Diffstat (limited to 'docs/description-from-rainer.md')
| -rw-r--r-- | docs/description-from-rainer.md | 66 |
1 files changed, 66 insertions, 0 deletions
diff --git a/docs/description-from-rainer.md b/docs/description-from-rainer.md
new file mode 100644
index 0000000..2304e7e
--- /dev/null
+++ b/docs/description-from-rainer.md
@@ -0,0 +1,66 @@

# Improving Spiking Network Training using Dynamical Systems

## Goal
The goal is to overcome a key obstacle in training brain-inspired **Spiking Neural Networks (SNNs)**.
While SNNs are promising, their all-or-nothing "spikes" are **non-differentiable**, clashing with standard gradient-based training algorithms.
The common workaround, **surrogate gradients**, often fails on temporal tasks due to **unstable error signals** that either **explode or vanish**.

This project reframes the problem using **dynamical systems theory**.
You will evaluate a novel technique called **surrogate gradient flossing**, which measures the stability of the training process by computing **surrogate Lyapunov exponents**.
By actively tuning the network to control these exponents, you will stabilize learning and enable SNNs to tackle challenging **temporal credit assignment** problems.
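For orientation, the sketch below illustrates both ingredients in plain NumPy: a fast-sigmoid **surrogate gradient** standing in for the non-differentiable spike, and a Benettin-style QR iteration that estimates **surrogate Lyapunov exponents** from the per-step surrogate Jacobians. It is a minimal illustration only; the layer size, parameter values, and the detached-reset simplification are assumptions of this example, not the project's reference implementation (which exists in Julia and Python, see the Starter Toolkit below).

```python
# Illustrative sketch only: a small recurrent spiking layer with a
# fast-sigmoid surrogate gradient, plus a Benettin-style QR estimate of
# the surrogate Lyapunov exponents. Sizes, constants, and the
# detached-reset simplification are assumptions of this example.
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 50, 300, 5                 # neurons, time steps, exponents to track
alpha, beta, v_th = 0.9, 10.0, 1.0   # leak, surrogate sharpness, spike threshold
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # recurrent weights

def surrogate_grad(v):
    """Fast-sigmoid pseudo-derivative used in place of dH/dv at the spike."""
    return 1.0 / (1.0 + beta * np.abs(v - v_th)) ** 2

# Orthonormal frame propagated through the per-step surrogate Jacobians.
Q, _ = np.linalg.qr(rng.normal(size=(N, K)))
log_r = np.zeros(K)
v = np.zeros(N)

for t in range(T):
    x = rng.normal(0.0, 0.3, size=N)             # external input current
    s = (v >= v_th).astype(float)                # spikes (Heaviside in the forward pass)
    # Surrogate Jacobian dv_{t+1}/dv_t; the reset is treated as detached,
    # so only the leak and the surrogate spike term contribute.
    J = alpha * np.diag(1.0 - s) + W * surrogate_grad(v)[None, :]
    v = alpha * v * (1.0 - s) + W @ s + x        # leaky integration with hard reset
    Q, R = np.linalg.qr(J @ Q)                   # re-orthonormalize the frame
    log_r += np.log(np.abs(np.diag(R)) + 1e-12)

lyapunov = log_r / T                             # surrogate Lyapunov exponents per step
print("surrogate Lyapunov exponents:", np.round(lyapunov, 3))
```

Gradient flossing (Engelken, 2023) then augments the training loss with a penalty on these exponents, for example their squared sum, and differentiates through the estimate so that optimization pushes them toward zero.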
---

## Project Rationale
The **surrogate gradient flossing** idea has been implemented and has shown initial promise, but it still requires **rigorous evaluation** and a **deeper theoretical understanding** of its mechanics.

The central question is whether **explicitly controlling the stability of the gradient flow** is a viable and robust strategy for improving learning in SNNs.

This project offers a chance to explore the **fundamental link between system dynamics and learning capacity**.

---

## Key Outcomes

- **Train a simple SNN** with surrogate gradient flossing and perform gradient analysis.
- **(Stretch)**: Compare performance on **benchmark temporal tasks** such as *spiking spoken digit classification* and analyze the resulting network dynamics.
- **(Stretch)**: Implement a **multi-layer SNN** and quantify gradients and performance **with and without** surrogate gradient flossing.
- **(Stretch)**: **Optimize the gradient flossing schedule** to improve stability and convergence.

---

## Project Profile

| Category | Intensity |
|--------------------|------------|
| Analytical Intensity | ●●● |
| Coding Intensity | ●●● |
| Computational Neuroscience | ●●● |
| Machine Learning | ●●● |

**Starter Toolkit:** Reference example implementations are available in **Julia** and **Python**.

---

## Literature

1. **Neftci, E. O., Mostafa, H., & Zenke, F. (2019).**
   *Surrogate gradient learning in spiking neural networks.*
   _IEEE Signal Processing Magazine, 36(6), 51–63._

2. **Engelken, R. (2023).**
   *Gradient flossing: Improving gradient descent through dynamic control of Jacobians.*
   _NeurIPS._
   [https://doi.org/10.48550/arXiv.2312.17306](https://doi.org/10.48550/arXiv.2312.17306)

3. **Gygax, J., & Zenke, F. (2025).**
   *Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks.*
   _Neural Computation, 37(5), 886–925._

4. **Rossbroich, J., Gygax, J., & Zenke, F. (2022).**
   *Fluctuation-driven initialization for spiking neural network training.*
   _Neuromorphic Computing and Engineering, 2(4), 044016._

5. **Yik, J., et al. (2025).**
   *The NeuroBench framework for benchmarking neuromorphic computing algorithms and systems.*
   _Nature Communications, 16(1), 1545._
