<aside> ❓ How can the human brain perform general and complex tasks?
</aside>
By taking a probabilistic approach, our works [1] develop a framework for probabilistic SNNs based on probabilistic graphical models (see Figure 1), covering probabilistic models, learning rules, and applications. The probabilistic framework can leverage multiple random samples during both learning and inference [2], enabling online, local training and robustifying decisions through uncertainty quantification, which are features that deterministic models cannot provide (see Figure 2).
Figure 1. Illustration of Spiking Neural Networks (SNNs), where dynamic spiking neurons process and communicate sparse binary spiking signals over time via a probabilistic mechanism with exponential feedforward and feedback filters.
Figure 2. Classification task trained on MNIST-DVS dataset: estimated log-likelihood, classification accuracy and expected calibration error.
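To make the probabilistic mechanism concrete, here is a minimal NumPy sketch of a single probabilistic spiking neuron in the spirit of the framework above: presynaptic spike trains are passed through causal exponential filters, and the neuron fires at each time step with a probability given by a sigmoid of its membrane potential. The function names, filter time constant, and weights are illustrative assumptions, not the exact model of [1].

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_filter(spikes, tau=5.0):
    """Causal exponential filter of a binary spike train (synaptic trace).
    tau is an assumed time constant for illustration."""
    trace = np.zeros(len(spikes))
    decay = np.exp(-1.0 / tau)
    for t in range(1, len(spikes)):
        trace[t] = decay * trace[t - 1] + spikes[t - 1]
    return trace

def spike_probability(pre_spikes, weights, bias=-1.0, tau=5.0):
    """Membrane potential = bias + weighted sum of filtered presynaptic
    spikes; the neuron fires with probability sigmoid(potential)."""
    traces = np.stack([exp_filter(s, tau) for s in pre_spikes])  # (n_pre, T)
    potential = bias + weights @ traces
    return 1.0 / (1.0 + np.exp(-potential))

# Two hypothetical presynaptic spike trains over T = 20 time steps.
pre = rng.random((2, 20)) < 0.3
w = np.array([1.5, -0.5])
p = spike_probability(pre, w)

# Monte Carlo inference: draw several output spike-train samples and
# average them, as a deterministic neuron cannot.
samples = rng.random((10, 20)) < p
rate = samples.mean(axis=0)
```

Averaging over the sampled spike trains is one simple way to exploit the multiple random samples available during inference.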
With the Bayesian approach, we propose to equip each synaptic weight with a probability distribution [3], capturing the epistemic uncertainty induced by limited knowledge of the data (see Figures 3 and 4).
Figure 3. Predictive probabilities on the two-moons dataset after training for BayesSNN. (top) real-valued synapses; (bottom) binary synapses.
Figure 4. Predictions by Bayesian and frequentist SNN models on the DVS-Gestures dataset. The correct class is indicated in bold font.
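The Bayesian idea can be sketched in a few lines: instead of a point estimate, each synaptic weight carries a distribution, and predictions are averaged over an ensemble of sampled weight vectors. The Gaussian posterior parameters and the input below are hypothetical placeholders, assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical Gaussian posterior over three synaptic weights.
w_mean = np.array([1.0, -2.0, 0.5])
w_std = np.array([0.3, 0.3, 0.3])

def bayesian_predict(x, n_samples=100):
    """Average the predictions of an ensemble of sampled weight vectors;
    the spread across samples reflects epistemic uncertainty."""
    w = rng.normal(w_mean, w_std, size=(n_samples, len(w_mean)))
    preds = sigmoid(w @ x)  # one prediction per weight sample
    return preds.mean(), preds.std()

mean_p, spread = bayesian_predict(np.array([1.0, 0.5, -1.0]))
```

A frequentist model would return a single confident probability here; the Bayesian ensemble additionally reports `spread`, which grows where the sampled weights disagree, as on the out-of-distribution regions of the two-moons example in Figure 3.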
A hybrid SNN-ANN autoencoder [4], based on a directed, causal variant of the information bottleneck, replaces conventional hand-crafted decoding rules with optimized, learned decoding, which is useful for applications in neuromorphic sensing, mobile edge computing, personal healthcare, and beyond.
Figure 5. Illustration of the hybrid SNN-ANN variational autoencoder: (top) An encoding SNN processes inputs from a spiking image source to produce a spiking latent representation, and a decoding ANN learns to decode it to reconstruct a natural target signal; (bottom) some results on MNIST-DVS dataset.
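A toy sketch of the hybrid architecture, assuming made-up dimensions and linear maps in place of the trained SNN encoder and ANN decoder of [4]: the encoder emits a binary, spike-like latent by sampling Bernoulli variables from sigmoid firing probabilities, and the decoder maps those spikes back to a real-valued reconstruction of the target signal.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy dimensions: 8-dim input, 4-dim spiking latent.
d_in, d_lat = 8, 4
W_enc = rng.normal(0, 0.5, (d_lat, d_in))  # stand-in for the SNN encoder
W_dec = rng.normal(0, 0.5, (d_in, d_lat))  # stand-in for the ANN decoder

def autoencode(x):
    """Encoder samples a binary (spiking) latent representation;
    decoder reconstructs a real-valued signal from the spikes."""
    p_spike = sigmoid(W_enc @ x)                      # firing probabilities
    z = (rng.random(d_lat) < p_spike).astype(float)   # binary latent spikes
    x_hat = W_dec @ z                                 # learned decoding step
    return z, x_hat

x = rng.normal(size=d_in)
z, x_hat = autoencode(x)
recon_error = np.mean((x - x_hat) ** 2)
```

In the actual model, both maps are trained jointly by maximizing a variational information-bottleneck objective, so the learned decoder replaces the hand-crafted decoding rules mentioned above.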