Conversation

@clinssen (Contributor) commented Oct 18, 2023

With thanks to @tomtetzlaff!

Negative weights are now simply disallowed on STDP synapses. I could imagine a way to define negative weights in STDP, but it might be confusing, since a more negative weight would mean a stronger (inhibitory) synapse. @tomtetzlaff: what do you think?

@clinssen marked this pull request as draft on October 18, 2023, 14:04
github-actions bot commented Jul 14, 2025

🐰 Bencher Report

Branch: 973/merge
Testbed: ubuntu-latest

🚨 1 Alert

Iteration: 0
Benchmark: tests/nest_continuous_benchmarking/test_nest_continuous_benchmarking.py::TestNESTContinuousBenchmarking::test_stdp_nn_synapse
Measure:   Latency, seconds (s)
Result:    3.85 s (+12.14% vs. baseline 3.43 s)
Boundary:  upper limit 3.77 s, reached at 101.95%

All benchmark results for tests/nest_continuous_benchmarking/test_nest_continuous_benchmarking.py::TestNESTContinuousBenchmarking::test_stdp_nn_synapse (Latency, seconds (s)):

    Result              Baseline    Upper Boundary (Limit %)
    3.85 s (+12.14%)    3.43 s      3.77 s (101.95%)
    3.17 s (-7.15%)     3.42 s      3.76 s (84.41%)
    3.18 s (-6.84%)     3.42 s      3.76 s (84.69%)

🐰 View full continuous benchmarking report in Bencher

@tomtetzlaff (Collaborator) left a comment

Hi Charl,

Both excitatory and inhibitory synapses undergo spike-timing-dependent plasticity; see, for example:

D’amour, J. A., & Froemke, R. C. (2015). Inhibitory and excitatory spike-timing-dependent plasticity in the auditory cortex. Neuron, 86(2), 514-528.

And there are quite a few modeling studies employing inhibitory plasticity, such as

Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C., & Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science, 334(6062), 1569-1573. https://doi.org/10.1126/science.1211095.

So I think we should not prohibit this. Here is how I have handled it in the past:

state:
    w real = 1.    # synaptic weight (>0 for excitatory, <0 for inhibitory synapses)
    ...

parameters:
    ...
    Wmax real = 1.    # maximum absolute value of the synaptic weight (>0)
    Wmin real = 0.    # minimum absolute value of the synaptic weight (>=0)

internals:
    w_sign real = w / abs(w)    # sign of the synaptic weight

onReceive(post_spikes):
    ...
    # soft-bounded potentiation of the absolute weight, clipped at Wmax
    w_ real = Wmax * (abs(w) / Wmax + lambda * (1. - abs(w) / Wmax)**mu_plus * pre_trace)
    w = w_sign * min(w_, Wmax)

onReceive(pre_spikes):
    ...
    # soft-bounded depression of the absolute weight, clipped at Wmin
    w_ real = Wmax * (abs(w) / Wmax - alpha * lambda * (abs(w) / Wmax)**mu_minus * post_trace)
    w = w_sign * max(Wmin, w_)
    ...

Perhaps not the most beautiful way of describing the model, but a starting point.

And yes, potentiating an inhibitory synapse would correspond to a more negative synaptic weight.

In real life, synapses are better described as conductances, which are always positive. The sign of the synaptic weight is then determined by the driving force, i.e., the difference between the reversal potential and the membrane potential of the target compartment. The above-mentioned study by Vogels et al. (2011) indeed uses conductance-based synapses. Still, I think we should also permit inhibitory plasticity for current-based synapses. Some time ago, there was also a discussion on this on the NEST mailing list:

https://www.nest-simulator.org/mailinglist/hyperkitty/list/[email protected]/thread/ONCFPYFAA3LXNLQCF7ZINODYA4ANYLPR/

@clinssen (Contributor, Author) commented:

Indeed, inhibitory STDP is absolutely crucial to model. However, I thought that this could be captured by the postsynaptic neuron having a separate input port for inhibitory spikes, which arrive with weight >= 0 and are subsequently handled with the appropriate sign on the postsynaptic side.

However, if the STDP synapse model is to support postsynaptic neuron models that have only one spiking input port and differentiate excitatory from inhibitory spikes on the basis of the sign of the weight, then indeed an adjustment is needed.
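To make the first scenario concrete: below is a minimal NESTML sketch of a postsynaptic neuron with separate excitatory and inhibitory spike input ports, where both ports receive spikes with weight >= 0 and the neuron applies the inhibitory sign itself. This is only an editorial illustration under stated assumptions, not code from this PR; the model name, port names, kernel, and all parameter values are hypothetical.

model two_port_iaf:
    state:
        V_m mV = E_L    # membrane potential

    equations:
        kernel K_psc = exp(-t / tau_syn)    # postsynaptic current kernel
        # both ports receive spikes with weight >= 0; the inhibitory
        # contribution is given its negative sign here, postsynaptically
        inline I_syn pA = (convolve(K_psc, exc_spikes) - convolve(K_psc, inh_spikes)) * pA
        V_m' = -(V_m - E_L) / tau_m + I_syn / C_m

    parameters:
        E_L mV = -70. mV        # resting potential
        V_th mV = -55. mV       # spike threshold
        V_reset mV = -70. mV    # reset potential
        tau_m ms = 10. ms       # membrane time constant
        tau_syn ms = 2. ms      # synaptic time constant
        C_m pF = 250. pF        # membrane capacitance

    input:
        exc_spikes <- spike    # targeted by excitatory connections (weight >= 0)
        inh_spikes <- spike    # targeted by inhibitory connections (weight >= 0)

    output:
        spike

    update:
        integrate_odes()

    onCondition(V_m >= V_th):
        V_m = V_reset
        emit_spike()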

About the implementation: if everything is working as intended, internals are not allowed to depend on state variables (only on parameters and other internals). Thus, the piece of code

internals:
    w_sign real = w / abs(w)    # sign of the synaptic weight

would result in an error. It would also run into a division by zero when the weight becomes equal to 0.

We could make w_sign a user-settable parameter, and additionally verify, as a sanity check, that as long as the weight is not equal to zero, w_sign == sign(w) holds.

Alternatively, we could allow a signed Wmin and Wmax to indicate the sign of the weight.
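For concreteness, here is a rough sketch of the first option (the w_sign parameter), reusing the names and soft-bound potentiation update from the snippet above; the check itself is left as an informal comment, since where such a validation would live in the language is exactly the open question:

parameters:
    ...
    w_sign real = 1.    # user-settable sign of the synapse: +1. (excitatory) or -1. (inhibitory)

onReceive(post_spikes):
    ...
    # sanity check (informal): whenever w != 0., require w_sign == w / abs(w)
    w_ real = Wmax * (abs(w) / Wmax + lambda * (1. - abs(w) / Wmax)**mu_plus * pre_trace)
    w = w_sign * min(w_, Wmax)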

@tomtetzlaff (Collaborator) commented Jul 31, 2025

An additional parameter w_sign would be redundant if the weights carry a sign, right? In principle, one could extract the sign just from the initial weight, rather than each time the synapse is updated. Allowing positive or negative Wmin and Wmax is also problematic: users may be tempted to use Wmin<0 and Wmax>0.

Defining weights to be always positive and then adding the sign of the synapse on the postsynaptic side via the input port is actually a quite nice idea. It's also closer to biology where "weights" primarily correspond to (effective) "conductances" (which are always >0) and the sign is determined by the reversal potential (see above).

Do we actually need to account for cases where the sign of the weight may change during learning? This is certainly not what we have in nature, but I'm thinking of studies comparing the performance of a network respecting Dale's principle with an alternative (machine-learning) solution which doesn't care about nature.

@clinssen (Contributor, Author) commented Aug 4, 2025

Thanks for the comments!

An additional parameter w_sign would be redundant if the weights carry a sign, right? In principle, one could extract the sign just from the initial weight, rather than each time the synapse is updated.

Yes, but in NESTML this runs into the problem that, as a matter of principle, parameter initial values cannot depend on state initial values; this is how the initialisation order is defined in the language right now. So an extra parameter w_sign would have to be introduced (or we would have to reconsider the initialisation sequence in the language).

Defining weights to be always positive and then adding the sign of the synapse on the postsynaptic side via the input port is actually a quite nice idea. It's also closer to biology where "weights" primarily correspond to (effective) "conductances" (which are always >0) and the sign is determined by the reversal potential (see above).

Cool! Does that mean that you would be okay with the PR as it is now? It enforces all STDP-type synapse weights to be >= 0 at all times (and the same for Wmin and Wmax).

Do we actually need to account for cases where the sign of the weight may change during learning? This is certainly not what we have in nature, but I'm thinking of studies comparing the performance of a network respecting Dale's principle with an alternative (machine-learning) solution which doesn't care about nature.

That would be an interesting use case to research, but at the moment, I guess it's exactly the thing we want to prevent from happening inadvertently. Perhaps we could add a short paragraph mentioning Dale's law in the STDP synapse model docstring?

@clinssen requested a review from tomtetzlaff on August 4, 2025, 12:10
@clinssen marked this pull request as ready for review on August 4, 2025, 12:10
@tomtetzlaff (Collaborator) commented Aug 11, 2025

Cool! Does that mean that you would be okay with the PR as it is now? It enforces all STDP-type synapse weights to be >= 0 at all times (and the same for Wmin and Wmax).

Yes, I think that should be fine. But I would suggest adding an example illustrating this strategy, i.e., how to implement STDP for inhibitory synapses.
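Such an example might, under the conventions of this PR, reduce to a sketch like the following (the model name is hypothetical, and the trace dynamics, lambda, and alpha are elided with "..." as in the snippets above): the weight always stays within [Wmin, Wmax] with Wmin >= 0, and the synapse becomes inhibitory purely by being connected to the inhibitory input port of a neuron like the two-port sketch earlier in this thread.

model inh_stdp_synapse_sketch:
    state:
        w real = 1.    # always >= 0; the postsynaptic neuron applies the inhibitory sign
        ...

    parameters:
        Wmax real = 1.    # upper weight bound (> 0)
        Wmin real = 0.    # lower weight bound (>= 0)
        d ms = 1. ms      # synaptic transmission delay
        ...

    onReceive(post_spikes):
        ...
        w = min(w + lambda * pre_trace, Wmax)    # potentiation: inhibition gets stronger

    onReceive(pre_spikes):
        ...
        w = max(Wmin, w - alpha * lambda * post_trace)    # depression: inhibition gets weaker
        emit_spike(w, d)    # deliver the spike with the current non-negative weight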

That would be an interesting use case to research, but at the moment, I guess it's exactly the thing we want to prevent from happening inadvertently.

I agree that with STDP synapses this use case will probably never happen, as most users employing an STDP model would certainly assume that synaptic weights cannot simply switch sign. But we may have this issue with other plasticity types. For e-prop, for example, @akorgor is currently investigating exactly this, i.e., what the effect of the "fixed-sign" constraint is on the learning performance. But this is a different type of plasticity, so it may not be relevant for this PR, right?

Perhaps we could add a short paragraph mentioning Dale's law in the STDP synapse model docstring?

Yes, it's a good idea to document this. I wouldn't refer to this as "Dale's law", though, but rather to the fact that an individual synapse cannot simply switch from one type to another, say, from glutamatergic to GABAergic. Dale's principle is an even stronger statement about the ensemble of all outgoing synapses of a given neuron (all of them having the same sign).

@tomtetzlaff (Collaborator) left a comment

This looks very good. Thanks a lot!

@clinssen (Contributor, Author) commented:

Thank you very much for the review and initial suggestion!

@clinssen merged commit c2620c3 into nest:master on Sep 19, 2025 (12 of 13 checks passed)