Add plastic synapse weight sign test #973
Hi Charl,
both excitatory and inhibitory synapses undergo spike-timing-dependent plasticity, see for example
D’amour, J. A., & Froemke, R. C. (2015). Inhibitory and excitatory spike-timing-dependent plasticity in the auditory cortex. Neuron, 86(2), 514-528.
And there are quite a few modeling studies employing inhibitory plasticity, such as
Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C., & Gerstner, W. (2011). Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science, 334(6062), 1569-1573. https://doi.org/10.1126/science.1211095.
So I think we should not prohibit this. This is how I handled this in the past:
```nestml
state:
    w real = 1.    # synaptic weight (>0 for excitatory and <0 for inhibitory synapses)
    ...

parameters:
    ...
    Wmax real = 1. # maximum absolute value of synaptic weight (>0)
    Wmin real = 0. # minimum absolute value of synaptic weight (>=0)

internals:
    w_sign real = w / abs(w)  # sign of synaptic weight

onReceive(post_spikes):
    ...
    w_ real = Wmax * (abs(w) / Wmax + (lambda * (1. - (abs(w) / Wmax))**mu_plus * pre_trace))
    w = w_sign * min(w_, Wmax)

onReceive(pre_spikes):
    ...
    w_ real = Wmax * (abs(w) / Wmax - (alpha * lambda * (abs(w) / Wmax)**mu_minus * post_trace))
    w = w_sign * max(Wmin, w_)
...
```
Perhaps not the most beautiful way of describing the model, but a starting point.
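As a cross-check, the sign-preserving update above can be sketched in plain Python (a minimal sketch: the parameter values and trace inputs are illustrative, not taken from any particular model, and `math.copysign` is used in place of `w / abs(w)` to sidestep the division-by-zero issue discussed below):

```python
import math

Wmax = 1.0      # maximum absolute weight (> 0)
Wmin = 0.0      # minimum absolute weight (>= 0)
lambda_ = 0.01  # learning rate (illustrative value)
mu_plus = 1.0   # potentiation exponent (illustrative value)
mu_minus = 1.0  # depression exponent (illustrative value)
alpha = 1.0     # asymmetry factor (illustrative value)

def facilitate(w, w_sign, pre_trace):
    """Potentiation on a postsynaptic spike: abs(w) grows toward Wmax."""
    w_ = Wmax * (abs(w) / Wmax
                 + lambda_ * (1.0 - abs(w) / Wmax) ** mu_plus * pre_trace)
    return w_sign * min(w_, Wmax)

def depress(w, w_sign, post_trace):
    """Depression on a presynaptic spike: abs(w) shrinks toward Wmin."""
    w_ = Wmax * (abs(w) / Wmax
                 - alpha * lambda_ * (abs(w) / Wmax) ** mu_minus * post_trace)
    return w_sign * max(Wmin, w_)

# Inhibitory synapse: potentiation makes the weight more negative.
w = -0.5
w_sign = math.copysign(1.0, w)  # safe sign, unlike w / abs(w) at w == 0
w = facilitate(w, w_sign, pre_trace=1.0)
```

Because both rules update `abs(w)` and re-apply the stored sign at the end, an inhibitory weight stays negative and its magnitude is bounded by the same `Wmin`/`Wmax` interval as an excitatory one.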
And yes, potentiating an inhibitory synapse would correspond to a more negative synaptic weight.
In real life, synapses are better described as conductances, which are always positive. The sign of the synaptic weight is then determined by the driving force, i.e., the difference between the reversal potential and the membrane potential of the target compartment. The above-mentioned study by Vogels et al. (2011) indeed uses conductance-based synapses. Still, I think we should permit inhibitory plasticity for current-based synapses as well. Some time ago, there was also a discussion on this in the NEST mailing list:
Indeed, inhibitory STDP is absolutely crucial to model. However, I thought that this could be captured by the postsynaptic neuron having a separate input port for inhibitory spikes, which arrive with weight >= 0 and are subsequently handled with the appropriate sign on the postsynaptic side. However, if the STDP synapse model is to support postsynaptic neuron models that have only one spiking input port and differentiate excitatory from inhibitory spikes on the basis of the sign of the weight, then an adjustment is indeed needed. About the implementation: if all is as it should be, internals are not allowed to depend on state variables (only on parameters and other internals), so this piece of code would result in an error. It also has a potential division by zero when the weight becomes equal to 0. Alternatively, we could allow a signed Wmin and Wmax to indicate the sign of the weight.
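The signed-bounds alternative mentioned above could look roughly like this (a hypothetical sketch; `clip_update` is an illustrative helper, not part of NESTML):

```python
# Hypothetical sketch of the signed-bounds alternative: instead of
# tracking the sign separately (which requires dividing by abs(w) and
# breaks when w == 0), let Wmin and Wmax themselves carry the sign.
def clip_update(w_new, Wmin, Wmax):
    """Clamp an updated weight into the signed interval [Wmin, Wmax]."""
    return min(max(w_new, Wmin), Wmax)

# Excitatory synapse: bounds [0, 1], so the weight can never go negative.
w_exc = clip_update(1.3, 0.0, 1.0)

# Inhibitory synapse: bounds [-1, 0]; potentiation drives w toward Wmin,
# and the weight can never cross zero into excitatory territory.
w_inh = clip_update(-1.4, -1.0, 0.0)
```

With this scheme the sign never has to be derived from the current weight, so the update rule stays well-defined even when the weight reaches 0.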
Defining weights to be always positive and then adding the sign of the synapse on the postsynaptic side via the input port is actually quite a nice idea. It's also closer to biology, where "weights" primarily correspond to (effective) "conductances" (which are always > 0) and the sign is determined by the reversal potential (see above). Do we actually need to account for cases where the sign of the weight may change during learning? This is certainly not what we have in nature, but I'm thinking of studies comparing the performance of a network respecting Dale's principle with an alternative (machine learning) solution that doesn't care about nature.
Thanks for the comments!
Yes, but in NESTML this runs into the problem that, in principle, parameter initial values cannot depend on state initial values. This is how the initialisation order is currently defined in the language. So an extra parameter would be needed to hold the sign.
Cool! Does that mean that you would be okay with the PR as it is now? It enforces all STDP-type synapse weights to be >= 0 at all times (and the same for Wmin and Wmax).
That would be an interesting use case to research, but at the moment I guess it's exactly the thing we want to prevent from happening inadvertently. Perhaps we could add a short paragraph mentioning Dale's law in the STDP synapse model docstring?
Yes, I think that should be fine. But I would suggest adding an example illustrating this strategy, i.e., how to implement STDP for inhibitory synapses.
I agree that with STDP synapses this use case will probably never occur, as most users employing an STDP model would certainly assume that synaptic weights cannot simply switch sign. But we may have this issue with other plasticity types. For e-prop, for example, @akorgor is currently investigating exactly this, i.e., what the effect of the "fixed-sign" constraint is on learning performance. But this is a different type of plasticity, so it may not be relevant for this PR, right?
Yes, it's a good idea to document this. I wouldn't refer to this as "Dale's law" though, but rather to the fact that an individual synapse cannot simply switch from one type to another, say, from glutamatergic to GABAergic. Dale's principle is an even stronger statement on the ensemble of all outgoing synapses of a given neuron (all of them having the same sign).
This looks very good. Thanks a lot!
Thank you very much for the review and initial suggestion!
With thanks to @tomtetzlaff!
Negative weights are now simply disallowed on STDP synapses. I could imagine a way to define negative weights in STDP, but it might be confusing, as a more negative weight would mean a stronger (inhibitory) synapse. @tomtetzlaff: what do you think?