A computational role for asymmetric learning rules
Paul Munro, Gerardina Fernandez
Networks that modify weights according to
asymmetric learning rules (such as the typical kernel based on STDP)
must have computational properties different from those exhibited by
networks with symmetric rules, such as Hopfield networks. The dynamics
of such networks remain to be fully analyzed. While the LTP component of
the rule is consistent with Hebb-type learning in the temporal domain, the
LTD component is not so easily explained. Here, we examine the hypothesis
that the role of activity-dependent LTD is to build a network that can learn
robustly in the presence of temporal noise.
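The asymmetric kernel referred to above, with an LTP branch for pre-before-post spike pairings and an LTD branch for the reverse ordering, can be sketched as follows. All parameter names and values here are illustrative assumptions, not quantities taken from this paper:

```python
import numpy as np

def stdp_kernel(dt, a_plus=1.0, a_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of dt = t_post - t_pre (ms).

    Positive dt (pre fires before post) -> potentiation (LTP);
    negative dt (post fires before pre) -> depression (LTD).
    Amplitudes (a_plus, a_minus) and time constants (tau_plus,
    tau_minus in ms) are hypothetical placeholder values.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(
        dt >= 0,
        a_plus * np.exp(-dt / tau_plus),    # LTP side: decays as dt grows
        -a_minus * np.exp(dt / tau_minus),  # LTD side: decays as |dt| grows
    )

# LTP: pre spike 10 ms before post -> positive weight change
print(float(stdp_kernel(10.0)))
# LTD: post spike 10 ms before pre -> negative weight change
print(float(stdp_kernel(-10.0)))
```

Because the sign of the update flips at dt = 0, the kernel is not an even function of the spike-timing difference, which is the asymmetry contrasted with symmetric (e.g., Hopfield-style) rules in the text.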