Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12259/54303
Type of publication: research article
Type of publication (PDB): Article in Clarivate Analytics Web of Science (S1)
Field of Science: Informatics (N009)
Author(s): Tamošiūnaitė, Minija;Ainge, James;Kulvičius, Tomas;Porr, Bernd;Dudchenko, Paul;Wörgötter, Florentin
Title: Path-finding in real and simulated rats : assessing the influence of path characteristics on navigation learning
Is part of: Journal of Computational Neuroscience. Dordrecht: Springer, 2008, Vol. 25, no. 3
Extent: p. 562-582
Date: 2008
Keywords: Reinforcement learning;SARSA;Place field system;Function approximation;Weight decay
Abstract: A large body of experimental evidence suggests that the hippocampal place field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL algorithm to the place-field map in a rat. The convergence properties of RL algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with a mean length of 23 cm, up to a maximal length of 80 cm, take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed an RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PF) of realistic size and coverage. Because convergence of RL algorithms is also influenced by the state-space characteristics, different PF sizes and densities, leading to different degrees of overlap, were also investigated. The model rat learns to find a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing strongly impairs the convergence of learning. When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become 'wiggly'. To mend this situation without affecting the path characteristics, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract the effects of getting trapped on a wrong path.
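The two stabilizing mechanisms named in the abstract can be sketched as a SARSA update with linear function approximation over place-field activations, augmented with multiplicative weight decay and a path-length cutoff. This is a minimal illustrative sketch, not the authors' implementation: all names, parameter values, and the specific decay scheme here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative, not taken from the paper).
n_place_fields = 50      # size of the place-field state representation
n_actions = 4            # e.g. four movement directions
alpha = 0.1              # learning rate
gamma = 0.9              # discount factor
decay = 0.999            # multiplicative weight decay applied each step
max_path_len = 200       # path-length limit: no learning past this point

# Linear function approximation: Q(s, a) = w_a . phi(s),
# where phi(s) is the vector of place-field activations in state s.
w = np.zeros((n_actions, n_place_fields))

def q_value(phi, a):
    """Approximate action value from the active place fields."""
    return w[a] @ phi

def sarsa_step(phi, a, reward, phi_next, a_next, step_count):
    """One SARSA update with weight decay and a path-length cutoff."""
    global w
    w *= decay                     # gradual forgetting of learned weights
    if step_count > max_path_len:  # reward not found in the expected time:
        return                     # suppress learning on this path
    td_error = reward + gamma * q_value(phi_next, a_next) - q_value(phi, a)
    w[a] += alpha * td_error * phi  # gradient step on the active features

# Example: a single update from random place-field activation vectors.
phi = rng.random(n_place_fields)
phi_next = rng.random(n_place_fields)
sarsa_step(phi, a=0, reward=1.0, phi_next=phi_next, a_next=1, step_count=10)
```

Both extra mechanisms bound the system's memory: decay erases stale weights over time, and the cutoff prevents long, rewardless excursions from reinforcing a wrong path.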
Internet: https://doi.org/10.1007/s10827-008-0094-6
Affiliation(s): Vytauto Didžiojo universitetas
Appears in Collections: University Research Publications

Files in This Item:
marc.xml (14.29 kB, XML)

MARC21 XML metadata




Web of Science™ citations: 8 (checked on Apr 24, 2021)
Page view(s): 81 (checked on May 1, 2021)
Download(s): 9 (checked on May 1, 2021)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.