Model-based reinforcement learning under concurrent schedules of reinforcement in rodents.

DC Field: Value
dc.contributor.author: Huh, N
dc.contributor.author: Jo, S
dc.contributor.author: Kim, H
dc.contributor.author: Sul, JH
dc.contributor.author: Jung, MW
dc.date.accessioned: 2010-11-29T04:45:30Z
dc.date.available: 2010-11-29T04:45:30Z
dc.date.issued: 2009
dc.identifier.issn: 1072-0502
dc.identifier.uri: http://repository.ajou.ac.kr/handle/201003/332
dc.description.abstract: Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's knowledge or model of the environment in model-based reinforcement learning algorithms. To investigate how animals update value functions, we trained rats under two different free-choice tasks. The reward probability of the unchosen target remained unchanged in one task, whereas it increased over time since the target was last chosen in the other task. The results show that goal choice probability increased as a function of the number of consecutive alternative choices in the latter, but not the former task, indicating that the animals were aware of time-dependent increases in arming probability and used this information in choosing goals. In addition, the choice behavior in the latter task was better accounted for by a model-based reinforcement learning algorithm. Our results show that rats adopt a decision-making process that cannot be accounted for by simple reinforcement learning models even in a relatively simple binary choice task, suggesting that rats can readily improve their decision-making strategy through the knowledge of their environments.
dc.format: text/plain
dc.language.iso: en
dc.subject.MESH: Algorithms
dc.subject.MESH: Animals
dc.subject.MESH: Decision Making
dc.subject.MESH: Models, Neurological
dc.subject.MESH: Models, Theoretical
dc.subject.MESH: Rats
dc.subject.MESH: Reinforcement (Psychology)
dc.subject.MESH: Reward
dc.title: Model-based reinforcement learning under concurrent schedules of reinforcement in rodents.
dc.type: Article
dc.identifier.pmid: 19403794
dc.identifier.url: http://www.learnmem.org/cgi/pmidlookup?view=long&pmid=19403794
dc.contributor.affiliatedAuthor: 허, 남정
dc.contributor.affiliatedAuthor: 정, 민환
dc.type.local: Journal Papers
dc.identifier.doi: 10.1101/lm.1295509
dc.citation.title: Learning & memory (Cold Spring Harbor, N.Y.)
dc.citation.volume: 16
dc.citation.number: 5
dc.citation.date: 2009
dc.citation.startPage: 315
dc.citation.endPage: 323
dc.identifier.bibliographicCitation: Learning & memory (Cold Spring Harbor, N.Y.), 16(5): 315-323, 2009
dc.identifier.eissn: 1549-5485
dc.relation.journalid: J010720502
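The abstract contrasts a simple (model-free) update rule, which adjusts a target's value only through trial-and-error feedback, with a model-based rule that exploits knowledge of the task's arming schedule. The following is a minimal Python sketch of that contrast, not the model fitted in the paper: the learning rate alpha, the per-trial arming probability p, and the 1 - (1 - p)^n form of the schedule are illustrative assumptions.

    ALPHA = 0.2    # learning rate (assumed value, not from the paper)
    P_BASE = 0.1   # per-trial arming probability (assumed value)

    def simple_rl_update(value, reward, alpha=ALPHA):
        # Model-free update: the value estimate moves toward the observed
        # reward, but only on trials where the target is actually chosen;
        # an unchosen target's value stays frozen.
        return value + alpha * (reward - value)

    def armed_probability(n_unchosen, p=P_BASE):
        # Chance the unchosen target is armed after n_unchosen consecutive
        # trials without being chosen, assuming an independent arming
        # chance p on each trial: 1 - (1 - p)**n_unchosen.
        return 1.0 - (1.0 - p) ** n_unchosen

    def model_based_value(n_unchosen, reward_size=1.0, p=P_BASE):
        # Model-based estimate: derive the unchosen target's value directly
        # from the known arming schedule instead of waiting for feedback.
        return reward_size * armed_probability(n_unchosen, p)

    for n in range(1, 6):
        print(n, round(model_based_value(n), 3))  # 0.1, 0.19, 0.271, ...

Under this sketch the model-based value of the unchosen target rises with every consecutive alternative choice, mirroring the reported increase in goal choice probability, while the model-free value stays fixed until that target is sampled again.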
Appears in Collections:
Journal Papers > Research Organization > BK21
Journal Papers > Research Organization > Institute for Medical Sciences
Files in This Item:
There are no files associated with this item.
