{"_buckets": {"deposit": "3cbcac06-23e9-4d56-aad6-ea58a7f4b32a"}, "_deposit": {"id": "8954", "owners": [], "pid": {"revision_id": 0, "type": "depid", "value": "8954"}, "status": "published"}, "_oai": {"id": "oai:kitami-it.repo.nii.ac.jp:00008954", "sets": ["1:86"]}, "author_link": ["273", "90493", "90494", "90495", "90496", "90497", "90498", "90499"], "item_1646810750418": {"attribute_name": "\u51fa\u7248\u30bf\u30a4\u30d7", "attribute_value_mlt": [{"subitem_version_resource": "http://purl.org/coar/version/c_970fb48d4fbd8a85", "subitem_version_type": "VoR"}]}, "item_3_alternative_title_198": {"attribute_name": "\u305d\u306e\u4ed6\u306e\u30bf\u30a4\u30c8\u30eb", "attribute_value_mlt": [{"subitem_alternative_title": "The Optimal Algorithms for the Reinforcement Learning Problem Separated into a Learning Period and a Control Period", "subitem_alternative_title_language": "en"}]}, "item_3_biblio_info_186": {"attribute_name": "\u66f8\u8a8c\u60c5\u5831", "attribute_value_mlt": [{"bibliographicIssueDates": {"bibliographicIssueDate": "1998-04-15", "bibliographicIssueDateType": "Issued"}, "bibliographicIssueNumber": "4", "bibliographicPageEnd": "1126", "bibliographicPageStart": "1116", "bibliographicVolumeNumber": "39", "bibliographic_titles": [{"bibliographic_title": "\u60c5\u5831\u51e6\u7406\u5b66\u4f1a\u8ad6\u6587\u8a8c"}, {"bibliographic_title": "Transactions of Information Processing Society of Japan", "bibliographic_titleLang": "en"}]}]}, "item_3_description_184": {"attribute_name": "\u6284\u9332", "attribute_value_mlt": [{"subitem_description": "\u672c\u7814\u7a76\u3067\u306f\uff0c\u9077\u79fb\u78ba\u7387\u884c\u5217\u304c\u672a\u77e5\u3067\u3042\u308b\u3088\u3046\u306a\u30de\u30eb\u30b3\u30d5\u6c7a\u5b9a\u904e\u7a0b\u306b\u3088\u3063\u3066\u30e2\u30c7\u30eb\u5316\u3055\u308c\u3066\u3044\u308b\uff0c\u5b66\u7fd2\u671f\u9593\u3068\u5236\u5fa1\u671f\u9593\u306b\u5206\u5272\u3055\u308c\u305f\u5f37\u5316\u5b66\u7fd2\u554f\u984c\u306b\u304a\u3051\u308b\uff0c\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u63d0\u6848\u3092\u884c\u3063\u3066\u3044\u308b\uff0e\u5f93\u6765\u7814\u7a76\u3067\u306f\uff0c\u771f\u306e\u9077\u79fb\u78ba\u7387\u884c\u5217\u3092\u540c\u5b9a\u3067\u304d\u308c\u3070\u5236\u5fa1\u671f\u9593\u306e\u53ce\u76ca\u3092\u6700\u5927\u5316\u3067\u304d\u308b\u305f\u3081\uff0c\u5b66\u7fd2\u671f\u9593\u306e\u76ee\u7684\u3092\u5358\u306b\u672a\u77e5\u306e\u9077\u79fb\u78ba\u7387\u884c\u5217\u306e\u63a8\u5b9a\u3068\u3057\u3066\u3044\u308b\u304c\uff0c\u6709\u9650\u306e\u5b66\u7fd2\u671f\u9593\u306e\u3082\u3068\u3067\u306f\u63a8\u5b9a\u8aa4\u5dee\u304c\u3042\u308b\u305f\u3081\uff0c\u53ce\u76ca\u6700\u5927\u5316\u306e\u53b3\u5bc6\u306a\u4fdd\u8a3c\u306f\u306a\u3044\uff0e\u305d\u3053\u3067\u672c\u7814\u7a76\u3067\u306f\uff0c\u6709\u9650\u306e\u5b66\u7fd2\u671f\u9593\u3068\u6709\u9650\u306e\u5236\u5fa1\u671f\u9593\u306e\u5f37\u5316\u5b66\u7fd2\u554f\u984c\u306b\u304a\u3044\u3066\uff0c\u5236\u5fa1\u671f\u9593\u306e\u53ce\u76ca\u3092\u30d9\u30a4\u30ba\u57fa\u6e96\u306e\u3082\u3068\u3067\u6700\u5927\u5316\u3059\u308b\u57fa\u672c\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u63d0\u6848\u3059\u308b\uff0e\u3057\u304b\u3057\uff0c\u57fa\u672c\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u8a08\u7b97\u91cf\u304c\u6307\u6570\u30aa\u30fc\u30c0\u30fc\u306e\u305f\u3081\uff0c\u3055\u3089\u306b\u305d\u306e\u6539\u826f\u3092\u884c\u3044\uff0c\u6539\u826f\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u63d0\u6848\u3059\u308b\uff0e\u6539\u826f\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30b
a\u30e0\u306f\u57fa\u672c\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u540c\u69d8\u306b\u53ce\u76ca\u3092\u30d9\u30a4\u30ba\u57fa\u6e96\u306e\u3082\u3068\u3067\u6700\u5927\u5316\u3059\u308b\u3053\u3068\u304c\u3067\u304d\uff0c\u304b\u3064\u305d\u306e\u8a08\u7b97\u91cf\u306f\u591a\u9805\u5f0f\u30aa\u30fc\u30c0\u30fc\u306b\u8efd\u6e1b\u3055\u308c\u3066\u3044\u308b\uff0e", "subitem_description_type": "Abstract"}, {"subitem_description": "[ENG]\nIn this paper,new algorithms are proposed based on statistical decision theory in the field of Markov decision processes under the condition that a transition probability matrix is unknown.In previous researches on RL(reinforcement learning),learning is based on only the estimation of an unknown transition probability matrix and the maximum reward is not received in a finite period,though their purpose is to maximize a reward.In our algorithms it is possible to maximize the reward within a finite period with respect to Bayes criterion.Moreover, we propose some techniques to reduce the computational complexity of our algorithm from exponential order to polynomial order", "subitem_description_type": "Abstract"}]}, "item_3_publisher_212": {"attribute_name": "\u51fa\u7248\u8005", "attribute_value_mlt": [{"subitem_publisher": "\u60c5\u5831\u51e6\u7406\u5b66\u4f1a"}]}, "item_3_relation_208": {"attribute_name": "\u8ad6\u6587ID\uff08NAID\uff09", "attribute_value_mlt": [{"subitem_relation_type_id": {"subitem_relation_type_id_text": "110002722119", "subitem_relation_type_select": "NAID"}}]}, "item_3_select_195": {"attribute_name": "\u8457\u8005\u7248\u30d5\u30e9\u30b0", "attribute_value_mlt": [{"subitem_select_item": "publisher"}]}, "item_3_source_id_187": {"attribute_name": "ISSN", "attribute_value_mlt": [{"subitem_source_identifier": "1882-7764", "subitem_source_identifier_type": "PISSN"}]}, "item_3_source_id_189": {"attribute_name": "\u66f8\u8a8c\u30ec\u30b3\u30fc\u30c9ID", "attribute_value_mlt": [{"subitem_source_identifier": "AN00116647", "subitem_source_identifier_type": "NCID"}]}, "item_access_right": {"attribute_name": "\u30a2\u30af\u30bb\u30b9\u6a29", "attribute_value_mlt": [{"subitem_access_right": "open access", "subitem_access_right_uri": "http://purl.org/coar/access_right/c_abf2"}]}, "item_creator": {"attribute_name": "\u8457\u8005", "attribute_type": "creator", "attribute_value_mlt": [{"creatorNames": [{"creatorName": "\u524d\u7530, \u5eb7\u6210", "creatorNameLang": "ja"}], "nameIdentifiers": [{"nameIdentifier": "273", "nameIdentifierScheme": "WEKO"}, {"nameIdentifier": "30422033", "nameIdentifierScheme": "KAKEN - \u7814\u7a76\u8005\u691c\u7d22", "nameIdentifierURI": "https://nrid.nii.ac.jp/ja/nrid/1000030422033/"}]}, {"creatorNames": [{"creatorName": "\u6d6e\u7530, \u5584\u6587", "creatorNameLang": "ja"}], "nameIdentifiers": [{"nameIdentifier": "90493", "nameIdentifierScheme": "WEKO"}]}, {"creatorNames": [{"creatorName": "\u677e\u5d8b, \u654f\u6cf0", "creatorNameLang": "ja"}], "nameIdentifiers": [{"nameIdentifier": "90494", "nameIdentifierScheme": "WEKO"}]}, {"creatorNames": [{"creatorName": "\u5e73\u6fa4, \u8302\u4e00", "creatorNameLang": "ja"}], "nameIdentifiers": [{"nameIdentifier": "90495", "nameIdentifierScheme": "WEKO"}]}, {"creatorNames": [{"creatorName": "MAEDA, Yasunari", "creatorNameLang": "en"}], "nameIdentifiers": [{"nameIdentifier": "90496", "nameIdentifierScheme": "WEKO"}]}, {"creatorNames": [{"creatorName": "UKITA, Yoshihumi", "creatorNameLang": "en"}], "nameIdentifiers": [{"nameIdentifier": "90497", "nameIdentifierScheme": 
"WEKO"}]}, {"creatorNames": [{"creatorName": "MATSUSHIMA, Toshiyasu", "creatorNameLang": "en"}], "nameIdentifiers": [{"nameIdentifier": "90498", "nameIdentifierScheme": "WEKO"}]}, {"creatorNames": [{"creatorName": "HIRASAWA, Shigeichi", "creatorNameLang": "en"}], "nameIdentifiers": [{"nameIdentifier": "90499", "nameIdentifierScheme": "WEKO"}]}]}, "item_files": {"attribute_name": "\u30d5\u30a1\u30a4\u30eb\u60c5\u5831", "attribute_type": "file", "attribute_value_mlt": [{"accessrole": "open_date", "date": [{"dateType": "Available", "dateValue": "2021-01-20"}], "displaytype": "detail", "download_preview_message": "", "file_order": 0, "filename": "\u60c5\u5831\u51e6\u7406\u5b66\u4f1a\u8ad6\u6587\u8a8c, 39(4), pp.1116-1126.pdf", "filesize": [{"value": "1.3 MB"}], "format": "application/pdf", "future_date_message": "", "is_thumbnail": false, "licensetype": "license_note", "mimetype": "application/pdf", "size": 1300000.0, "url": {"label": "\u60c5\u5831\u51e6\u7406\u5b66\u4f1a\u8ad6\u6587\u8a8c, 39(4), pp.1116-1126", "url": "https://kitami-it.repo.nii.ac.jp/record/8954/files/\u60c5\u5831\u51e6\u7406\u5b66\u4f1a\u8ad6\u6587\u8a8c, 39(4), pp.1116-1126.pdf"}, "version_id": "3a2e7452-3188-4193-af96-5dd34e88cf82"}]}, "item_language": {"attribute_name": "\u8a00\u8a9e", "attribute_value_mlt": [{"subitem_language": "jpn"}]}, "item_resource_type": {"attribute_name": "\u8cc7\u6e90\u30bf\u30a4\u30d7", "attribute_value_mlt": [{"resourcetype": "journal article", "resourceuri": "http://purl.org/coar/resource_type/c_6501"}]}, "item_title": "\u5b66\u7fd2\u671f\u9593\u3068\u5236\u5fa1\u671f\u9593\u306b\u5206\u5272\u3055\u308c\u305f\u5f37\u5316\u5b66\u7fd2\u554f\u984c\u306b\u304a\u3051\u308b\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u63d0\u6848", "item_titles": {"attribute_name": "\u30bf\u30a4\u30c8\u30eb", "attribute_value_mlt": [{"subitem_title": "\u5b66\u7fd2\u671f\u9593\u3068\u5236\u5fa1\u671f\u9593\u306b\u5206\u5272\u3055\u308c\u305f\u5f37\u5316\u5b66\u7fd2\u554f\u984c\u306b\u304a\u3051\u308b\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u63d0\u6848", "subitem_title_language": "ja"}, {"subitem_title": "The Optimal Algorithms for the Reinforcement Learning Problem Separated into a Learning Period and a Control Period", "subitem_title_language": "en"}]}, "item_type_id": "3", "owner": "1", "path": ["1/86"], "permalink_uri": "https://kitami-it.repo.nii.ac.jp/records/8954", "pubdate": {"attribute_name": "PubDate", "attribute_value": "2021-01-20"}, "publish_date": "2021-01-20", "publish_status": "0", "recid": "8954", "relation": {}, "relation_version_is_last": true, "title": ["\u5b66\u7fd2\u671f\u9593\u3068\u5236\u5fa1\u671f\u9593\u306b\u5206\u5272\u3055\u308c\u305f\u5f37\u5316\u5b66\u7fd2\u554f\u984c\u306b\u304a\u3051\u308b\u6700\u9069\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u63d0\u6848"], "weko_shared_id": -1}
The Optimal Algorithms for the Reinforcement Learning Problem Separated into a Learning Period and a Control Period (学習期間と制御期間に分割された強化学習問題における最適アルゴリズムの提案)
https://kitami-it.repo.nii.ac.jp/records/8954
Name / File | License
---|---
情報処理学会論文誌, 39(4), pp.1116-1126.pdf (application/pdf, 1.3 MB, available 2021-01-20) |
Item type | 学術雑誌論文 / Journal Article
---|---
Date released | 2021-01-20
Title (ja) | 学習期間と制御期間に分割された強化学習問題における最適アルゴリズムの提案
Title (en) | The Optimal Algorithms for the Reinforcement Learning Problem Separated into a Learning Period and a Control Period
Language | jpn
Resource type | journal article (http://purl.org/coar/resource_type/c_6501)
Access rights | open access (http://purl.org/coar/access_right/c_abf2)
Alternative title (en) | The Optimal Algorithms for the Reinforcement Learning Problem Separated into a Learning Period and a Control Period
Creators | 前田, 康成 (MAEDA, Yasunari); 浮田, 善文 (UKITA, Yoshihumi); 松嶋, 敏泰 (MATSUSHIMA, Toshiyasu); 平澤, 茂一 (HIRASAWA, Shigeichi)
Abstract (ja, translated) | This study proposes optimal algorithms for a reinforcement learning problem that is divided into a learning period and a control period and is modeled as a Markov decision process whose transition probability matrix is unknown. In previous work, because the control-period reward can be maximized once the true transition probability matrix is identified, the goal of the learning period is simply to estimate the unknown transition probability matrix; under a finite learning period, however, estimation error remains, so there is no strict guarantee of reward maximization. This study therefore proposes a basic optimal algorithm that, for a problem with a finite learning period and a finite control period, maximizes the control-period reward with respect to the Bayes criterion. Because the computational complexity of the basic optimal algorithm is of exponential order, it is further refined into an improved optimal algorithm. Like the basic optimal algorithm, the improved optimal algorithm maximizes the reward with respect to the Bayes criterion, and its computational complexity is reduced to polynomial order.
Abstract (en) | In this paper, new algorithms are proposed based on statistical decision theory in the field of Markov decision processes under the condition that the transition probability matrix is unknown. In previous research on RL (reinforcement learning), learning is based only on the estimation of an unknown transition probability matrix, and the maximum reward is not received in a finite period, though the purpose is to maximize the reward. In our algorithms it is possible to maximize the reward within a finite period with respect to the Bayes criterion. Moreover, we propose some techniques to reduce the computational complexity of our algorithm from exponential order to polynomial order. (An illustrative sketch of these ideas follows the metadata table below.)
Bibliographic information | 情報処理学会論文誌 (Transactions of Information Processing Society of Japan), Vol. 39, No. 4, pp. 1116-1126, issued 1998-04-15
ISSN (PISSN) | 1882-7764
Bibliographic record ID (NCID) | AN00116647
Article ID (NAID) | 110002722119
Publisher | 情報処理学会 (Information Processing Society of Japan)
Author version flag | publisher
Publication type | VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85)
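The abstracts above rest on two ingredients: a posterior over the unknown transition probability matrix learned from a finite learning period, and a control-period policy evaluated under the Bayes criterion (expected reward averaged over that posterior). The paper's basic and improved optimal algorithms are not reproduced here, since this record gives no implementation detail. The following is only a minimal sketch of the setting, assuming a symmetric Dirichlet prior on each row of the transition matrix, known rewards, and a certainty-equivalent simplification that plans against the posterior-mean transition matrix; the function names and toy numbers are hypothetical.

```python
# Illustrative sketch only -- NOT the paper's basic or improved optimal
# algorithm. It shows (1) a Dirichlet posterior over an unknown transition
# matrix built from learning-period counts and (2) finite-horizon backward
# induction for the control period, using the posterior mean as a
# certainty-equivalent stand-in for the exact Bayes-criterion computation.
import numpy as np

def dirichlet_posterior_mean(counts, prior=1.0):
    """Posterior-mean transition matrix from learning-period transition counts.

    counts[s, a, s'] = number of observed transitions s --a--> s'.
    A symmetric Dirichlet(prior) is assumed for each (s, a) row.
    """
    alpha = counts + prior
    return alpha / alpha.sum(axis=2, keepdims=True)

def finite_horizon_policy(P, R, horizon):
    """Backward induction over a control period of the given length.

    P[s, a, s'] : (estimated) transition probabilities
    R[s, a]     : expected one-step reward
    Returns the per-step greedy decision rules and the step-0 value function.
    """
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    policies = []
    for _ in range(horizon):
        Q = R + P @ V            # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] * V[s']
        policies.append(Q.argmax(axis=1))
        V = Q.max(axis=1)
    policies.reverse()           # policies[t] is the decision rule at step t
    return policies, V

# Hypothetical toy problem: 3 states, 2 actions, simulated learning-period data.
rng = np.random.default_rng(0)
counts = rng.integers(0, 20, size=(3, 2, 3)).astype(float)  # observed transitions
R = rng.random((3, 2))                                       # known rewards
P_hat = dirichlet_posterior_mean(counts)
policies, V0 = finite_horizon_policy(P_hat, R, horizon=10)
print("first-step policy:", policies[0], "value estimate:", V0)
```

Note that the exact Bayes-criterion value must account for the same unknown matrix governing every control step, which is what makes the basic algorithm exponential in the abstract's description; the improved algorithm's reduction to polynomial order is not reflected in this certainty-equivalent simplification.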