@@ -382,6 +382,7 @@
<h1>List of Publications<a class="headerlink" href="#list-of-publications" title="Permalink to this heading">#</a></h1>
<p>List of publications & submissions using Minigrid or BabyAI (please open a pull request to add missing entries):</p>
<ul class="simple">
+<li><p><a class="reference external" href="https://arxiv.org/abs/2301.10119">Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning</a> (Mila, McGill University, CoLLAs 2023)</p></li>
<li><p><a class="reference external" href="https://arxiv.org/abs/2304.10770">DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards</a> (U-Tokyo, Google Brain, IJCAI 2023)</p></li>
<li><p><a class="reference external" href="https://arxiv.org/abs/2211.16838">Towards Improving Exploration in Self-Imitation Learning using Intrinsic Motivation</a> (TECNALIA, IEEE ADPRL 2022)</p></li>
<li><p><a class="reference external" href="https://arxiv.org/abs/2205.11184">An Evaluation Study of Intrinsic Motivation Techniques applied to Reinforcement Learning over Hard Exploration Environments</a> (TECNALIA, CD-MAKE 2022)</p></li>