@@ -381,6 +381,7 @@
<ul class="simple">
<li><p><a class="reference external" href="https://arxiv.org/abs/2402.03479">DRED: Zero-Shot Transfer in Reinforcement Learning via Data-Regularised Environment Design</a> (University of Edinburgh, ICML 2024)</p></li>
<li><p><a class="reference external" href="https://prl-theworkshop.github.io/prl2024-icaps/papers/8.pdf">Conviction-Based Planning for Sparse Reward Reinforcement Learning Problems</a> (UQÀM, PRL @ ICAPS 2024)</p></li>
+<li><p><a class="reference external" href="https://github.com/Pascalson/KGRL">Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning</a> (UC San Diego, UC Santa Barbara, NeurIPS 2023)</p></li>
<li><p><a class="reference external" href="https://metadriverse.github.io/pvp/">Learning from Active Human Involvement through Proxy Value Propagation</a> (UCLA, NeurIPS Spotlight 2023)</p></li>
<li><p><a class="reference external" href="https://arxiv.org/pdf/2308.13661.pdf">Go Beyond Imagination: Maximizing Episodic Reachability with World Models</a> (UMich, ICML 2023)</p></li>
<li><p><a class="reference external" href="https://arxiv.org/abs/2205.15752">Hierarchies of Reward Machines</a> (Imperial College London, ILASP, Universitat Pompeu Fabra, ICML 2023)</p></li>