
Create gh-pages branch via GitHub

Jiwon Kim, 9 years ago
Commit 41045fc603
2 changed files with 32 additions and 32 deletions
  1. index.html (+31 −31)
  2. params.json (+1 −1)

+ 31 - 31
index.html

@@ -103,19 +103,19 @@
 (from Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.)</p>
 
 <ul>
-<li>Microsoft (PReLu/Weight Initialization) <a href="http://arxiv.org/pdf/1502.01852v1">[Paper]</a>
+<li>Microsoft (PReLu/Weight Initialization) <a href="http://arxiv.org/pdf/1502.01852">[Paper]</a>
 
 <ul>
 <li>Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, arXiv:1502.01852.</li>
 </ul>
 </li>
-<li>Batch Normalization <a href="http://arxiv.org/pdf/1502.03167v3">[Paper]</a>
+<li>Batch Normalization <a href="http://arxiv.org/pdf/1502.03167">[Paper]</a>
 
 <ul>
 <li>Sergey Ioffe, Christian Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, arXiv:1502.03167.</li>
 </ul>
 </li>
-<li>GoogLeNet <a href="http://arxiv.org/pdf/1409.4842v1">[Paper]</a>
+<li>GoogLeNet <a href="http://arxiv.org/pdf/1409.4842">[Paper]</a>
 
 <ul>
 <li>Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, CVPR, 2015. </li>
@@ -142,13 +142,13 @@
 (from Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.)</p>
 
 <ul>
-<li>OverFeat, NYU <a href="http://arxiv.org/pdf/1311.2901v3">[Paper]</a>
+<li>OverFeat, NYU <a href="http://arxiv.org/pdf/1311.2901">[Paper]</a>
 
 <ul>
 <li>Matthew Zeiler, Rob Fergus, Visualizing and Understanding Convolutional Networks, ECCV, 2014.</li>
 </ul>
 </li>
-<li>R-CNN, UC Berkeley <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf">[Paper-CVPR14]</a> <a href="http://arxiv.org/pdf/1311.2524v5">[Paper-arXiv14]</a>
+<li>R-CNN, UC Berkeley <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf">[Paper-CVPR14]</a> <a href="http://arxiv.org/pdf/1311.2524">[Paper-arXiv14]</a>
 
 <ul>
 <li>Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, CVPR, 2014.</li>
@@ -188,7 +188,7 @@
 </li>
 <li>Hanxi Li, Yi Li and Fatih Porikli, DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking, BMVC, 2014. <a href="http://www.bmva.org/bmvc/2014/files/paper028.pdf">[Paper]</a>
 </li>
-<li>N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. <a href="winsty.net/papers/dlt.pdf">[Paper]</a>
+<li>N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. <a href="http://winsty.net/papers/dlt.pdf">[Paper]</a>
 </li>
 </ul>
 
@@ -196,13 +196,13 @@
 <a id="low-level-vision" class="anchor" href="#low-level-vision" aria-hidden="true"><span class="octicon octicon-link"></span></a>Low-Level Vision</h3>
 
 <ul>
-<li>Optical Flow (FlowNet) <a href="http://arxiv.org/pdf/1504.06852v2">[Paper]</a>
+<li>Optical Flow (FlowNet) <a href="http://arxiv.org/pdf/1504.06852">[Paper]</a>
 
 <ul>
 <li>Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.</li>
 </ul>
 </li>
-<li>Super-Resolution (SRCNN) <a href="http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html">[Web]</a> <a href="http://personal.ie.cuhk.edu.hk/%7Eccloy/files/eccv_2014_deepresolution.pdf">[Paper-ECCV14]</a> <a href="http://arxiv.org/pdf/1501.00092v1.pdf">[Paper-arXiv15]</a><a href="http://www.brml.org/uploads/tx_sibibtex/281.pdf">[Paper ICONIP-2014]</a>
+<li>Super-Resolution (SRCNN) <a href="http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html">[Web]</a> <a href="http://personal.ie.cuhk.edu.hk/%7Eccloy/files/eccv_2014_deepresolution.pdf">[Paper-ECCV14]</a> <a href="http://arxiv.org/pdf/1501.00092.pdf">[Paper-arXiv15]</a><a href="http://www.brml.org/uploads/tx_sibibtex/281.pdf">[Paper ICONIP-2014]</a>
 
 <ul>
 <li>Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.</li>
@@ -210,13 +210,13 @@
 <li>Osendorfer, Christian, Hubert Soyer, and Patrick van der Smagt, Image Super-Resolution with Fast Approximate Convolutional Sparse Coding, ICONIP, 2014. </li>
 </ul>
 </li>
-<li>Compression Artifacts Reduction <a href="http://arxiv.org/pdf/1504.06993v1">[Paper-arXiv15]</a>
+<li>Compression Artifacts Reduction <a href="http://arxiv.org/pdf/1504.06993">[Paper-arXiv15]</a>
 
 <ul>
 <li>Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.</li>
 </ul>
 </li>
-<li>Non-Uniform Motion Blur Removal <a href="http://arxiv.org/pdf/1503.00593v3">[Paper]</a>
+<li>Non-Uniform Motion Blur Removal <a href="http://arxiv.org/pdf/1503.00593">[Paper]</a>
 
 <ul>
 <li>Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015. </li>
@@ -249,13 +249,13 @@
 (from Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.)</p>
 
 <ul>
-<li>Holistically-Nested Edge Detection <a href="http://arxiv.org/pdf/1504.06375v1">[Paper]</a>
+<li>Holistically-Nested Edge Detection <a href="http://arxiv.org/pdf/1504.06375">[Paper]</a>
 
 <ul>
 <li>Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375. </li>
 </ul>
 </li>
-<li>DeepEdge <a href="http://arxiv.org/pdf/1412.1123v3">[Paper]</a>
+<li>DeepEdge <a href="http://arxiv.org/pdf/1412.1123">[Paper]</a>
 
 <ul>
 <li>Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.</li>
@@ -276,19 +276,19 @@
 (from Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.)</p>
 
 <ul>
-<li>BoxSup <a href="http://arxiv.org/pdf/1503.01640v2">[Paper]</a>
+<li>BoxSup <a href="http://arxiv.org/pdf/1503.01640">[Paper]</a>
 
 <ul>
 <li>Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.</li>
 </ul>
 </li>
-<li>Conditional Random Fields as Recurrent Neural Networks <a href="http://arxiv.org/pdf/1502.03240v2">[Paper]</a>
+<li>Conditional Random Fields as Recurrent Neural Networks <a href="http://arxiv.org/pdf/1502.03240">[Paper]</a>
 
 <ul>
 <li>Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240.</li>
 </ul>
 </li>
-<li>Fully Convolutional Networks for Semantic Segmentation <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf">[Paper-CVPR15]</a> <a href="http://arxiv.org/pdf/1411.4038v2">[Paper-arXiv15]</a>
+<li>Fully Convolutional Networks for Semantic Segmentation <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf">[Paper-CVPR15]</a> <a href="http://arxiv.org/pdf/1411.4038">[Paper-arXiv15]</a>
 
 <ul>
 <li>Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.</li>
@@ -338,7 +338,7 @@
 <li>Saurabh Singh, Derek Hoiem, David Forsyth, Learning a Sequential Search for Landmarks, CVPR, 2015.</li>
 </ul>
 </li>
-<li>Multiple Object Recognition with Visual Attention <a href="http://arxiv.org/pdf/1412.7755v2.pdf">[Paper]</a>
+<li>Multiple Object Recognition with Visual Attention <a href="http://arxiv.org/pdf/1412.7755.pdf">[Paper]</a>
 
 <ul>
 <li>Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu, Multiple Object Recognition with Visual Attention, ICLR, 2015.</li>
@@ -404,25 +404,25 @@
 (from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Description, CVPR, 2015.)</p>
 
 <ul>
-<li>Baidu / UCLA <a href="http://arxiv.org/pdf/1410.1090v1">[Paper]</a>
+<li>UCLA / Baidu <a href="http://arxiv.org/pdf/1410.1090">[Paper]</a>
 
 <ul>
 <li>Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, Explain Images with Multimodal Recurrent Neural Networks, arXiv:1410.1090.</li>
 </ul>
 </li>
-<li>Toronto <a href="http://arxiv.org/pdf/1411.2539v1">[Paper]</a>
+<li>Toronto <a href="http://arxiv.org/pdf/1411.2539">[Paper]</a>
 
 <ul>
 <li>Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539.</li>
 </ul>
 </li>
-<li>Berkeley <a href="http://arxiv.org/pdf/1411.4389v3">[Paper]</a>
+<li>Berkeley <a href="http://arxiv.org/pdf/1411.4389">[Paper]</a>
 
 <ul>
 <li>Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, arXiv:1411.4389.</li>
 </ul>
 </li>
-<li>Google <a href="http://arxiv.org/pdf/1411.4555v2">[Paper]</a>
+<li>Google <a href="http://arxiv.org/pdf/1411.4555">[Paper]</a>
 
 <ul>
 <li>Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555.</li>
@@ -434,19 +434,19 @@
 <li>Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Description, CVPR, 2015.</li>
 </ul>
 </li>
-<li>UML / UT <a href="http://arxiv.org/pdf/1412.4729v3">[Paper]</a>
+<li>UML / UT <a href="http://arxiv.org/pdf/1412.4729">[Paper]</a>
 
 <ul>
 <li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT, 2015. </li>
 </ul>
 </li>
-<li>Microsoft / CMU <a href="http://arxiv.org/pdf/1411.5654v1">[Paper]</a>
+<li>CMU / Microsoft <a href="http://arxiv.org/pdf/1411.5654">[Paper]</a>
 
 <ul>
 <li>Xinlei Chen, C. Lawrence Zitnick, Learning a Recurrent Visual Representation for Image Caption Generation, arXiv:1411.5654.</li>
 </ul>
 </li>
-<li>Microsoft <a href="http://arxiv.org/pdf/1411.4952v3">[Paper]</a>
+<li>Microsoft <a href="http://arxiv.org/pdf/1411.4952">[Paper]</a>
 
 <ul>
 <li>Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR, 2015. </li>
@@ -458,25 +458,25 @@
 <a id="video-captioning" class="anchor" href="#video-captioning" aria-hidden="true"><span class="octicon octicon-link"></span></a>Video Captioning</h3>
 
 <ul>
-<li>Berkeley <a href="http://jeffdonahue.com/lrcn/">[Web]</a> <a href="http://arxiv.org/pdf/1411.4389v3.pdf">[Paper]</a>
+<li>Berkeley <a href="http://jeffdonahue.com/lrcn/">[Web]</a> <a href="http://arxiv.org/pdf/1411.4389.pdf">[Paper]</a>
 
 <ul>
 <li>Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.</li>
 </ul>
 </li>
-<li>UT / UML / Berkeley <a href="http://arxiv.org/pdf/1412.4729v3.pdf">[Paper]</a>
+<li>UT / UML / Berkeley <a href="http://arxiv.org/pdf/1412.4729">[Paper]</a>
 
 <ul>
 <li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.</li>
 </ul>
 </li>
-<li>Microsoft <a href="http://arxiv.org/pdf/1505.01861v1.pdf">[Paper]</a>
+<li>Microsoft <a href="http://arxiv.org/pdf/1505.01861">[Paper]</a>
 
 <ul>
 <li>Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.</li>
 </ul>
 </li>
-<li>UT / Berkeley / UML <a href="http://arxiv.org/pdf/1505.00487v2.pdf">[Paper]</a>
+<li>UT / Berkeley / UML <a href="http://arxiv.org/pdf/1505.00487">[Paper]</a>
 
 <ul>
 <li>Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.</li>
@@ -491,25 +491,25 @@
 (from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw:Scene Understanding workshop)</p>
 
 <ul>
-<li>MSR / Virginia Tech. <a href="http://www.visualqa.org/">[Web]</a> <a href="http://arxiv.org/pdf/1505.00468v1.pdf">[Paper]</a>
+<li>Virginia Tech / MSR <a href="http://www.visualqa.org/">[Web]</a> <a href="http://arxiv.org/pdf/1505.00468">[Paper]</a>
 
 <ul>
 <li>Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw:Scene Understanding workshop.</li>
 </ul>
 </li>
-<li>MPI / Berkeley <a href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/visual-turing-challenge/">[Web]</a> <a href="http://arxiv.org/pdf/1505.01121v2.pdf">[Paper]</a>
+<li>MPI / Berkeley <a href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/visual-turing-challenge/">[Web]</a> <a href="http://arxiv.org/pdf/1505.01121">[Paper]</a>
 
 <ul>
 <li>Mateusz Malinowski, Marcus Rohrbach, Mario Fritz, Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, arXiv:1505.01121.</li>
 </ul>
 </li>
-<li>Toronto <a href="http://arxiv.org/pdf/1505.02074v1.pdf">[Paper]</a> <a href="http://www.cs.toronto.edu/%7Emren/imageqa/data/cocoqa/">[Dataset]</a>
+<li>Toronto <a href="http://arxiv.org/pdf/1505.02074">[Paper]</a> <a href="http://www.cs.toronto.edu/%7Emren/imageqa/data/cocoqa/">[Dataset]</a>
 
 <ul>
 <li>Mengye Ren, Ryan Kiros, Richard Zemel, Image Question Answering: A Visual Semantic Embedding Model and a New Dataset, arXiv:1505.02074 / ICML 2015 deep learning workshop.</li>
 </ul>
 </li>
-<li>Baidu / UCLA <a href="http://arxiv.org/pdf/1505.05612v1.pdf">[Paper]</a> <a href="">[Dataset]</a>
+<li>Baidu / UCLA <a href="http://arxiv.org/pdf/1505.05612">[Paper]</a> <a href="">[Dataset]</a>
 
 <ul>
 <li>Hauyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu, Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering, arXiv:1505.05612.</li>

File diff suppressed because it is too large
+ 1 - 1
params.json
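Nearly every change in the diff above strips a version suffix (`v1`, `v2`, …) from an arXiv PDF link, so the links resolve to the latest revision of each paper instead of a pinned one. A normalization like this could be sketched as follows (a hypothetical helper, not part of the repository):

```python
import re

# Match an arXiv PDF URL with a trailing version suffix, e.g.
#   http://arxiv.org/pdf/1502.01852v1      -> http://arxiv.org/pdf/1502.01852
#   http://arxiv.org/pdf/1412.7755v2.pdf   -> http://arxiv.org/pdf/1412.7755.pdf
# The capture group keeps the base identifier; the v\d+ part is dropped.
ARXIV_VERSION = re.compile(r"(arxiv\.org/pdf/\d{4}\.\d{4,5})v\d+")

def normalize_arxiv_links(html: str) -> str:
    """Strip version suffixes from all arXiv PDF links in an HTML string."""
    return ARXIV_VERSION.sub(r"\1", html)
```

Applied to the whole file, this reproduces the bulk of the commit; the remaining hand edits (affiliation reordering such as "Baidu / UCLA" to "UCLA / Baidu", and adding a missing `http://` scheme) would still need to be made manually.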