|
@@ -97,7 +97,7 @@
|
|
|
<a id="imagenet-classification" class="anchor" href="#imagenet-classification" aria-hidden="true"><span class="octicon octicon-link"></span></a>ImageNet Classification</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8451949/327b9566-2022-11e5-8b34-53b4a64c13ad.PNG" alt="classification">
|
|
|
-(from Krizhevsky, A., Sutskever, I. and Hinton, G. E, ImageNet Classification with Deep Convolutional Neural Networks NIPS 2012.)</p>
|
|
|
+(from Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.)</p>
|
|
|
|
|
|
<ul>
|
|
|
<li>Microsoft (PReLU/Weight Initialization) <a href="http://arxiv.org/pdf/1502.01852v1">[Paper]</a> (see the PReLU sketch after this list)
|
|
@@ -115,20 +115,19 @@
|
|
|
<li>GoogLeNet <a href="http://arxiv.org/pdf/1409.4842v1">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, CVPR 2015. </li>
|
|
|
+<li>Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, Going Deeper with Convolutions, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>VGG-Net <a href="http://www.robots.ox.ac.uk/%7Evgg/research/very_deep/">[Web]</a> <a href="http://arxiv.org/pdf/1409.1556">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Karen Simonyan and Andrew Zisserman, Very Deep Convolutional Networks for Large-Scale Visual Recognition, ICLR 2015.</li>
|
|
|
+<li>Karen Simonyan and Andrew Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>AlexNet <a href="http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Krizhevsky, A., Sutskever, I. and Hinton, G. E, ImageNet Classification with Deep Convolutional Neural Networks
|
|
|
-NIPS 2012.</li>
|
|
|
+<li>Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
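<p>PReLU, referenced in the Microsoft entry above, is compact enough to illustrate inline. The following is a minimal numpy sketch of the forward pass, not the authors' code: the 0.25 slope initialization follows the paper, while the tensor shapes are invented for the example.</p>
<pre><code>import numpy as np

def prelu(x, a):
    # PReLU: f(x) = max(0, x) + a * min(0, x), with the slope a learned per channel
    return np.maximum(x, 0.0) + a * np.minimum(x, 0.0)

x = np.random.randn(2, 3, 4, 4)   # toy (batch, channel, height, width) activations
a = np.full((1, 3, 1, 1), 0.25)   # one learnable slope per channel, initialized to 0.25
y = prelu(x, a)                   # negative inputs are scaled by a instead of zeroed
</code></pre>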
|
|
@@ -137,13 +136,13 @@ NIPS 2012.</li>
|
|
|
<a id="object-detection" class="anchor" href="#object-detection" aria-hidden="true"><span class="octicon octicon-link"></span></a>Object Detection</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8452063/f76ba500-2022-11e5-8db1-2cd5d490e3b3.PNG" alt="object_detection">
|
|
|
-(from Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497)</p>
|
|
|
+(from Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.)</p>
|
|
|
|
|
|
<ul>
|
|
|
<li>OverFeat, NYU <a href="http://arxiv.org/pdf/1312.6229">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Matthrew Zeiler, Rob Fergus, Visualizing and Understanding Convolutional Networks, ECCV 2014.</li>
|
|
|
+<li>Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR, 2014.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>R-CNN, UC Berkeley <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf">[Paper-CVPR14]</a> <a href="http://arxiv.org/pdf/1311.2524v5">[Paper-arXiv14]</a> (see the NMS sketch after this list)
|
|
@@ -155,25 +154,25 @@ NIPS 2012.</li>
|
|
|
<li>SPP, Microsoft Research <a href="http://arxiv.org/pdf/1406.4729">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV 2014.</li>
|
|
|
+<li>Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV, 2014.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Fast R-CNN, Microsoft Research <a href="http://arxiv.org/pdf/1504.08083">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Ross Girshick, Fast R-CNN, arXiv:1504.08083</li>
|
|
|
+<li>Ross Girshick, Fast R-CNN, arXiv:1504.08083.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Faster R-CNN, Microsoft Research <a href="http://arxiv.org/pdf/1506.01497">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497</li>
|
|
|
+<li>Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>R-CNN minus R, Oxford <a href="http://arxiv.org/pdf/1506.06981">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Karel Lenc, Andrea Vedaldi, R-CNN minus R, arXiv:1506.06981</li>
|
|
|
+<li>Karel Lenc, Andrea Vedaldi, R-CNN minus R, arXiv:1506.06981.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
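<p>A step shared by the detectors above: each scores many overlapping region proposals and prunes them with greedy non-maximum suppression before reporting detections. Below is a minimal numpy sketch of that pruning step, assuming the usual [x1, y1, x2, y2] box layout; the 0.3 IoU threshold is an illustrative default, not a value taken from these papers.</p>
<pre><code>import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    # Greedy non-maximum suppression: keep the best-scoring box, drop every
    # remaining box that overlaps it too heavily, then repeat on the survivors.
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]               # indices, highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])   # intersection rectangles
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou &lt;= iou_thresh]     # keep only light overlaps
    return keep
</code></pre>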
|
|
@@ -191,39 +190,39 @@ NIPS 2012.</li>
|
|
|
<li>Super-Resolution (SRCNN) <a href="http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html">[Web]</a> <a href="http://personal.ie.cuhk.edu.hk/%7Eccloy/files/eccv_2014_deepresolution.pdf">[Paper-ECCV14]</a> <a href="http://arxiv.org/pdf/1501.00092v1.pdf">[Paper-arXiv15]</a> <a href="http://www.brml.org/uploads/tx_sibibtex/281.pdf">[Paper-ICONIP14]</a> (see the SRCNN sketch after this list)
|
|
|
|
|
|
<ul>
|
|
|
-<li>Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, in ECCV 2014</li>
|
|
|
-<li>Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang. Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092 (2015)</li>
|
|
|
-<li>Osendorfer, Christian, Hubert Soyer, and Patrick van der Smagt. Image Super-Resolution with Fast Approximate Convolutional Sparse Coding. Neural Information Processing. Springer International Publishing, 2014. </li>
|
|
|
+<li>Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.</li>
|
|
|
+<li>Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.</li>
|
|
|
+<li>Christian Osendorfer, Hubert Soyer, Patrick van der Smagt, Image Super-Resolution with Fast Approximate Convolutional Sparse Coding, ICONIP, 2014.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Compression Artifacts Reduction <a href="http://arxiv.org/pdf/1504.06993v1">[Paper-arXiv15]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993</li>
|
|
|
+<li>Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Non-Uniform Motion Blur Removal <a href="http://arxiv.org/pdf/1503.00593v3">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR 2015. </li>
|
|
|
+<li>Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Image Deconvolution <a href="http://lxu.me/projects/dcnn/">[Web]</a> <a href="http://lxu.me/mypapers/dcnn_nips14.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li> Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, "Deep Convolutional Neural Network for Image Deconvolution" Advances in Neural Information Processing Systems (NIPS), 2014.</li>
|
|
|
+<li>Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Deep Edge-Aware Filter <a href="http://jmlr.org/proceedings/papers/v37/xub15.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li> Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia "Deep Edge-Aware Filters" International Conference on Machine Learning (ICML), 2015.</li>
|
|
|
+<li>Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Computing the Stereo Matching Cost with a Convolutional Neural Network <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Zbontar_Computing_the_Stereo_2015_CVPR_paper.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li> Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR 2015.</li>
|
|
|
+<li>Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
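<p>To give a sense of how small these restoration networks are, here is a sketch of an SRCNN-style forward pass. It assumes the base 9-1-5 kernel sizes and 64/32 filter counts reported for SRCNN, with random weights standing in for trained ones, so it illustrates shapes and structure only, not the released implementation.</p>
<pre><code>import numpy as np

def conv2d(x, w, b):
    # Plain valid cross-correlation: x is (C_in, H, W), w is (C_out, C_in, k, k).
    c_out, c_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for u in range(k):
                for v in range(k):
                    y[o] += w[o, i, u, v] * x[i, u:u+H, v:v+W]
        y[o] += b[o]
    return y

relu = lambda t: np.maximum(t, 0.0)

y = np.random.rand(1, 33, 33)   # bicubic-upscaled luminance sub-image, as in the paper
f1 = relu(conv2d(y, 0.01 * np.random.randn(64, 1, 9, 9), np.zeros(64)))    # patch extraction
f2 = relu(conv2d(f1, 0.01 * np.random.randn(32, 64, 1, 1), np.zeros(32)))  # non-linear mapping
out = conv2d(f2, 0.01 * np.random.randn(1, 32, 5, 5), np.zeros(1))         # reconstruction, (1, 21, 21)
</code></pre>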
|
|
@@ -232,7 +231,7 @@ NIPS 2012.</li>
|
|
|
<a id="edge-detection" class="anchor" href="#edge-detection" aria-hidden="true"><span class="octicon octicon-link"></span></a>Edge Detection</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8452371/93ca6f7e-2025-11e5-90f2-d428fd5ff7ac.PNG" alt="edge_detection">
|
|
|
-(from Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR 2015.)</p>
|
|
|
+(from Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.)</p>
|
|
|
|
|
|
<ul>
|
|
|
<li>Holistically-Nested Edge Detection <a href="http://arxiv.org/pdf/1504.06375v1">[Paper]</a>
|
|
@@ -244,13 +243,13 @@ NIPS 2012.</li>
|
|
|
<li>DeepEdge <a href="http://arxiv.org/pdf/1412.1123v3">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR 2015.</li>
|
|
|
+<li>Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>DeepContour <a href="http://mc.eistar.net/UpLoadFiles/Papers/DeepContour_cvpr15.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR 2015.</li>
|
|
|
+<li>Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
|
|
@@ -259,32 +258,32 @@ NIPS 2012.</li>
|
|
|
<a id="semantic-segmentation" class="anchor" href="#semantic-segmentation" aria-hidden="true"><span class="octicon octicon-link"></span></a>Semantic Segmentation</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8452076/0ba8340c-2023-11e5-88bc-bebf4509b6bb.PNG" alt="semantic_segmantation">
|
|
|
-(from Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640)</p>
|
|
|
+(from Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.)</p>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Learning Hierarchical Features for Scene Labeling <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-icml-12.pdf">[Paper-ICML12]</a> <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf">[Paper-PAMI13]</a>
|
|
|
+<li>BoxSup <a href="http://arxiv.org/pdf/1503.01640v2">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.</li>
|
|
|
-<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.</li>
|
|
|
+<li>Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>Fully Convolutional Networks for Semantic Segmentation <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf">[Paper-CVPR15]</a> <a href="http://arxiv.org/pdf/1411.4038v2">[Paper-arXiv15]</a>
|
|
|
+<li>Conditional Random Fields as Recurrent Neural Networks <a href="http://arxiv.org/pdf/1502.03240v2">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.</li>
|
|
|
+<li>Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>Conditional Random Fields as Recurrent Neural Networks <a href="http://arxiv.org/pdf/1502.03240v2">[Paper]</a>
|
|
|
+<li>Fully Convolutional Networks for Semantic Segmentation <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf">[Paper-CVPR15]</a> <a href="http://arxiv.org/pdf/1411.4038v2">[Paper-arXiv15]</a> (see the bilinear-kernel sketch after this list)
|
|
|
|
|
|
<ul>
|
|
|
-<li>Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240</li>
|
|
|
+<li>Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>BoxSup <a href="http://arxiv.org/pdf/1503.01640v2">[Paper]</a>
|
|
|
+<li>Learning Hierarchical Features for Scene Labeling <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-icml-12.pdf">[Paper-ICML12]</a> <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf">[Paper-PAMI13]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640</li>
|
|
|
+<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.</li>
|
|
|
+<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
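<p>One implementation detail behind the FCN entry above that is worth spelling out: the coarse class-score maps are upsampled by in-network "deconvolution" layers whose filters are initialized to bilinear interpolation. A short numpy sketch of that standard kernel construction follows; the 4x4, stride-2 example is an assumed configuration for illustration.</p>
<pre><code>import numpy as np

def bilinear_kernel(size):
    # 2D bilinear interpolation weights: the usual initialization for the
    # filter of an upsampling (fractionally strided) convolution layer.
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

print(bilinear_kernel(4))   # e.g. the filter for a stride-2 (2x) upsampling layer
</code></pre>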
|
|
@@ -293,7 +292,7 @@ NIPS 2012.</li>
|
|
|
<a id="visual-attention-and-saliency" class="anchor" href="#visual-attention-and-saliency" aria-hidden="true"><span class="octicon octicon-link"></span></a>Visual Attention and Saliency</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8452391/cdaa3c7e-2025-11e5-81be-ee5243fe9e7c.png" alt="saliency">
|
|
|
-(from Federico Perazzi, Philipp Krahenbuhl, Yael Pritch, Alexander Hornung, Saliency Filters: Contrast Based Filtering for Salient Region Detection, CVPR, 2012)</p>
|
|
|
+(from Federico Perazzi, Philipp Krahenbuhl, Yael Pritch, Alexander Hornung, Saliency Filters: Contrast Based Filtering for Salient Region Detection, CVPR, 2012.)</p>
|
|
|
|
|
|
<ul>
|
|
|
<li>Mr-CNN <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Predicting_Eye_Fixations_2015_CVPR_paper.pdf">[Paper]</a>
|
|
@@ -314,7 +313,7 @@ NIPS 2012.</li>
|
|
|
<li>Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu, Multiple Object Recognition with Visual Attention, ICLR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>Recurrent Models of Visual Attention<a href="http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf">[Paper]</a>
|
|
|
+<li>Recurrent Models of Visual Attention <a href="http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
<li>Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu, Recurrent Models of Visual Attention, NIPS, 2014.</li>
|
|
@@ -371,43 +370,43 @@ NIPS 2012.</li>
|
|
|
<a id="image-captioning" class="anchor" href="#image-captioning" aria-hidden="true"><span class="octicon octicon-link"></span></a>Image Captioning</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8452051/e8f81030-2022-11e5-85db-c68e7d8251ce.PNG" alt="image_captioning">
|
|
|
-(from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Description, CVPR (2015).)</p>
|
|
|
+(from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.)</p>
|
|
|
|
|
|
<ul>
|
|
|
<li>Baidu / UCLA <a href="http://arxiv.org/pdf/1410.1090v1">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, Explain Images with Multimodal Recurrent Neural Networks, arXiv:1410.1090 (2014).</li>
|
|
|
+<li>Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, Explain Images with Multimodal Recurrent Neural Networks, arXiv:1410.1090.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Toronto <a href="http://arxiv.org/pdf/1411.2539v1">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539 (2014).</li>
|
|
|
+<li>Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Berkeley <a href="http://arxiv.org/pdf/1411.4389v3">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, arXiv:1411.4389 (2014).</li>
|
|
|
+<li>Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, arXiv:1411.4389.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Google <a href="http://arxiv.org/pdf/1411.4555v2">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555 (2014). </li>
|
|
|
+<li>Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Stanford <a href="http://cs.stanford.edu/people/karpathy/deepimagesent/">[Web]</a> <a href="http://cs.stanford.edu/people/karpathy/cvpr2015.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Description, CVPR (2015).</li>
|
|
|
+<li>Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>UML / UT <a href="http://arxiv.org/pdf/1412.4729v3">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT 2015. </li>
|
|
|
+<li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Microsoft / CMU <a href="http://arxiv.org/pdf/1411.5654v1">[Paper]</a>
|
|
@@ -419,7 +418,7 @@ NIPS 2012.</li>
|
|
|
<li>Microsoft <a href="http://arxiv.org/pdf/1411.4952v3">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR 2015. </li>
|
|
|
+<li>Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
|
|
@@ -428,28 +427,28 @@ NIPS 2012.</li>
|
|
|
<a id="video-captioning" class="anchor" href="#video-captioning" aria-hidden="true"><span class="octicon octicon-link"></span></a>Video Captioning</h3>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Berkeley [<a href="http://jeffdonahue.com/lrcn/">Web</a>] [<a href="http://arxiv.org/pdf/1411.4389v3.pdf">Paper</a>]
|
|
|
+<li>Berkeley <a href="http://jeffdonahue.com/lrcn/">[Web]</a> <a href="http://arxiv.org/pdf/1411.4389v3.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR 2015</li>
|
|
|
+<li>Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>UT / UML / Berkeley [<a href="http://arxiv.org/pdf/1412.4729v3.pdf">Paper</a>]
|
|
|
+<li>UT / UML / Berkeley <a href="http://arxiv.org/pdf/1412.4729v3.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729</li>
|
|
|
+<li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>Microsoft [<a href="http://arxiv.org/pdf/1505.01861v1.pdf">Paper</a>]
|
|
|
+<li>Microsoft <a href="http://arxiv.org/pdf/1505.01861v1.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861</li>
|
|
|
+<li>Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>UT / Berkeley / UML [<a href="http://arxiv.org/pdf/1505.00487v2.pdf">Paper</a>]
|
|
|
+<li>UT / Berkeley / UML <a href="http://arxiv.org/pdf/1505.00487v2.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487</li>
|
|
|
+<li>Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
|
|
@@ -458,31 +457,31 @@ NIPS 2012.</li>
|
|
|
<a id="question-answering" class="anchor" href="#question-answering" aria-hidden="true"><span class="octicon octicon-link"></span></a>Question Answering</h3>
|
|
|
|
|
|
<p><img src="https://cloud.githubusercontent.com/assets/5226447/8452068/ffe7b1f6-2022-11e5-87ab-4f6d4696c220.PNG" alt="question_answering">
|
|
|
-(from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR 2015 SUNw:Scene Understanding workshop)</p>
|
|
|
+(from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR 2015 SUNw: Scene Understanding Workshop.)</p>
|
|
|
|
|
|
<ul>
|
|
|
-<li>MSR / Virginia Tech. [<a href="http://www.visualqa.org/">Web</a>] [<a href="http://arxiv.org/pdf/1505.00468v1.pdf">Paper</a>]
|
|
|
+<li>MSR / Virginia Tech. <a href="http://www.visualqa.org/">[Web]</a> <a href="http://arxiv.org/pdf/1505.00468v1.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR 2015 SUNw:Scene Understanding workshop</li>
|
|
|
+<li>Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR 2015 SUNw: Scene Understanding Workshop.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>MPI / Berkeley [<a href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/visual-turing-challenge/">Web</a>] [<a href="http://arxiv.org/pdf/1505.01121v2.pdf">Paper</a>]
|
|
|
+<li>MPI / Berkeley <a href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/visual-turing-challenge/">[Web]</a> <a href="http://arxiv.org/pdf/1505.01121v2.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Mateusz Malinowski, Marcus Rohrbach, Mario Fritz, Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, arXiv:1505.01121</li>
|
|
|
+<li>Mateusz Malinowski, Marcus Rohrbach, Mario Fritz, Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, arXiv:1505.01121.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>Toronto [<a href="http://arxiv.org/pdf/1505.02074v1.pdf">Paper</a>] [<a href="http://www.cs.toronto.edu/%7Emren/imageqa/data/cocoqa/">Dataset</a>]
|
|
|
+<li>Toronto <a href="http://arxiv.org/pdf/1505.02074v1.pdf">[Paper]</a> <a href="http://www.cs.toronto.edu/%7Emren/imageqa/data/cocoqa/">[Dataset]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Mengye Ren, Ryan Kiros, Richard Zemel, Image Question Answering: A Visual Semantic Embedding Model and a New Dataset, arXiv:1505.02074 / ICML 2015 deep learning workshop</li>
|
|
|
+<li>Mengye Ren, Ryan Kiros, Richard Zemel, Image Question Answering: A Visual Semantic Embedding Model and a New Dataset, arXiv:1505.02074 / ICML 2015 Deep Learning Workshop.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
-<li>Baidu / UCLA [<a href="http://arxiv.org/pdf/1505.05612v1.pdf">Paper</a>] [<a href="">Dataset</a>]
|
|
|
+<li>Baidu / UCLA <a href="http://arxiv.org/pdf/1505.05612v1.pdf">[Paper]</a> <a href="">[Dataset]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Hauyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu, Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering, arXiv:1505.05612</li>
|
|
|
+<li>Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu, Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering, arXiv:1505.05612.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
|
|
@@ -518,8 +517,7 @@ NIPS 2012.</li>
|
|
|
<li>Generate image <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Dosovitskiy_Learning_to_Generate_2015_CVPR_paper.pdf">[Paper]</a>
|
|
|
|
|
|
<ul>
|
|
|
-<li>Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, Learning to Generate Chairs with Convolutional Neural Networks, CVPR, 2015.<br>
|
|
|
-</li>
|
|
|
+<li>Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, Learning to Generate Chairs with Convolutional Neural Networks, CVPR, 2015.</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
</ul>
|
|
@@ -616,16 +614,16 @@ NIPS 2012.</li>
|
|
|
<li>Understanding and Visualizing
|
|
|
|
|
|
<ul>
|
|
|
-<li>Source code for "Understanding Deep Image Representations by Inverting Them", CVPR 2015. <a href="https://github.com/aravindhm/deep-goggle">[Web]</a>
|
|
|
+<li>Source code for "Understanding Deep Image Representations by Inverting Them," CVPR, 2015. <a href="https://github.com/aravindhm/deep-goggle">[Web]</a>
|
|
|
</li>
|
|
|
</ul>
|
|
|
</li>
|
|
|
<li>Semantic Segmentation
|
|
|
|
|
|
<ul>
|
|
|
-<li>Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation", CVPR 2014. <a href="https://github.com/rbgirshick/rcnn">[Web]</a>
|
|
|
+<li>Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation," CVPR, 2014. <a href="https://github.com/rbgirshick/rcnn">[Web]</a>
|
|
|
</li>
|
|
|
-<li>Source code for the paper "Fully Convolutional Networks for Semantic Segmentation", CVPR 2015. <a href="https://github.com/longjon/caffe/tree/future">[Web]</a>
|
|
|
+<li>Source code for the paper "Fully Convolutional Networks for Semantic Segmentation," CVPR, 2015. <a href="https://github.com/longjon/caffe/tree/future">[Web]</a>
|
|
|
</li>
|
|
|
</ul>
|
|
|
</li>
|
|
@@ -639,7 +637,7 @@ NIPS 2012.</li>
|
|
|
<li>Edge Detection
|
|
|
|
|
|
<ul>
|
|
|
-<li>Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection" CVPR 2015. <a href="https://github.com/shenwei1231/DeepContour">[Web]</a>
|
|
|
+<li>Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection," CVPR, 2015. <a href="https://github.com/shenwei1231/DeepContour">[Web]</a>
|
|
|
</li>
|
|
|
</ul>
|
|
|
</li>
|