@@ -32,11 +32,12 @@
<div id="main_content_wrap" class="outer">
<section id="main_content" class="inner">
<h1>
-<a id="awesome-deep-vision" class="anchor" href="#awesome-deep-vision" aria-hidden="true"><span class="octicon octicon-link"></span></a>Awesome Deep Vision</h1>
+<a id="awesome-deep-vision-" class="anchor" href="#awesome-deep-vision-" aria-hidden="true"><span class="octicon octicon-link"></span></a>Awesome Deep Vision <a href="https://github.com/sindresorhus/awesome"><img src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg" alt="Awesome"></a>
+</h1>

<p>A curated list of deep learning resources for computer vision, inspired by <a href="https://github.com/ziadoz/awesome-php">awesome-php</a> and <a href="https://github.com/jbhuang0604/awesome-computer-vision">awesome-computer-vision</a>.</p>

-<p>Maintainers - <a href="http://github.com/kjw0612">Jiwon Kim</a>, <a href="https://github.com/hmyeong">Heesoo Myeong</a>, <a href="http://github.com/myungsub">Myungsub Choi</a>, <a href="https://github.com/JanghoonChoi">JanghoonChoi</a>, <a href="http://github.com/deruci">Jung Kwon Lee</a></p>
+<p>Maintainers - <a href="http://github.com/kjw0612">Jiwon Kim</a>, <a href="https://github.com/hmyeong">Heesoo Myeong</a>, <a href="http://github.com/myungsub">Myungsub Choi</a>, <a href="http://github.com/deruci">Jung Kwon Lee</a></p>

<h2>
<a id="contributing" class="anchor" href="#contributing" aria-hidden="true"><span class="octicon octicon-link"></span></a>Contributing</h2>
@@ -178,6 +179,12 @@
<li>Karel Lenc, Andrea Vedaldi, R-CNN minus R, arXiv:1506.06981.</li>
</ul>
</li>
+<li>End-to-end people detection in crowded scenes <a href="http://arxiv.org/abs/1506.04878">[Paper]</a>
+
+<ul>
+<li>Russell Stewart, Mykhaylo Andriluka, End-to-end people detection in crowded scenes, arXiv:1506.04878.</li>
+</ul>
+</li>
</ul>

<h3>
@@ -190,6 +197,8 @@
</li>
<li>N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. <a href="http://winsty.net/papers/dlt.pdf">[Paper]</a>
</li>
+<li>Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang, "Hierarchical Convolutional Features for Visual Tracking", ICCV 2015 <a href="https://github.com/jbhuang0604/CF2">[GitHub]</a>
+</li>
</ul>

<h3>
@@ -276,16 +285,46 @@
(from Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.)</p>

<ul>
+<li>PASCAL VOC2012 Challenge Top 10 (14 Aug. 2015)
+<img src="http://cv.snu.ac.kr/hmyeong/files/150814_pascal_voc.png" alt="VOC2012_top_10">
+(from PASCAL VOC2012 <a href="http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=6">leaderboards</a>)</li>
+<li>Adelaide
+
+<ul>
+<li>Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. <a href="http://arxiv.org/pdf/1504.01013">[Paper]</a> (1st ranked in VOC2012)</li>
+<li>Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1506.02108. <a href="http://arxiv.org/pdf/1506.02108">[Paper]</a> (4th ranked in VOC2012)</li>
+</ul>
+</li>
<li>BoxSup <a href="http://arxiv.org/pdf/1503.01640">[Paper]</a>

<ul>
-<li>Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.</li>
+<li>Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (2nd ranked in VOC2012)</li>
</ul>
</li>
<li>Conditional Random Fields as Recurrent Neural Networks <a href="http://arxiv.org/pdf/1502.03240">[Paper]</a>

<ul>
-<li>Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240.</li>
+<li>Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (3rd ranked in VOC2012)</li>
+</ul>
+</li>
+<li>DeepLab
+
+<ul>
+<li>Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation, arXiv:1502.02734. <a href="http://arxiv.org/pdf/1502.02734">[Paper]</a> (5th ranked in VOC2012)</li>
+</ul>
+</li>
+<li>POSTECH
+
+<ul>
+<li>Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. <a href="http://arxiv.org/pdf/1505.04366">[Paper]</a> (6th ranked in VOC2012)</li>
+<li>Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. <a href="http://arxiv.org/pdf/1506.04924">[Paper]</a>
+</li>
+</ul>
+</li>
+<li>Joint Calibration <a href="http://arxiv.org/pdf/1507.01581">[Paper]</a>
+
+<ul>
+<li>Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.</li>
</ul>
</li>
<li>Fully Convolutional Networks for Semantic Segmentation <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf">[Paper-CVPR15]</a> <a href="http://arxiv.org/pdf/1411.4038">[Paper-arXiv15]</a>
@@ -294,27 +333,30 @@
<li>Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.</li>
</ul>
</li>
-<li>Learning Hierarchical Features for Scene Labeling <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-icml-12.pdf">[Paper-ICML12]</a> <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf">[Paper-PAMI13]</a>
+<li>Hypercolumn <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Hariharan_Hypercolumns_for_Object_2015_CVPR_paper.pdf">[Paper]</a>

<ul>
-<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.</li>
-<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.</li>
+<li>Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.</li>
</ul>
</li>
-<li>DeepLab
+<li>Zoom-out <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mostajabi_Feedforward_Semantic_Segmentation_2015_CVPR_paper.pdf">[Paper]</a>

<ul>
-<li> G Papandreou, LC Chen, K Murphy, AL Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. <a href="http://arxiv.org/pdf/1502.02734">[Paper]</a>
-</li>
+<li>Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015.</li>
</ul>
</li>
-<li>POSTECH
+<li>Deep Hierarchical Parsing

<ul>
-<li>Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. <a href="http://arxiv.org/pdf/1505.04366">[Paper]</a>
+<li>Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015. <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Sharma_Deep_Hierarchical_Parsing_2015_CVPR_paper.pdf">[Paper]</a>
</li>
-<li>Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. <a href="http://arxiv.org/pdf/1506.04924">[Paper]</a>
+</ul>
</li>
+<li>Learning Hierarchical Features for Scene Labeling <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-icml-12.pdf">[Paper-ICML12]</a> <a href="http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf">[Paper-PAMI13]</a>
+
+<ul>
+<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.</li>
+<li>Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.</li>
</ul>
</li>
</ul>
@@ -395,6 +437,19 @@
<li>Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.</li>
</ul>
</li>
+<li>Object Detectors Emerge in Deep Scene CNNs <a href="http://arxiv.org/abs/1412.6856">[Paper]</a>
+
+<ul>
+<li>Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, Object Detectors Emerge in Deep Scene CNNs, ICLR, 2015.</li>
+</ul>
+</li>
+<li>Inverting Convolutional Networks with Convolutional Networks
+
+<ul>
+<li>Alexey Dosovitskiy, Thomas Brox, Inverting Convolutional Networks with Convolutional Networks, arXiv, 2015. <a href="http://arxiv.org/abs/1506.02753">[Paper]</a>
+</li>
+</ul>
+</li>
</ul>

<h3>
@@ -440,10 +495,11 @@
<li>Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT, 2015.</li>
</ul>
</li>
-<li>CMU / Microsoft <a href="http://arxiv.org/pdf/1411.5654">[Paper]</a>
+<li>CMU / Microsoft <a href="http://arxiv.org/pdf/1411.5654">[Paper-arXiv]</a> <a href="http://www.cs.cmu.edu/%7Exinleic/papers/cvpr15_rnn.pdf">[Paper-CVPR]</a>

<ul>
<li>Xinlei Chen, C. Lawrence Zitnick, Learning a Recurrent Visual Representation for Image Caption Generation, arXiv:1411.5654.</li>
+<li>Xinlei Chen, C. Lawrence Zitnick, Mind’s Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR, 2015.</li>
</ul>
</li>
<li>Microsoft <a href="http://arxiv.org/pdf/1411.4952">[Paper]</a>
@@ -452,6 +508,55 @@
<li>Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR, 2015.</li>
</ul>
</li>
+<li>Univ. Montreal / Univ. Toronto [<a href="http://kelvinxu.github.io/projects/capgen.html">Web</a>] [<a href="http://www.cs.toronto.edu/%7Ezemel/documents/captionAttn.pdf">Paper</a>]
+
+<ul>
+<li>Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044 / ICML 2015</li>
+</ul>
+</li>
+<li>Idiap / EPFL / Facebook [<a href="http://arxiv.org/pdf/1502.03671">Paper</a>]
+
+<ul>
+<li>Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, Phrase-based Image Captioning, arXiv:1502.03671 / ICML 2015</li>
+</ul>
+</li>
+<li>UCLA / Baidu [<a href="http://arxiv.org/pdf/1504.06692">Paper</a>]
+
+<ul>
+<li>Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images, arXiv:1504.06692</li>
+</ul>
+</li>
+<li>MS + Berkeley
+
+<ul>
+<li>Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467 [<a href="http://arxiv.org/pdf/1505.04467.pdf">Paper</a>]</li>
+<li>Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, Language Models for Image Captioning: The Quirks and What Works, arXiv:1505.01809 [<a href="http://arxiv.org/pdf/1505.01809.pdf">Paper</a>]</li>
+</ul>
+</li>
+<li>Adelaide [<a href="http://arxiv.org/pdf/1506.01144.pdf">Paper</a>]
+
+<ul>
+<li>Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, Image Captioning with an Intermediate Attributes Layer, arXiv:1506.01144</li>
+</ul>
+</li>
+<li>Tilburg [<a href="http://arxiv.org/pdf/1506.03694.pdf">Paper</a>]
+
+<ul>
+<li>Grzegorz Chrupala, Akos Kadar, Afra Alishahi, Learning language through pictures, arXiv:1506.03694</li>
+</ul>
+</li>
+<li>Univ. Montreal [<a href="http://arxiv.org/pdf/1507.01053.pdf">Paper</a>]
+
+<ul>
+<li>Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053</li>
+</ul>
+</li>
+<li>Cornell [<a href="http://arxiv.org/pdf/1508.02091.pdf">Paper</a>]
+
+<ul>
+<li>Jack Hessel, Nicolas Savva, Michael J. Wilber, Image Representations and New Domains in Neural Image Captioning, arXiv:1508.02091</li>
+</ul>
+</li>
</ul>

<h3>
@@ -482,6 +587,30 @@
<li>Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.</li>
</ul>
</li>
+<li>Univ. Montreal / Univ. Sherbrooke [<a href="http://arxiv.org/pdf/1502.08029.pdf">Paper</a>]
+
+<ul>
+<li>Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029</li>
+</ul>
+</li>
+<li>MPI / Berkeley [<a href="http://arxiv.org/pdf/1506.01698.pdf">Paper</a>]
+
+<ul>
+<li>Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698</li>
+</ul>
+</li>
+<li>Univ. Toronto / MIT [<a href="http://arxiv.org/pdf/1506.06724.pdf">Paper</a>]
+
+<ul>
+<li>Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724</li>
+</ul>
+</li>
+<li>Univ. Montreal [<a href="http://arxiv.org/pdf/1507.01053.pdf">Paper</a>]
+
+<ul>
+<li>Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053</li>
+</ul>
+</li>
</ul>

<h3>
@@ -551,6 +680,21 @@
<li>Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, Learning to Generate Chairs with Convolutional Neural Networks, CVPR, 2015.</li>
</ul>
</li>
+<li>Image Generation with Adversarial Networks
+
+<ul>
+<li>Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Generative Adversarial Networks, NIPS, 2014. <a href="http://arxiv.org/abs/1406.2661">[Paper]</a>
+</li>
+<li>Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, NIPS, 2015. <a href="http://arxiv.org/abs/1506.05751">[Paper]</a>
+</li>
+</ul>
+</li>
+<li>Artistic Style <a href="http://arxiv.org/abs/1508.06576">[Paper]</a> <a href="https://github.com/jcjohnson/neural-style">[Code]</a>
+
+<ul>
+<li>Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, A Neural Algorithm of Artistic Style, arXiv:1508.06576.</li>
+</ul>
+</li>
</ul>

<h2>
@@ -627,6 +771,13 @@
</li>
<li>Caffe: Deep learning framework by the BVLC <a href="http://caffe.berkeleyvision.org/">[Web]</a>
</li>
+<li>Theano: Mathematical library in Python, maintained by the LISA lab <a href="http://deeplearning.net/software/theano/">[Web]</a>
+
+<ul>
+<li>Theano-based deep learning libraries: <a href="http://deeplearning.net/software/pylearn2/">Pylearn2</a>, <a href="https://github.com/mila-udem/blocks">Blocks</a>, <a href="http://keras.io/">Keras</a>, <a href="https://github.com/Lasagne/Lasagne">Lasagne</a>
+</li>
+</ul>
+</li>
<li>MatConvNet: CNNs for MATLAB <a href="http://www.vlfeat.org/matconvnet/">[Web]</a>
</li>
</ul>