@@ -19,23 +19,23 @@ Please feel free to [pull requests](https://github.com/kjw0612/awesome-deep-visi
## Table of Contents
- [Papers](#papers)
-- [ImageNet Classification](#imagenet-classification)
-- [Object Detection](#object-detection)
-- [Object Tracking](#object-tracking)
-- [Low-Level Vision](#low-level-vision)
- - [Super-Resolution](#super-resolution)
- - [Other Applications](#other-applications)
-- [Edge Detection](#edge-detection)
-- [Semantic Segmentation](#semantic-segmentation)
-- [Visual Attention and Saliency](#visual-attention-and-saliency)
-- [Object Recognition](#object-recognition)
-- [Understanding CNN](#understanding-cnn)
-- [Image and Language](#image-and-language)
- - [Image Captioning](#image-captioning)
- - [Video Captioning](#video-captioning)
- - [Question Answering](#question-answering)
-- [Image Generation](#image-generation)
-- [Other Topics](#other-topics)
+ - [ImageNet Classification](#imagenet-classification)
+ - [Object Detection](#object-detection)
+ - [Object Tracking](#object-tracking)
+ - [Low-Level Vision](#low-level-vision)
+ - [Super-Resolution](#super-resolution)
+ - [Other Applications](#other-applications)
+ - [Edge Detection](#edge-detection)
+ - [Semantic Segmentation](#semantic-segmentation)
+ - [Visual Attention and Saliency](#visual-attention-and-saliency)
+ - [Object Recognition](#object-recognition)
+ - [Understanding CNN](#understanding-cnn)
+ - [Image and Language](#image-and-language)
+ - [Image Captioning](#image-captioning)
+ - [Video Captioning](#video-captioning)
+ - [Question Answering](#question-answering)
+ - [Image Generation](#image-generation)
+ - [Other Topics](#other-topics)
- [Courses](#courses)
- [Books](#books)
- [Videos](#videos)
@@ -134,6 +134,8 @@ Please feel free to [pull requests](https://github.com/kjw0612/awesome-deep-visi
* Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.
* Computing the Stereo Matching Cost with a Convolutional Neural Network [[Paper]](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Zbontar_Computing_the_Stereo_2015_CVPR_paper.pdf)
* Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.
+* Feature Learning by Inpainting [[Paper]](https://arxiv.org/pdf/1604.07379v1.pdf) [[Code]](https://github.com/pathak22/context-encoder)
+ * Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016.
### Edge Detection
