Deep Learning

'Deep Learning' (also known as 'deep structured learning' or 'hierarchical learning') is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised. Deep learning models are loosely related to information processing and communication patterns in a biological nervous system, such as neural coding that attempts to define a relationship between various stimuli and associated neuronal responses in the brain. Deep learning architectures such as deep neural networks, deep belief networks, and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, and drug design, where they have produced results comparable to, and in some cases superior to, human experts.
Manish Gupta, Puneet Agrawal
In recent years, the fields of natural language processing (NLP) and information retrieval (IR) have made tremendous progress thanks to deep learning models like Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs), Long Short-Term Memory (LSTM) networks, and Tran...
Cited by 0
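The recurrent models named in this entry all share one core idea: a hidden state updated step by step over a sequence. Below is a minimal sketch of a vanilla RNN update in numpy (names like `rnn_step` and the weight shapes are illustrative, not from the paper; GRUs and LSTMs add gating on top of this same pattern):

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    """One step of a vanilla RNN: h_t = tanh(W_h h_{t-1} + W_x x_t + b)."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

def run_rnn(xs, W_h, W_x, b, h0):
    """Fold a sequence of input vectors into a final hidden state."""
    h = h0
    for x_t in xs:
        h = rnn_step(h, x_t, W_h, W_x, b)
    return h
```

Because the same weights are reused at every step, the model handles variable-length sequences, which is what makes this family a natural fit for NLP and IR.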
Gou Jianping, Yu Baosheng, Maybank Stephen John, Tao Dacheng
In recent years, deep neural networks have been very successful in both industry and academia, especially for applications in visual recognition and natural language processing. The great success of deep learning mainly owes to its scalability to both l...
Cited by 0
Deep learning has made major breakthroughs and progress in many fields. This is due to the powerful automatic representation capabilities of deep learning. It has been proved that the design of the network architecture is crucial to the feature representation of data and the fi...
Cited by 0
Li Wanyi, Li Fuyu, Luo Yongkang, Wang Peng
Deep learning (DL) based object detection has achieved great progress. These methods typically assume that a large amount of labeled training data is available, and that training and test data are drawn from an identical distribution. However, the two assumptions do not always hold i...
Cited by 0
Ren Bin, Liu Mengyuan, Ding Runwei, Liu Hong
3D skeleton-based action recognition, owing to the latent advantages of skeleton data, has been an active topic in computer vision. As a consequence, many impressive works, including both conventional handcrafted-feature-based and learned-feature-based methods, have been done over the y...
Cited by 0
Journal of Machine Learning Research, no. 55 (2019)
Neural Architecture Search can be seen as a subfield of AutoML and has significant overlap with hyperparameter optimization and meta-learning
Cited by 346
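The overlap with hyperparameter optimization that this survey notes can be made concrete with the simplest NAS baseline: random search over a discrete architecture space. The sketch below is purely illustrative (the search-space dimensions and the `evaluate` callback are assumptions, not anything from the paper):

```python
import random

# Hypothetical toy search space: each architecture is one choice per dimension.
SEARCH_SPACE = {"depth": [2, 4, 6], "width": [16, 32, 64], "kernel": [3, 5]}

def sample_architecture(rng):
    """Sample one architecture, i.e. one point in the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def random_search(evaluate, trials=30, seed=0):
    """Baseline NAS: evaluate random architectures, keep the best one.

    `evaluate` would normally train the candidate and return validation
    accuracy; here it is any callable mapping an architecture to a score.
    """
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

More sophisticated NAS methods (reinforcement learning, evolution, gradient-based relaxations) replace the random sampler, but keep this same sample-evaluate-select loop.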
Electronics, no. 3 (2019): 292
In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities....
Cited by 0
I. J. Robotics Res., no. 4-5 (2018)
We presented a method for learning hand-eye coordination for robotic grasping, using deep learning to build a grasp success prediction network, and a continuous servoing mechanism to use this network to continuously control a robotic manipulator
Cited by 341
Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, Chunfang Liu
ICANN, (2018)
We have reviewed and categorized current research on deep transfer learning
Cited by 255
KDD, (2018): 2110-2119
This work explores the potential of network representation learning in social influence analysis and gives the very first attempt to explain the dynamics of social influence
Cited by 109
Commun. ACM, no. 6 (2017): 84-90
The best performance achieved during the ILSVRC2010 competition was 47.1% and 28.2% with an approach that averages the predictions produced from six sparse-coding models trained on different features; the best published results since then are 45.7% and 25.7% with an approach that ...
Cited by 72606
IEEE Trans. Pattern Anal. Mach. Intell., no. 6 (2017): 1137-1149
We have presented Region Proposal Networks for efficient and accurate region proposal generation
Cited by 21451
AAAI, (2017): 4278-4284
We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture
Cited by 4834
IEEE Trans. Pattern Anal. Mach. Intell., no. 4 (2017): 664-676
We introduced a model that generates natural language descriptions of image regions based on weak labels in the form of a dataset of images and sentences, and with very few hardcoded assumptions
Cited by 3517
CVPR, (2017)
We expect depthwise separable convolutions to become a cornerstone of convolutional neural network architecture design in the future, since they offer similar properties as Inception modules, yet are as easy to use as regular convolution layers
Cited by 2654
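The efficiency argument in this entry rests on factoring a regular convolution into a per-channel spatial filter followed by a 1x1 channel-mixing step. A minimal numpy sketch of that factorization (function name and shapes are illustrative assumptions; real implementations add padding, strides, and batching):

```python
import numpy as np

def depthwise_separable_conv(x, dw_filters, pw_weights):
    """x: (H, W, C_in); dw_filters: (kH, kW, C_in); pw_weights: (C_in, C_out).

    Depthwise stage: each input channel is filtered spatially by its own
    2-D kernel. Pointwise stage: a 1x1 convolution mixes channels. The
    factorization uses far fewer multiply-adds than a full convolution.
    """
    H, W, C = x.shape
    kH, kW, _ = dw_filters.shape
    oH, oW = H - kH + 1, W - kW + 1          # 'valid' padding, stride 1
    dw = np.empty((oH, oW, C))
    for c in range(C):                        # one 2-D filter per channel
        for i in range(oH):
            for j in range(oW):
                dw[i, j, c] = np.sum(x[i:i + kH, j:j + kW, c] * dw_filters[:, :, c])
    return dw @ pw_weights                    # 1x1 conv == matmul over channels
```

Against a full convolution with kH*kW*C_in*C_out weights per output position, this sketch uses only kH*kW*C_in + C_in*C_out, which is the source of the speedup the entry alludes to.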
CVPR, (2017)
We propose a novel deep neural network, PointNet, that directly consumes point clouds
Cited by 2497
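Consuming a raw point cloud directly requires the network's output to be invariant to the order of the points. The core trick is a shared per-point transform followed by a symmetric pooling function; a toy numpy sketch of that idea (single linear layer standing in for the shared MLP; names and shapes are assumptions for illustration):

```python
import numpy as np

def pointnet_global_feature(points, w, b):
    """points: (N, 3) -> (F,) global feature.

    Apply the same weights to every point (a shared MLP), then max-pool
    over the point axis. Max is symmetric, so the result does not depend
    on the order of the input points.
    """
    per_point = np.maximum(points @ w + b, 0.0)   # shared per-point transform + ReLU
    return per_point.max(axis=0)                  # order-independent aggregation
```

Shuffling the rows of `points` leaves the output unchanged, which is exactly the permutation invariance a set of 3D points demands.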
ICLR, (2017)
These models are in principle rich enough to memorize the training data. This situation poses a conceptual challenge to statistical learning theory as traditional measures of model complexity struggle to explain the generalization ability of large artificial neural networks
Cited by 2163
CVPR, (2016)
Deep networks naturally integrate low/mid/highlevel features and classifiers in an end-to-end multilayer fashion, and the “levels” of features can be enriched by the number of stacked layers
Cited by 47481
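The "number of stacked layers" this entry mentions is made trainable at depth by the residual connection: each block learns only a correction on top of an identity shortcut. A minimal sketch with fully connected layers standing in for the paper's convolutional ones (names and shapes are illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + F(x)) with residual branch F(x) = w2 @ relu(w1 @ x).

    The identity shortcut lets features and gradients flow past the
    stacked layers, which is what makes very deep networks trainable.
    """
    return relu(x + w2 @ relu(w1 @ x))
```

With the residual branch zeroed out, the block passes a nonnegative input through unchanged, so adding more blocks can never make the representation worse than the identity.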
David Silver,Aja Huang,Chris J Maddison,Arthur Guez,Laurent Sifre, George van den Driessche, Julian Schrittwieser,Ioannis Antonoglou, Veda Panneershelvam,Marc Lanctot,Sander Dieleman, Dominik Grewe
Nature, no. 7587 (2016): 484-489
Effective move selection and position evaluation functions for Go, based on deep neural networks that are trained by a novel combination of supervised and reinforcement learning
Cited by 8666
CVPR, (2016)
We have provided several design principles to scale up convolutional networks and studied them in the context of the Inception architecture
Cited by 8432
Keywords
Deep Learning, Feature Extraction, Image Classification, Convolutional Neural Networks, Neural Nets, Learning (Artificial Intelligence), Neural Network, Neural Networks, Belief Network, Speech Recognition
Authors
Yoshua Bengio (13 papers)
Geoffrey E. Hinton (12 papers)
Ilya Sutskever (7 papers)
Oriol Vinyals (6 papers)
David Silver (6 papers)
Koray Kavukcuoglu (6 papers)
Shaoqing Ren (5 papers)
Jian Sun (5 papers)
Kaiming He (5 papers)
Andrew Y. Ng (5 papers)