


6th ICLR 2018: Vancouver, BC, Canada: Workshop Track
6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings. OpenReview.net 2018

Accepted Papers
- Yuhui Yuan, Kuiyuan Yang, Jianyuan Guo, Chao Zhang, Jingdong Wang: Feature Incay for Representation Regularization.
- Joshua C. Peterson, Krisha Aghi, Jordan W. Suchow, Alexander Y. Ku, Tom Griffiths: Capturing Human Category Representations by Sampling in Deep Feature Spaces.
- Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Alyosha A. Efros, Thomas L. Griffiths: Investigating Human Priors for Playing Video Games.
- George Philipp, Dawn Song, Jaime G. Carbonell: Gradients explode - Deep Networks are shallow - ResNet explained.
- Sanjeev Arora, Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang: Towards Provable Control for Unknown Linear Dynamical Systems.
- Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, Yoshua Bengio: FigureQA: An Annotated Figure Dataset for Visual Reasoning.
- Keyi Yu, Yang Liu, Alexander G. Schwing, Jian Peng: Fast and Accurate Text Classification: Skimming, Rereading and Early Stopping.
- Steven T. Kothen-Hill, Asaf Zviran, Rafael C. Schulman, Sunil Deochand, Federico Gaiti, Dillon Maloney, Kevin Y. Huang, Will Liao, Nicolas Robine, Nathaniel D. Omans, Dan A. Landau: Deep learning mutation prediction enables early stage lung cancer detection in liquid biopsy.
- Qibin Zhao, Masashi Sugiyama, Longhao Yuan, Andrzej Cichocki: Learning Efficient Tensor Representations with Ring Structure Networks.
- Yi Wu, Yuxin Wu, Georgia Gkioxari, Yuandong Tian: Building Generalizable Agents with a Realistic and Rich 3D Environment.
- Oliver Hennigh: Automated Design using Neural Networks and Gradient Descent.
- Michael Zhu, Suyog Gupta: To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression.
- Robert S. DiPietro, Christian Rupprecht, Nassir Navab, Gregory D. Hager: Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies.
- Illia Polosukhin, Alexander Skidanov: Neural Program Search: Solving Programming Tasks from Description and Examples.
- Oleg Rybakov, Vijai Mohan, Avishkar Misra, Scott LeGrand, Rejith Joseph, Kiuk Chung, Siddharth Singh, Qian You, Eric T. Nalisnick, Leo Dirac, Runfei Luo: The Effectiveness of a two-Layer Neural Network for Recommendations.
- Tiago Pimentel, Adriano Veloso, Nivio Ziviani: Fast Node Embeddings: Learning Ego-Centric Representations.
- Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V. Le, Jon M. Kleinberg: Can Deep Reinforcement Learning solve Erdos-Selfridge-Spencer Games?
- Daniel Neil, Marwin H. S. Segler, Laura Guasch, Mohamed Ahmed, Dean Plumbley, Matthew Sellwood, Nathan Brown: Exploring Deep Recurrent Models with Reinforcement Learning for Molecule Design.
- David Madras, Toniann Pitassi, Richard S. Zemel: Predict Responsibly: Increasing Fairness by Learning to Defer.
- Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S. Schoenholz, Maithra Raghu, Martin Wattenberg, Ian J. Goodfellow: Adversarial Spheres.
- Ben Usman, Kate Saenko, Brian Kulis: Stable Distribution Alignment Using the Dual of the Adversarial Distance.
- Yang Li, Nan Du, Samy Bengio: Time-Dependent Representation for Neural Event Sequence Prediction.
- Chengtao Li, David Alvarez-Melis, Keyulu Xu, Stefanie Jegelka, Suvrit Sra: Distributional Adversarial Networks.
- Han Zhao, Shanghang Zhang, Guanhang Wu, João Paulo Costeira, José M. F. Moura, Geoffrey J. Gordon: Multiple Source Domain Adaptation with Adversarial Learning.
- Martin Schrimpf, Stephen Merity, James Bradbury, Richard Socher: A Flexible Approach to Automated RNN Architecture Generation.
- Thomas Elsken, Jan Hendrik Metzen, Frank Hutter: Simple and efficient architecture search for Convolutional Neural Networks.
- Shaojie Bai, J. Zico Kolter, Vladlen Koltun: Convolutional Sequence Modeling Revisited.
- Vitalii Zhelezniak, Dan Busbridge, April Shen, Samuel L. Smith, Nils Y. Hammerla: Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks.
- Prajit Ramachandran, Barret Zoph, Quoc V. Le: Searching for Activation Functions.
- Yoav Levine, Or Sharir, Amnon Shashua: Benefits of Depth for Long-Term Memory of Recurrent Networks.
- Bowen Baker, Otkrist Gupta, Ramesh Raskar, Nikhil Naik: Accelerating Neural Architecture Search using Performance Prediction.
- Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean: Faster Discovery of Neural Architectures by Searching for Paths in a Large Model.
- Richard Wei, Lane Schwartz, Vikram S. Adve: DLVM: A modern compiler infrastructure for deep learning systems.
- Chao Gao, Martin Müller, Ryan Hayward: Adversarial Policy Gradient for Alternating Markov Games.
- Yusuke Tsuzuku, Hiroto Imachi, Takuya Akiba: Variance-based Gradient Compression for Efficient Distributed Deep Learning.
- Tom Zahavy, Bingyi Kang, Alex Sivak, Jiashi Feng, Huan Xu, Shie Mannor: Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms.
- Cijo Jose, Moustapha Cissé, François Fleuret: Kronecker Recurrent Units.
- Shohei Ohsawa, Kei Akuzawa, Tatsuya Matsushima, Gustavo Bezerra, Yusuke Iwasawa, Hiroshi Kajino, Seiya Takenaka, Yutaka Matsuo: Neuron as an Agent.
- Ekin Dogus Cubuk, Barret Zoph, Samuel S. Schoenholz, Quoc V. Le: Intriguing Properties of Adversarial Examples.
- Chenwei Wu, Jiajun Luo, Jason D. Lee: No Spurious Local Minima in a Two Hidden Unit ReLU Network.
- Beidi Chen, Yingchen Xu, Anshumali Shrivastava: Lsh-Sampling breaks the Computational chicken-and-egg Loop in adaptive stochastic Gradient estimation.
- Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, Leonidas J. Guibas: Learning Representations and Generative Models for 3D Point Clouds.
- Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, Adam Coates: Cold fusion: Training Seq2seq Models Together with Language Models.
- Benjamin Scellier, Anirudh Goyal, Jonathan Binas, Thomas Mesnard, Yoshua Bengio: Extending the Framework of Equilibrium Propagation to General Dynamics.
- Maher Nouiehed, Meisam Razaviyayn: Learning Deep Models: Critical Points and Local Openness.
- Risi Kondor, Hy Truong Son, Horace Pan, Brandon M. Anderson, Shubhendu Trivedi: Covariant Compositional Networks For Learning Graphs.
- Levent Sagun, Utku Evci, V. Ugur Güney, Yann N. Dauphin, Léon Bottou: Empirical Analysis of the Hessian of Over-Parametrized Neural Networks.
- Zhendong Zhang, Cheolkon Jung: Regularization Neural Networks via Constrained Virtual Movement Field.
- Renjie Liao, Marc Brockschmidt, Daniel Tarlow, Alexander L. Gaunt, Raquel Urtasun, Richard S. Zemel: Graph Partition Neural Networks for Semi-Supervised Classification.
- Quynh Nguyen, Matthias Hein: The loss surface and expressivity of deep convolutional neural networks.
- Brandon Reagen, Udit Gupta, Robert Adolf, Michael Mitzenmacher, Alexander M. Rush, Gu-Yeon Wei, David Brooks: Weightless: Lossy weight encoding for deep neural network compression.
- Daniel Soudry, Elad Hoffer: Exponentially vanishing sub-optimal local minima in multilayer neural networks.
- Xinyun Chen, Chang Liu, Dawn Song: Tree-to-tree Neural Networks for Program Translation.
- Wenpeng Hu, Bing Liu, Jinwen Ma, Dongyan Zhao, Rui Yan: Aspect-based Question Generation.
- Lars Hiller Eidnes, Arild Nøkland: Shifting Mean Activation Towards Zero with Bipolar Activation Functions.
- Attila Szabó, Qiyang Hu, Tiziano Portenier, Matthias Zwicker, Paolo Favaro: Challenges in Disentangling Independent Factors of Variation.
- Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao: Learning to Organize Knowledge with N-Gram Machines.
- Rumen Dangovski, Li Jing, Marin Soljacic: Rotational Unit of Memory.
- Taihong Xiao, Jiapeng Hong, Jinwen Ma: DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images.
- Yasin Yazici, Kim-Hui Yap, Stefan Winkler: Autoregressive Generative Adversarial Networks.
- Peter H. Jin, Sergey Levine, Kurt Keutzer: Regret Minimization for Partially Observable Deep Reinforcement Learning.
- Yiping Lu, Aoxiao Zhong, Quanzheng Li, Bin Dong: Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations.
- Zichao Long, Yiping Lu, Xianzhong Ma, Bin Dong: PDE-Net: Learning PDEs from Data.
- Yang Gao, Huazhe Xu, Ji Lin, Fisher Yu, Sergey Levine, Trevor Darrell: Reinforcement Learning from Imperfect Demonstrations.
- Daniel Li, Asim Kadav: Adaptive Memory Networks.
- Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien: Parametric Adversarial Divergences are Good Task Losses for Generative Modeling.
- Joseph Marino, Yisong Yue, Stephan Mandt: Learning to Infer.
- Alessandro Bay, Biswa Sengupta: GeoSeq2Seq: Information Geometric Sequence-to-Sequence Networks.
- Guillaume Alain, Nicolas Le Roux, Pierre-Antoine Manzagol: Negative eigenvalues of the Hessian in deep neural networks.
- Chong Yu, Young Wang: 3D-Scene-GAN: Three-dimensional Scene Reconstruction with Generative Adversarial Networks.
- Daniel Fojo, Víctor Campos, Xavier Giró-i-Nieto: Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks.
- Shikhar Sharma, Dendi Suhubdy, Vincent Michalski, Samira Ebrahimi Kahou, Yoshua Bengio: ChatPainter: Improving Text to Image Generation using Dialogue.
- Yelong Shen, Jianshu Chen, Po-Sen Huang, Yuqing Guo, Jianfeng Gao: ReinforceWalk: Learning to Walk in Graph with Monte Carlo Tree Search.
- Charles B. Delahunt, J. Nathan Kutz: A moth brain learns to read MNIST.
- Sil C. van de Leemput, Jonas Teuwen, Rashindra Manniesing: MemCNN: a Framework for Developing Memory Efficient Deep Invertible Networks.
- Raghav Goyal, Farzaneh Mahdisoltani, Guillaume Berger, Waseem Gharbieh, Ingo Bax, Roland Memisevic: Evaluating visual "common sense" using fine-grained classification and captioning tasks.
- Mehran Pesteie, Purang Abolmaesumi, Robert Rohling: Deep Neural Maps.
- Francesco Locatello, Damien Vincent, Ilya O. Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf: Clustering Meets Implicit Generative Models.
- Anuvabh Dutt, Denis Pellerin, Georges Quénot: Coupled Ensembles of Neural Networks.
- Yedid Hoshen, Lior Wolf: NAM - Unsupervised Cross-Domain Image Mapping without Cycles or GANs.
- Nathan H. Ng, Julian J. McAuley, Julian Gingold, Nina Desai, Zachary C. Lipton: Predicting Embryo Morphokinetics in Videos with Late Fusion Nets & Dynamic Decoders.
- Richard Shin, Charles Packer, Dawn Song: Differentiable Neural Network Architecture Search.
- Dilin Wang, Qiang Liu: An Optimization View on Dynamic Routing Between Capsules.
- Yun Chen, Kyunghyun Cho, Samuel R. Bowman, Victor O. K. Li: Stable and Effective Trainable Greedy Decoding for Sequence to Sequence Learning.
- Mehdi S. M. Sajjadi, Giambattista Parascandolo, Arash Mehrjou, Bernhard Schölkopf: Tempered Adversarial Networks.
- Joshua Romoff, Alexandre Piché, Peter Henderson, Vincent François-Lavet, Joelle Pineau: Reward Estimation for Variance Reduction in Deep Reinforcement Learning.
- Natasha Jaques, Jesse H. Engel, David Ha, Fred Bertsch, Rosalind W. Picard, Douglas Eck: Learning via social awareness: improving sketch representations with facial feedback.
- Siyu He, Siamak Ravanbakhsh, Shirley Ho: Analysis of Cosmic Microwave Background with Deep Learning.
- Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Pushmeet Kohli, Edward Grefenstette: Jointly Learning "What" and "How" from Instructions and Goal-States.
- Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry P. Vetrov: Uncertainty Estimation via Stochastic Batch Normalization.
- Hiroaki Shioya, Yusuke Iwasawa, Yutaka Matsuo: Extending Robust Adversarial Reinforcement Learning Considering Adaptation and Diversity.
- Lisa Zhang, Gregory Rosenblatt, Ethan Fetaya, Renjie Liao, William E. Byrd, Raquel Urtasun, Richard S. Zemel: Leveraging Constraint Logic Programming for Neural Guided Program Synthesis.
- Jiaming Song, Hongyu Ren, Dorsa Sadigh, Stefano Ermon: Multi-Agent Generative Adversarial Imitation Learning.
- Paul K. Rubenstein, Bernhard Schölkopf, Ilya O. Tolstikhin: Learning Disentangled Representations with Wasserstein Auto-Encoders.
- Jung-Su Ha, Young-Jin Park, Hyeok-Joo Chae, Soon-Seo Park, Han-Lim Choi: Adaptive Path-Integral Approach for Representation Learning and Planning.
- Yiming Zhang, Quan Ho Vuong, Kenny Song, Xiao-Yue Gong, Keith W. Ross: Efficient Entropy For Policy Gradient with Multi-Dimensional Action Space.
- Samantha Guerriero, Barbara Caputo, Thomas Mensink: DeepNCM: Deep Nearest Class Mean Classifiers.
- Matan Haroush, Tom Zahavy, Daniel J. Mankowitz, Shie Mannor: Learning How Not to Act in Text-based Games.
- Kangwook Lee, Kyungmin Lee, Hoon Kim, Changho Suh, Kannan Ramchandran: SGD on Random Mixtures: Private Machine Learning under Data Breach Threats.
- Zheng Xu, Yen-Chang Hsu, Jiawei Huang: Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks.
- Jin-Dong Dong, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, Min Sun: PPP-Net: Platform-aware Progressive Search for Pareto-optimal Neural Architectures.
- Joji Toyama, Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo: Expert-based reward function training: the novel method to train sequence generators.
- Tianyun Zhang, Shaokai Ye, Yipeng Zhang, Yanzhi Wang, Makan Fardad: Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers.
- Abdul Rahman Abdul Ghani, Nishanth Koganti, Alfredo Solano, Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo: Designing Efficient Neural Attention Systems Towards Achieving Human-level Sharp Vision.
- Feng Wang, Weiyang Liu, Hanjun Dai, Haijun Liu, Jian Cheng: Additive Margin Softmax for Face Verification.
- Asha Anoosheh, Eirikur Agustsson, Radu Timofte: ComboGAN: Unrestricted Scalability for Image Domain Translation.
- Xavier Bresson, Thomas Laurent: An Experimental Study of Neural Networks for Variable Graphs.
- Rinu Boney, Alexander Ilin: Semi-Supervised Few-Shot Learning with MAML.
- Yuechao Gao, Nianhong Liu, Sheng Zhang: Stacked Filters Stationary Flow For Hardware-Oriented Acceleration Of Deep Convolutional Neural Networks.
- Yoshihiro Yamada, Masakazu Iwamura, Koichi Kise: ShakeDrop regularization.
- Fei Wang, Tiark Rompf: A Language and Compiler View on Differentiable Programming.
- David Lopez-Paz, Levent Sagun: Easing non-convex optimization with neural networks.
- Maxwell I. Nye, Andrew Saxe: Are Efficient Deep Representations Learnable?
- Trieu H. Trinh, Andrew M. Dai, Minh-Thang Luong, Quoc V. Le: Learning Longer-term Dependencies in RNNs with Auxiliary Losses.
- Zhang-Wei Hong, Tzu-Yun Shann, Shih-Yang Su, Yi-Hsiang Chang, Chun-Yi Lee: Diversity-Driven Exploration Strategy for Deep Reinforcement Learning.
- Jiajin Li, Baoxiang Wang: Policy Optimization with Second-Order Advantage Information.
- Yulia Rubanova, Ruian Shi, Roujia Li, Jeff Wintersinger, Amit G. Deshwar, Nil Sahin, Quaid Morris: Reconstructing evolutionary trajectories of mutations in cancer.
- Thomas Wolf, Julien Chaumond, Clement Delangue: Meta-Learning a Dynamical Language Model.
- Noe Casas, José A. R. Fonollosa, Marta R. Costa-jussà: A differentiable BLEU loss. Analysis and first results.
- Marek Krcál, Ondrej Svec, Martin Bálek, Otakar Jasek: Deep Convolutional Malware Classifiers Can Learn from Raw Executables and Labels Only.
- Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo: Censoring Representations with Multiple-Adversaries over Random Subspaces.
- Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos J. Storkey: Finding Flatter Minima with SGD.
- Siddhartha Brahma: SufiSent - Universal Sentence Representations Using Suffix Encodings.
- Chen Ma, Junfeng Wen, Yoshua Bengio: Universal Successor Representations for Transfer Reinforcement Learning.
- Namrata Anand, Possu Huang: Generative Modeling for Protein Structures.
- Jakob N. Foerster, Gregory Farquhar, Maruan Al-Shedivat, Tim Rocktäschel, Eric P. Xing, Shimon Whiteson: DiCE: The Infinitely Differentiable Monte-Carlo Estimator.
- Sachin Ravi, Hugo Larochelle: Meta-Learning for Batch Mode Active Learning.
- Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani: Combating Adversarial Attacks Using Sparse Representations.
- César Laurent, Thomas George, Xavier Bouthillier, Nicolas Ballas, Pascal Vincent: An Evaluation of Fisher Approximations Beyond Kronecker Factorization.
- David Pfau, Christopher P. Burgess: Minimally Redundant Laplacian Eigenmaps.
- Chris Donahue, Julian J. McAuley, Miller S. Puckette: Synthesizing Audio with GANs.
- Mateo Rojas-Carulla, Marco Baroni, David Lopez-Paz: Causal Discovery Using Proxy Variables.
- Sam Leroux, Pavlo Molchanov, Pieter Simoens, Bart Dhoedt, Thomas M. Breuel, Jan Kautz: IamNN: Iterative and Adaptive Mobile Neural Network for efficient image classification.
- Robert J. Wang, Xiang Li, Shuang Ao, Charles X. Ling: Pelee: A Real-Time Object Detection System on Mobile Devices.
- Yingzhen Yang, Jianchao Yang, Ning Xu, Wei Han, Nebojsa Jojic, Thomas S. Huang: 3D-FilterMap: A Compact Architecture for Deep Convolutional Neural Networks.
- Ciprian Florescu, Christian Igel: Resilient Backpropagation (Rprop) for Batch-learning in TensorFlow.
- Mika Sarkin Jain, Jack Lindsey: Semiparametric Reinforcement Learning.
- Cheng-Zhi Anna Huang, Sherol Chen, Mark J. Nelson, Douglas Eck: Towards Mixed-initiative generation of multi-channel sequential structure.
- Tian Guo, Tao Lin, Yao Lu: An interpretable LSTM neural network for autoregressive exogenous model.
- Kamil Bennani-Smires, Claudiu Musat, Andreea Hossmann, Michael Baeriswyl: GitGraph - from Computational Subgraphs to Smaller Architecture Search Spaces.
- Alexander A. Alemi, Ian Fischer: GILBO: One Metric to Measure Them All.
- Igor Mordatch: Concept Learning with Energy-Based Models.
- Ashutosh Kumar, Arijit Biswas, Subhajit Sanyal: eCommerceGAN: A Generative Adversarial Network for e-commerce.
- Richard Shin, Illia Polosukhin, Dawn Song: Towards Specification-Directed Program Repair.
- Julius Adebayo, Justin Gilmer, Ian J. Goodfellow, Been Kim: Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values.
- Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell: Learning Rich Image Representation with Deep Layer Aggregation.
- Lisa Lee, Emilio Parisotto, Devendra Singh Chaplot, Ruslan Salakhutdinov: LSTM Iteration Networks: An Exploration of Differentiable Path Finding.
- Peter H. Jin, Boris Ginsburg, Kurt Keutzer: Spatially Parallel Convolutions.
- Amy Zhang, Harsh Satija, Joelle Pineau: Decoupling Dynamics and Reward for Transfer Learning.
- Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu: On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples.
- Edgar Minasyan, Vinay Prabhu: Hockey-Stick GAN.
- Ivan Lobov: SpectralWords: Spectral Embeddings Approach to Word Similarity Task for Large Vocabularies.
- Shiyu Liang, Ruoyu Sun, Yixuan Li, R. Srikant: Understanding the Loss Surface of Single-Layered Neural Networks for Binary Classification.
- Yash Sharma, Pin-Yu Chen: Attacking the Madry Defense Model with $L_1$-based Adversarial Examples.
- Arjun Nitin Bhagoji, Warren He, Bo Li, Dawn Song: Black-box Attacks on Deep Neural Networks via Gradient Estimation.
- Zhe Li, Shuo Wang, Caiwen Ding, Qinru Qiu, Yanzhi Wang, Yun Liang: Efficient Recurrent Neural Networks using Structured Matrices in FPGAs.
- Phiala E. Shanahan, Daniel Trewartha, William Detmold: Neural network parameter regression for lattice quantum chromodynamics simulations in nuclear and particle physics.
- Zachary Nado, Jasper Snoek, Roger B. Grosse, David Duvenaud, Bowen Xu, James Martens: Stochastic Gradient Langevin dynamics that Exploit Neural Network Structure.
- Remi Tachet des Combes, Philip Bachman, Harm van Seijen: Learning Invariances for Policy Generalization.
- Lionel Gueguen, Alex Sergeev, Rosanne Liu, Jason Yosinski: Faster Neural Networks Straight from JPEG.
- KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard S. Zemel, Xaq Pitkow: Inference in probabilistic graphical models by Graph Neural Networks.
- Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao: A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training.
- Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, Ian J. Goodfellow: Realistic Evaluation of Semi-Supervised Learning Algorithms.
- Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein: Learning to Learn Without Labels.
- Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alyosha A. Efros, Sergey Levine: Conditional Networks for Few-Shot Semantic Segmentation.
- Stefan Falkner, Aaron Klein, Frank Hutter: Practical Hyperparameter Optimization for Deep Learning.
- George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine: The Mirage of Action-Dependent Baselines in Reinforcement Learning.
- Nicolas Le Roux, Reza Babanezhad, Pierre-Antoine Manzagol: Online variance-reducing optimization.
- Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron C. Courville: HoME: a Household Multimodal Environment.
- Alexander Chistyakov, Ekaterina Lobacheva, Alexander Shevelev, Alexey Romanenko: Monotonic models for real-time dynamic malware detection.
- Romain Laroche, Harm van Seijen: In reinforcement learning, all objective functions are not equal.
- Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, Anima Anandkumar: Compression by the signs: distributed learning is a two-way street.
- Rui Shu, Shengjia Zhao, Mykel J. Kochenderfer: Rethinking Style and Content Disentanglement in Variational Autoencoders.
- Max Kochurov, Timur Garipov, Dmitry Podoprikhin, Dmitry Molchanov, Arsenii Ashukha, Dmitry P. Vetrov: Bayesian Incremental Learning for Deep Neural Networks.
- Paul K. Rubenstein, Bernhard Schölkopf, Ilya O. Tolstikhin: Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders.
- Samuli Laine: Feature-Based Metrics for Exploring the Latent Space of Generative Models.
- Ryan Spring, Anshumali Shrivastava: Scalable Estimation via LSH Samplers (LSS).
- Roland Fernandez, Asli Celikyilmaz, Paul Smolensky, Rishabh Singh: Learning and Analyzing Vector Encoding of Symbolic Representation.
- Ysbrand Galama, Thomas Mensink: Iterative GANs for Rotating Visual Objects.
- Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, Pieter Abbeel: PixelSNAIL: An Improved Autoregressive Generative Model.
- Tian Qi Chen, Xuechen Li, Roger B. Grosse, David Duvenaud: Isolating Sources of Disentanglement in Variational Autoencoders.
- Rose Catherine, William W. Cohen: TransNets for Review Generation.
- Martin Simonovsky, Nikos Komodakis: Towards Variational Generation of Small Graphs.
- Yao-Hung Hubert Tsai, Denny Wu, Makoto Yamada, Ruslan Salakhutdinov, Ichiro Takeuchi, Kenji Fukumizu: Selecting the Best in GANs Family: a Post Selection Inference Framework.
- Andreea Bobu, Eric Tzeng, Judy Hoffman, Trevor Darrell: Adapting to Continuously Shifting Domains.
- Will Grathwohl, Elliot Creager, Seyed Kamyar Seyed Ghasemipour, Richard S. Zemel: Gradient-based Optimization of Neural Network Architecture.
- Facundo Sapienza, Pablo Groisman, Matthieu Jonckheere: Weighted Geodesic Distance Following Fermat's Principle.
- Satrajit Chatterjee: Learning and Memorization.
- Bruno Lecouat, Chuan Sheng Foo, Houssam Zenati, Vijay Ramaseshan Chandrasekhar: Semi-Supervised Learning With GANs: Revisiting Manifold Regularization.
- Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré: Learning Invariance with Compact Transforms.
- Amit Deshpande, Navin Goyal, Sushrut Karmalkar: Depth separation and weight-width trade-offs for sigmoidal neural networks.
- Ryan Szeto, Simon Stent, Germán Ros, Jason J. Corso: A Dataset To Evaluate The Representations Learned By Video Prediction Models.
- Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Tianhao Zhang, Pieter Abbeel, Sergey Levine: One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning.
- Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis R. Bach: Nonlinear Acceleration of CNNs.
- D. Sculley, Jasper Snoek, Alexander B. Wiltschko, Ali Rahimi: Winner's Curse? On Pace, Progress, and Empirical Rigor.
