{"title":"handong1587","link":[{"@attributes":{"href":"https:\/\/handong1587.github.io\/atom.xml","rel":"self","type":"application\/atom+xml"}},{"@attributes":{"href":"https:\/\/handong1587.github.io","rel":"alternate","type":"text\/html"}}],"updated":"2023-11-22T02:04:48+00:00","id":"https:\/\/handong1587.github.io","author":{"name":"handong1587","email":"handong1587@gmail.com"},"entry":[{"title":"BEV","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/deep_learning\/2022\/06\/27\/bev.html"}},"updated":"2022-06-27T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/deep_learning\/2022\/06\/27\/bev","content":"<h1 id=\"papers\">Papers<\/h1>\n\n<p><strong>Vision-Centric BEV Perception: A Survey<\/strong><\/p>\n\n<ul>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2208.02797\">https:\/\/arxiv.org\/abs\/2208.02797<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/4DVLab\/Vision-Centric-BEV-Perception\">https:\/\/github.com\/4DVLab\/Vision-Centric-BEV-Perception<\/a><\/li>\n<\/ul>\n\n<h1 id=\"multi-camera-3d-object-detection\">Multi-Camera 3D Object Detection<\/h1>\n\n<p><strong>Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2020<\/li>\n  <li>intro: NVIDIA, Vector Institute, University of Toronto<\/li>\n  <li>project page: <a href=\"https:\/\/nv-tlabs.github.io\/lift-splat-shoot\/\">https:\/\/nv-tlabs.github.io\/lift-splat-shoot\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2008.05711\">https:\/\/arxiv.org\/abs\/2008.05711<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/nv-tlabs\/lift-splat-shoot\">https:\/\/github.com\/nv-tlabs\/lift-splat-shoot<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVDet: High-Performance Multi-Camera 3D Object Detection in Bird-Eye-View<\/strong><\/p>\n\n<ul>\n  <li>intro: PhiGent Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2112.11790\">https:\/\/arxiv.org\/abs\/2112.11790<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVDet4D: Exploit Temporal Cues in Multi-camera 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: PhiGent Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.17054\">https:\/\/arxiv.org\/abs\/2203.17054<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVerse: Unified Perception and Prediction in Birds-Eye-View for Vision-Centric Autonomous Driving<\/strong><\/p>\n\n<ul>\n  <li>intro: Tsinghua University &amp; PhiGent Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2205.09743\">https:\/\/arxiv.org\/abs\/2205.09743<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/zhangyp15\/BEVerse\">https:\/\/github.com\/zhangyp15\/BEVerse<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVFormer: Learning Bird\u2019s-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers<\/strong><\/p>\n\n<ul>\n  <li>intro: Nanjing University &amp; Shanghai AI Laboratory &amp; The University of Hong Kong<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.17270\">https:\/\/arxiv.org\/abs\/2203.17270<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/zhiqi-li\/BEVFormer\">https:\/\/github.com\/zhiqi-li\/BEVFormer<\/a><\/li>\n<\/ul>\n\n<p><strong>HFT: Lifting Perspective Representations via Hybrid Feature Transformation<\/strong><\/p>\n\n<ul>\n  <li>intro: Institute of Automation, Chinese Academy of Sciences &amp; PhiGent Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2204.05068\">https:\/\/arxiv.org\/abs\/2204.05068<\/a><\/li>\n  <li>github: <a 
href=\"https:\/\/github.com\/JiayuZou2020\/HFT\">https:\/\/github.com\/JiayuZou2020\/HFT<\/a><\/li>\n<\/ul>\n\n<p><strong>M^2BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation<\/strong><\/p>\n\n<ul>\n  <li>project page: <a href=\"https:\/\/xieenze.github.io\/projects\/m2bev\/\">https:\/\/xieenze.github.io\/projects\/m2bev\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2204.05088\">https:\/\/arxiv.org\/abs\/2204.05088<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird\u2019s-Eye View Representation<\/strong><\/p>\n\n<ul>\n  <li>project page: <a href=\"https:\/\/bevfusion.mit.edu\/\">https:\/\/bevfusion.mit.edu\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2205.13542\">https:\/\/arxiv.org\/abs\/2205.13542<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/mit-han-lab\/bevfusion\">https:\/\/github.com\/mit-han-lab\/bevfusion<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework<\/strong><\/p>\n\n<ul>\n  <li>intro: Peking University &amp; Alibaba Group<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2205.13790\">https:\/\/arxiv.org\/abs\/2205.13790<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/ADLab-AutoDrive\/BEVFusion\">https:\/\/github.com\/ADLab-AutoDrive\/BEVFusion<\/a><\/li>\n<\/ul>\n\n<p><strong>A Simple Baseline for BEV Perception Without LiDAR<\/strong><\/p>\n\n<ul>\n  <li>intro: Carnegie Mellon University &amp; Toyota Research Institute<\/li>\n  <li>project page: <a href=\"http:\/\/www.cs.cmu.edu\/~aharley\/bev\/\">http:\/\/www.cs.cmu.edu\/~aharley\/bev\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.07959\">https:\/\/arxiv.org\/abs\/2206.07959<\/a><\/li>\n<\/ul>\n\n<p><strong>BEVDepth: Acquisition of Reliable Depth for Multi-view 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Megvii Inc. 
(Face++) &amp; Huazhong University of Science and Technology &amp; Xi\u2019an Jiaotong University<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.10092\">https:\/\/arxiv.org\/abs\/2206.10092<\/a><\/li>\n<\/ul>\n\n<p><strong>PolarFormer: Multi-camera 3D Object Detection with Polar Transformers<\/strong><\/p>\n\n<ul>\n  <li>intro: 1Fudan University &amp; CASIA &amp; Alibaba DAMO Academy &amp; University of Surrey<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.15398\">https:\/\/arxiv.org\/abs\/2206.15398<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/fudan-zvg\/PolarFormer\">https:\/\/github.com\/fudan-zvg\/PolarFormer<\/a><\/li>\n<\/ul>\n\n<p><strong>ORA3D: Overlap Region Aware Multi-view 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Korea University &amp; KAIST &amp; Hyundai Motor Company R&amp;D Division<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.00865\">https:\/\/arxiv.org\/abs\/2207.00865<\/a><\/li>\n<\/ul>\n\n<p><strong>MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Fudan University &amp; Meituan<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2209.03102\">https:\/\/arxiv.org\/abs\/2209.03102<\/a><\/li>\n<\/ul>\n\n<h1 id=\"hd-map-construction\">HD Map Construction<\/h1>\n\n<p><strong>HDMapNet: An Online HD Map Construction and Evaluation Framework<\/strong><\/p>\n\n<ul>\n  <li>intro: ICRA 2022<\/li>\n  <li>intro: Tsinghua University &amp; MIT &amp; Li Auto<\/li>\n  <li>project page: <a href=\"https:\/\/tsinghua-mars-lab.github.io\/HDMapNet\/\">https:\/\/tsinghua-mars-lab.github.io\/HDMapNet\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2107.06307\">https:\/\/arxiv.org\/abs\/2107.06307<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/Tsinghua-MARS-Lab\/HDMapNet\">https:\/\/github.com\/Tsinghua-MARS-Lab\/HDMapNet<\/a><\/li>\n<\/ul>\n\n<p><strong>VectorMapNet: End-to-end Vectorized HD Map Learning<\/strong><\/p>\n\n<ul>\n  <li>intro: Tsinghua University &amp; MIT &amp; Li Auto<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.08920\">https:\/\/arxiv.org\/abs\/2206.08920<\/a><\/li>\n<\/ul>\n\n<p><strong>UniFormer: Unified Multi-view Fusion Transformer for Spatial-Temporal Representation in Bird\u2019s-Eye-View<\/strong><\/p>\n\n<ul>\n  <li>intro: Zhejiang University &amp; DJI &amp; Shanghai AI Lab<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.08536\">https:\/\/arxiv.org\/abs\/2207.08536<\/a><\/li>\n<\/ul>\n\n<p><strong>MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction<\/strong><\/p>\n\n<ul>\n  <li>intro: University of Science &amp; Technology, Horizon Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2208.14437\">https:\/\/arxiv.org\/abs\/2208.14437<\/a><\/li>\n  <li>gihtub: <a href=\"https:\/\/github.com\/hustvl\/MapTR\">https:\/\/github.com\/hustvl\/MapTR<\/a><\/li>\n<\/ul>\n\n<h1 id=\"semantic-segmentation\">Semantic Segmentation<\/h1>\n\n<p><strong>LaRa: Latents and Rays for Multi-Camera Bird\u2019s-Eye-View Semantic Segmentation<\/strong><\/p>\n\n<ul>\n  <li>intro: Valeo.ai &amp; Inria<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.13294\">https:\/\/arxiv.org\/abs\/2206.13294<\/a><\/li>\n<\/ul>\n\n<p><strong>CoBEVT: Cooperative Bird\u2019s Eye View Semantic Segmentation with Sparse Transformers<\/strong><\/p>\n\n<ul>\n  <li>intro: University of California, Los Angeles &amp; University of Texas at Austin &amp; 
University of California<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.02202\">https:\/\/arxiv.org\/abs\/2207.02202<\/a><\/li>\n<\/ul>\n"},{"title":"3D","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/deep_learning\/2021\/07\/28\/3d.html"}},"updated":"2021-07-28T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/deep_learning\/2021\/07\/28\/3d","content":"<h1 id=\"papers\">Papers<\/h1>\n\n<p><strong>Expressive Body Capture: 3D Hands, Face, and Body from a Single Image<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2019<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/1904.05866\">https:\/\/arxiv.org\/abs\/1904.05866<\/a><\/li>\n  <li>project page: <a href=\"https:\/\/smpl-x.is.tue.mpg.de\/\">https:\/\/smpl-x.is.tue.mpg.de\/<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/vchoutas\/smplify-x\">https:\/\/github.com\/vchoutas\/smplify-x<\/a><\/li>\n<\/ul>\n\n<p><strong>Collaborative Regression of Expressive Bodies using Moderation<\/strong><\/p>\n\n<ul>\n  <li>intro: PIXIE<\/li>\n  <li>project page: <a href=\"https:\/\/pixie.is.tue.mpg.de\/\">https:\/\/pixie.is.tue.mpg.de\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2105.05301\">https:\/\/arxiv.org\/abs\/2105.05301<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/YadiraF\/PIXIE\">https:\/\/github.com\/YadiraF\/PIXIE<\/a><\/li>\n<\/ul>\n\n<p><strong>Hand Image Understanding via Deep Multi-Task Learning<\/strong><\/p>\n\n<ul>\n  <li>intro: ICCV 2021<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2107.11646\">https:\/\/arxiv.org\/abs\/2107.11646<\/a><\/li>\n<\/ul>\n\n<p><strong>VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild<\/strong><\/p>\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2108.02452\">https:\/\/arxiv.org\/abs\/2108.02452<\/a><\/p>\n\n<p><strong>EventHPE: Event-based 3D Human Pose and Shape Estimation<\/strong><\/p>\n\n<ul>\n  <li>intro: ICCV 2021<\/li>\n  <li>intro: University of Alberta &amp; Shandong University &amp; Celepixel Technology &amp; University of Guelph &amp; Nanyang Technological University<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2108.06819\">https:\/\/arxiv.org\/abs\/2108.06819<\/a><\/li>\n<\/ul>\n\n<h1 id=\"monocular-3d-object-detection\">Monocular 3D Object Detection<\/h1>\n\n<p><strong>Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss<\/strong><\/p>\n\n<ul>\n  <li>keywords: SS3D<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/1906.08070\">https:\/\/arxiv.org\/abs\/1906.08070<\/a><\/li>\n  <li>video: <a href=\"https:\/\/www.youtube.com\/playlist?list=PL4jJwJr7UjMb4bzLwUGHdVmhfNS2Ads_x\">https:\/\/www.youtube.com\/playlist?list=PL4jJwJr7UjMb4bzLwUGHdVmhfNS2Ads_x<\/a><\/li>\n<\/ul>\n\n<p><strong>M3D-RPN: Monocular 3D Region Proposal Network for Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ICCV 2019 oral<\/li>\n  <li>project page: <a href=\"http:\/\/cvlab.cse.msu.edu\/project-m3d-rpn.html\">http:\/\/cvlab.cse.msu.edu\/project-m3d-rpn.html<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/1907.06038\">https:\/\/arxiv.org\/abs\/1907.06038<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/garrickbrazil\/M3D-RPN\">https:\/\/github.com\/garrickbrazil\/M3D-RPN<\/a><\/li>\n<\/ul>\n\n<p><strong>Learning Depth-Guided Convolutions for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2020<\/li>\n  <li>arxiv: <a 
href=\"https:\/\/arxiv.org\/abs\/1912.04799\">https:\/\/arxiv.org\/abs\/1912.04799<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/dingmyu\/D4LCN\">https:\/\/github.com\/dingmyu\/D4LCN<\/a><\/li>\n<\/ul>\n\n<p><strong>RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2020<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2001.03343\">https:\/\/arxiv.org\/abs\/2001.03343<\/a><\/li>\n<\/ul>\n\n<p><strong>SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2020<\/li>\n  <li>intro: ZongMu Tech &amp; TU\/e<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2002.10111\">https:\/\/arxiv.org\/abs\/2002.10111<\/a><\/li>\n  <li>github(official): <a href=\"https:\/\/github.com\/lzccccc\/SMOKE\">https:\/\/github.com\/lzccccc\/SMOKE<\/a><\/li>\n<\/ul>\n\n<p><strong>Center3D: Center-based Monocular 3D Object Detection with Joint Depth Understanding<\/strong><\/p>\n\n<ul>\n  <li>keywords: one-stage anchor-free<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2005.13423\">https:\/\/arxiv.org\/abs\/2005.13423<\/a><\/li>\n<\/ul>\n\n<p><strong>Monocular Differentiable Rendering for Self-Supervised 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2020<\/li>\n  <li>intro: Preferred Networks, Inc &amp; Toyota Research Institute<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2009.14524\">https:\/\/arxiv.org\/abs\/2009.14524<\/a><\/li>\n<\/ul>\n\n<p><strong>M3DSSD: Monocular 3D Single Stage Object Detector<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2021<\/li>\n  <li>intro: Zhejiang University &amp; Mohamed bin Zayed University of Artificial Intelligence &amp; Inception Institute of Artificial Intelligence<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2103.13164\">https:\/\/arxiv.org\/abs\/2103.13164<\/a><\/li>\n<\/ul>\n\n<p><strong>Delving into Localization Errors for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2021<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2103.16237\">https:\/\/arxiv.org\/abs\/2103.16237<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/xinzhuma\/monodle\">https:\/\/github.com\/xinzhuma\/monodle<\/a><\/li>\n<\/ul>\n\n<p><strong>Depth-conditioned Dynamic Message Propagation for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>github: CVPR 2021<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2103.16470\">https:\/\/arxiv.org\/abs\/2103.16470<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/fudan-zvg\/DDMP\">https:\/\/github.com\/fudan-zvg\/DDMP<\/a><\/li>\n<\/ul>\n\n<p><strong>GrooMeD-NMS: Grouped Mathematically Differentiable NMS for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2021<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2103.17202\">https:\/\/arxiv.org\/abs\/2103.17202<\/a><\/li>\n<\/ul>\n\n<p><strong>Objects are Different: Flexible Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2021<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2104.02323\">https:\/\/arxiv.org\/abs\/2104.02323<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/zhangyp15\/MonoFlex\">https:\/\/github.com\/zhangyp15\/MonoFlex<\/a><\/li>\n<\/ul>\n\n<p><strong>Geometry-based Distance Decomposition for Monocular 3D Object Detection<\/strong><\/p>\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2104.03775\">https:\/\/arxiv.org\/abs\/2104.03775<\/a><\/p>\n\n<p><strong>Geometry-aware 
data augmentation for monocular 3D object detection<\/strong><\/p>\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2104.05858\">https:\/\/arxiv.org\/abs\/2104.05858<\/a><\/p>\n\n<p><strong>OCM3D: Object-Centric Monocular 3D Object Detection<\/strong><\/p>\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2104.06041\">https:\/\/arxiv.org\/abs\/2104.06041<\/a><\/p>\n\n<p><strong>Exploring 2D Data Augmentation for 3D Monocular Object Detection<\/strong><\/p>\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2104.10786\">https:\/\/arxiv.org\/abs\/2104.10786<\/a><\/p>\n\n<p><strong>Progressive Coordinate Transforms for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Fudan University &amp; Amazon Inc.<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2108.05793\">https:\/\/arxiv.org\/abs\/2108.05793<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/amazon-research\/progressive-coordinate-transforms\">https:\/\/github.com\/amazon-research\/progressive-coordinate-transforms<\/a><\/li>\n<\/ul>\n\n<p><strong>AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ICCV 2021<\/li>\n  <li>intro: Baidu Research<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2108.11127\">https:\/\/arxiv.org\/abs\/2108.11127<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/zongdai\/AutoShape\">https:\/\/github.com\/zongdai\/AutoShape<\/a><\/li>\n<\/ul>\n\n<p><strong>Categorical Depth Distribution Network for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2021 oral<\/li>\n  <li>intro: University of Toronto Robotics Institute<\/li>\n  <li>project page: <a href=\"https:\/\/trailab.github.io\/CaDDN\/\">https:\/\/trailab.github.io\/CaDDN\/<\/a><\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2103.01100\">https:\/\/arxiv.org\/abs\/2103.01100<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/TRAILab\/CaDDN\">https:\/\/github.com\/TRAILab\/CaDDN<\/a><\/li>\n<\/ul>\n\n<p><strong>The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ICCV 2021<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2112.14023\">https:\/\/arxiv.org\/abs\/2112.14023<\/a><\/li>\n<\/ul>\n\n<p><strong>SGM3D: Stereo Guided Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Fudan University &amp; Baidu Inc.<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2112.01914\">https:\/\/arxiv.org\/abs\/2112.01914<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/zhouzheyuan\/sgm3d\">https:\/\/github.com\/zhouzheyuan\/sgm3d<\/a><\/li>\n<\/ul>\n\n<p><strong>MonoDistill: Learning Spatial Features for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ICLR 2022<\/li>\n  <li>intro: Dalian University of Technology &amp; The University of Sydney<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2201.10830\">https:\/\/arxiv.org\/abs\/2201.10830<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/monster-ghost\/MonoDistill\">https:\/\/github.com\/monster-ghost\/MonoDistill<\/a><\/li>\n<\/ul>\n\n<p><strong>Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2022<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.02112\">https:\/\/arxiv.org\/abs\/2203.02112<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/revisitq\/Pseudo-Stereo-3D\">https:\/\/github.com\/revisitq\/Pseudo-Stereo-3D<\/a><\/li>\n<\/ul>\n\n<p><strong>MonoJSG: Joint Semantic and Geometric Cost Volume for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2022<\/li>\n  <li>intro: The Hong Kong University of Science and Technology &amp; DJI<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.08563\">https:\/\/arxiv.org\/abs\/2203.08563<\/a><\/li>\n<\/ul>\n\n<p><strong>MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2022<\/li>\n  <li>intro: National Taiwan University &amp; Mobile Drive Technology<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.10981\">https:\/\/arxiv.org\/abs\/2203.10981<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/kuanchihhuang\/MonoDTR\">https:\/\/github.com\/kuanchihhuang\/MonoDTR<\/a><\/li>\n<\/ul>\n\n<p><strong>MonoDETR: Depth-aware Transformer for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Shanghai AI Laboratory &amp; Peking University &amp; The Chinese University of Hong Kong<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.13310\">https:\/\/arxiv.org\/abs\/2203.13310<\/a><\/li>\n<\/ul>\n\n<p><strong>Homography Loss for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2022<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2204.00754\">https:\/\/arxiv.org\/abs\/2204.00754<\/a><\/li>\n<\/ul>\n\n<p><strong>Towards Model Generalization for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Harbin Institute of Technology &amp; University of Science and Technology of China &amp; SenseTime Research<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2205.11664\">https:\/\/arxiv.org\/abs\/2205.11664<\/a><\/li>\n<\/ul>\n\n<p><strong>Delving into the Pre-training Paradigm of Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Tsinghua University &amp; Huazhong University of Science and Technology<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.03657\">https:\/\/arxiv.org\/abs\/2206.03657<\/a><\/li>\n<\/ul>\n\n<p><strong>MonoGround: Detecting Monocular 3D Objects from the Ground<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2022<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.07372\">https:\/\/arxiv.org\/abs\/2206.07372<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/cfzd\/MonoGround\">https:\/\/github.com\/cfzd\/MonoGround<\/a><\/li>\n<\/ul>\n\n<p><strong>Densely Constrained Depth Estimator for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2022<\/li>\n  <li>intro: CASIA &amp; UCAS &amp; HKISI CAS<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.10047\">https:\/\/arxiv.org\/abs\/2207.10047<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/BraveGroup\/DCD\">https:\/\/github.com\/BraveGroup\/DCD<\/a><\/li>\n<\/ul>\n\n<p><strong>Consistency of Implicit and Explicit Features Matters for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: DiDi<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.07933\">https:\/\/arxiv.org\/abs\/2207.07933<\/a><\/li>\n<\/ul>\n\n<p><strong>DID-M3D: Decoupling Instance Depth for Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2022<\/li>\n  <li>intro: Zhejiang University &amp; Fabu Inc.<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.08531\">https:\/\/arxiv.org\/abs\/2207.08531<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/SPengLiang\/DID-M3D\">https:\/\/github.com\/SPengLiang\/DID-M3D<\/a><\/li>\n<\/ul>\n\n<p><strong>DEVIANT: Depth EquiVarIAnt NeTwork for 
Monocular 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2022<\/li>\n  <li>intro: Michigan State University &amp; Meta AI &amp; Ford Motor Company<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.10758\">https:\/\/arxiv.org\/abs\/2207.10758<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/abhi1kumar\/DEVIANT\">https:\/\/github.com\/abhi1kumar\/DEVIANT<\/a><\/li>\n<\/ul>\n\n<p><strong>Monocular 3D Object Detection with Depth from Motion<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2022 Oral<\/li>\n  <li>intro: The Chinese University of Hong Kong &amp; Shanghai AI Laboratory<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.12988\">https:\/\/arxiv.org\/abs\/2207.12988<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/Tai-Wang\/Depth-from-Motion\">https:\/\/github.com\/Tai-Wang\/Depth-from-Motion<\/a><\/li>\n<\/ul>\n\n<p><strong>MV-FCOS3D++: Multi-View Camera-Only 4D Object Detection with Pretrained Monocular Backbones<\/strong><\/p>\n\n<ul>\n  <li>intro: The Chinese University of Hong Kong &amp; Hong Kong University of Science and Technology &amp; Nanyang Technological University<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.12716\">https:\/\/arxiv.org\/abs\/2207.12716<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/Tai-Wang\/Depth-from-Motion\">https:\/\/github.com\/Tai-Wang\/Depth-from-Motion<\/a><\/li>\n<\/ul>\n\n<p><strong>SEFormer: Structure Embedding Transformer for 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: Tsinghua University &amp; Australian National University &amp; National University of Singapore<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2209.01745\">https:\/\/arxiv.org\/abs\/2209.01745<\/a><\/li>\n<\/ul>\n\n<h1 id=\"multi-modal-3d-object-detection\">Multi-Modal 3D Object Detection<\/h1>\n\n<p><strong>AutoAlign: Pixel-Instance Feature Aggregation for Multi-Modal 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: IJCAI 2022<\/li>\n  <li>intro: University of Science and Technology of China &amp; Harbin Institute of Technology &amp; SenseTime Research &amp; The Chinese University of Hong Kong &amp; Tsinghua University<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2201.06493\">https:\/\/arxiv.org\/abs\/2201.06493<\/a><\/li>\n<\/ul>\n\n<p><strong>AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: ECCV 2022<\/li>\n  <li>intro: University of Science and Technology of China &amp; Harbin Institute of Technology &amp; SenseTime Research<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2207.10316\">https:\/\/arxiv.org\/abs\/2207.10316<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/zehuichen123\/AutoAlignV2\">https:\/\/github.com\/zehuichen123\/AutoAlignV2<\/a><\/li>\n<\/ul>\n\n<h1 id=\"monocular-3d-detection-and-tracking\">Monocular 3D Detection and Tracking<\/h1>\n\n<p><strong>Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR 2022<\/li>\n  <li>intro: PP-CEM &amp; Rising Auto<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2205.14882\">https:\/\/arxiv.org\/abs\/2205.14882<\/a><\/li>\n<\/ul>\n\n<p><strong>Depth Estimation Matters Most: Improving Per-Object Depth Estimation for Monocular 3D Detection and Tracking<\/strong><\/p>\n\n<ul>\n  <li>intro: Waymo LLC &amp; Johns Hopkins University &amp; Cornell University<\/li>\n  <li>arxiv: <a 
href=\"https:\/\/arxiv.org\/abs\/2206.03666\">https:\/\/arxiv.org\/abs\/2206.03666<\/a><\/li>\n<\/ul>\n\n<h1 id=\"multi-camera-3d-object-detection\">Multi-Camera 3D Object Detection<\/h1>\n\n<p><strong>PETR: Position Embedding Transformation for Multi-View 3D Object Detection<\/strong><\/p>\n\n<ul>\n  <li>intro: MEGVII Technology<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2203.05625\">https:\/\/arxiv.org\/abs\/2203.05625<\/a><\/li>\n<\/ul>\n\n<p><strong>PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images<\/strong><\/p>\n\n<ul>\n  <li>intro: MEGVII Technology<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.01256\">https:\/\/arxiv.org\/abs\/2206.01256<\/a><\/li>\n<\/ul>\n\n<h2 id=\"sparse4d\">Sparse4D<\/h2>\n\n<p><strong>Sparse4D: Multi-view 3D Object Detection with Sparse Spatial-Temporal Fusion<\/strong><\/p>\n\n<ul>\n  <li>intro: Horizon Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2211.10581\">https:\/\/arxiv.org\/abs\/2211.10581<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/linxuewu\/Sparse4D\">https:\/\/github.com\/linxuewu\/Sparse4D<\/a><\/li>\n<\/ul>\n\n<p><strong>Sparse4D v2: Recurrent Temporal Fusion with Sparse Model<\/strong><\/p>\n\n<ul>\n  <li>intro: Horizon Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2305.14018\">https:\/\/arxiv.org\/abs\/2305.14018<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/linxuewu\/Sparse4D\">https:\/\/github.com\/linxuewu\/Sparse4D<\/a><\/li>\n<\/ul>\n\n<p><strong>Sparse4D v3: Advancing End-to-End 3D Detection and Tracking<\/strong><\/p>\n\n<ul>\n  <li>intro: Horizon Robotics<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2311.11722\">https:\/\/arxiv.org\/abs\/2311.11722<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/linxuewu\/Sparse4D\">https:\/\/github.com\/linxuewu\/Sparse4D<\/a><\/li>\n<\/ul>\n\n<h1 id=\"multi-camera-multiple-3d-object-tracking\">Multi-Camera Multiple 3D Object Tracking<\/h1>\n\n<p><strong>Multi-Camera Multiple 3D Object Tracking on the Move for Autonomous Vehicles<\/strong><\/p>\n\n<ul>\n  <li>intro: CVPR Workshop 2022<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2204.09151\">https:\/\/arxiv.org\/abs\/2204.09151<\/a><\/li>\n<\/ul>\n\n<p><strong>SRCN3D: Sparse R-CNN 3D Surround-View Camera Object Detection and Tracking for Autonomous Driving<\/strong><\/p>\n\n<ul>\n  <li>intro: Tsinghua University<\/li>\n  <li>arxiv: <a href=\"https:\/\/arxiv.org\/abs\/2206.14451\">https:\/\/arxiv.org\/abs\/2206.14451<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/synsin0\/SRCN3D\">https:\/\/github.com\/synsin0\/SRCN3D<\/a><\/li>\n<\/ul>\n"},{"title":"Study Resources","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/study\/2018\/04\/18\/resources.html"}},"updated":"2018-04-18T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/study\/2018\/04\/18\/resources","content":"<p><strong>draw.io<\/strong><\/p>\n\n<ul>\n  <li>intro: an app to create diagrams. 
You can use it online, download it or add it to Android and iOS for free<\/li>\n  <li>homepage: <a href=\"https:\/\/www.draw.io\/\">https:\/\/www.draw.io\/<\/a><\/li>\n<\/ul>\n"},{"title":"Keep Up With New Trends","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/deep_learning\/2017\/12\/18\/keep-up-with-new-trends.html"}},"updated":"2017-12-18T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/deep_learning\/2017\/12\/18\/keep-up-with-new-trends","content":"<p><strong>ComputerVisionFoundation Videos<\/strong><\/p>\n\n<p><a href=\"https:\/\/www.youtube.com\/channel\/UC0n76gicaarsN_Y9YShWwhw\/playlists\">https:\/\/www.youtube.com\/channel\/UC0n76gicaarsN_Y9YShWwhw\/playlists<\/a><\/p>\n\n<h1 id=\"eccv-2018\">ECCV 2018<\/h1>\n\n<p><strong>ECCV 2018 papers<\/strong><\/p>\n\n<p><a href=\"http:\/\/openaccess.thecvf.com\/ECCV2018.py\">http:\/\/openaccess.thecvf.com\/ECCV2018.py<\/a><\/p>\n\n<h1 id=\"icml-2018\">ICML 2018<\/h1>\n\n<p><strong>DeepMind papers at ICML 2018<\/strong><\/p>\n\n<p><strong>Facebook Research at ICML 2018<\/strong><\/p>\n\n<p><a href=\"https:\/\/research.fb.com\/facebook-research-at-icml-2018\/\">https:\/\/research.fb.com\/facebook-research-at-icml-2018\/<\/a><\/p>\n\n<p><strong>ICML 2018 Notes<\/strong><\/p>\n\n<ul>\n  <li>day1: <a href=\"https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/10\/icml18-tutorials.html\">https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/10\/icml18-tutorials.html<\/a><\/li>\n  <li>day2: <a href=\"https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/11\/icml18-day-2.html\">https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/11\/icml18-day-2.html<\/a><\/li>\n  <li>day3: <a href=\"https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/12\/icml18-day-3.html\">https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/12\/icml18-day-3.html<\/a><\/li>\n  <li>day4: <a href=\"https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/13\/icml18-day-4.html\">https:\/\/gmarti.gitlab.io\/ml\/2018\/07\/13\/icml18-day-4.html<\/a><\/li>\n<\/ul>\n\n<p><strong>ICML 2018 Notes<\/strong><\/p>\n\n<ul>\n  <li>notes: <a href=\"https:\/\/david-abel.github.io\/blog\/posts\/misc\/icml_2018.pdf\">https:\/\/david-abel.github.io\/blog\/posts\/misc\/icml_2018.pdf<\/a><\/li>\n  <li>github: <a href=\"https:\/\/david-abel.github.io\/\">https:\/\/david-abel.github.io\/<\/a><\/li>\n<\/ul>\n\n<h1 id=\"ijcai-2018\">IJCAI 2018<\/h1>\n\n<p><strong>Proceedings of IJCAI 2018<\/strong><\/p>\n\n<p><a href=\"https:\/\/www.ijcai.org\/proceedings\/2018\/\">https:\/\/www.ijcai.org\/proceedings\/2018\/<\/a><\/p>\n\n<h1 id=\"cvpr-2018\">CVPR 2018<\/h1>\n\n<p><strong>CVPR 2018 open access<\/strong><\/p>\n\n<p><a href=\"http:\/\/openaccess.thecvf.com\/CVPR2018.py\">http:\/\/openaccess.thecvf.com\/CVPR2018.py<\/a><\/p>\n\n<p><strong>CVPR18: Tutorials<\/strong><\/p>\n\n<ul>\n  <li>youtube: <a href=\"https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucD54Ym5XKGqTv9xNsrOX0aS\">https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucD54Ym5XKGqTv9xNsrOX0aS<\/a><\/li>\n  <li>bilibili: <a href=\"https:\/\/www.bilibili.com\/video\/av27038992\/\">https:\/\/www.bilibili.com\/video\/av27038992\/<\/a><\/li>\n<\/ul>\n\n<h1 id=\"valse-2018\">VALSE 2018<\/h1>\n\n<p><a href=\"http:\/\/ice.dlut.edu.cn\/valse2018\/programs.html\">http:\/\/ice.dlut.edu.cn\/valse2018\/programs.html<\/a><\/p>\n\n<h1 id=\"nips-2017\">NIPS 2017<\/h1>\n\n<p><strong>NIPS 2017 Spotlights<\/strong><\/p>\n\n<ul>\n  <li>youtube: <a 
href=\"https:\/\/www.youtube.com\/playlist?list=PLbVjlVq6hjK89WtlGHdC_PNwcawrzht5S\">https:\/\/www.youtube.com\/playlist?list=PLbVjlVq6hjK89WtlGHdC_PNwcawrzht5S<\/a><\/li>\n<\/ul>\n\n<p><strong>NIPS 2017 \u2014 notes and thoughs<\/strong><\/p>\n\n<p><a href=\"https:\/\/olgalitech.wordpress.com\/2017\/12\/12\/nips-2017-notes-and-thoughs\/\">https:\/\/olgalitech.wordpress.com\/2017\/12\/12\/nips-2017-notes-and-thoughs\/<\/a><\/p>\n\n<p><strong>NIPS 2017 Notes<\/strong><\/p>\n\n<ul>\n  <li>notes: <a href=\"https:\/\/cs.brown.edu\/~dabel\/blog\/posts\/misc\/nips_2017.pdf\">https:\/\/cs.brown.edu\/~dabel\/blog\/posts\/misc\/nips_2017.pdf<\/a><\/li>\n  <li>blog: <a href=\"https:\/\/cs.brown.edu\/~dabel\/blog.html\">https:\/\/cs.brown.edu\/~dabel\/blog.html<\/a><\/li>\n<\/ul>\n\n<p><strong>NIPS 2017<\/strong><\/p>\n\n<ul>\n  <li>intro: A list of resources for all invited talks, tutorials, workshops and presentations at NIPS 2017<\/li>\n  <li>github: <a href=\"https:\/\/github.com\/\/hindupuravinash\/nips2017\">https:\/\/github.com\/\/hindupuravinash\/nips2017<\/a><\/li>\n<\/ul>\n\n<p><strong>Global NIPS 2017 Paper Implementation Challenge<\/strong><\/p>\n\n<ul>\n  <li>intro: 8th December 2017 - 31st January 2018 (Application closed)<\/li>\n  <li>homepage: <a href=\"https:\/\/nurture.ai\/nips-challenge\">https:\/\/nurture.ai\/nips-challenge<\/a><\/li>\n<\/ul>\n\n<h1 id=\"iccv-2017\">ICCV 2017<\/h1>\n\n<p><strong>ICCV 2017 open access<\/strong><\/p>\n\n<p><a href=\"http:\/\/openaccess.thecvf.com\/ICCV2017.py\">http:\/\/openaccess.thecvf.com\/ICCV2017.py<\/a><\/p>\n\n<p><strong>ICCV 2017 Workshops, Venice Italy<\/strong><\/p>\n\n<p><a href=\"http:\/\/openaccess.thecvf.com\/ICCV2017_workshops\/menu.py\">http:\/\/openaccess.thecvf.com\/ICCV2017_workshops\/menu.py<\/a><\/p>\n\n<p><strong>ICCV17 Tutorials<\/strong><\/p>\n\n<p><a href=\"https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucBGj2Hmv1e7CP9U82kHWVOT\">https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucBGj2Hmv1e7CP9U82kHWVOT<\/a><\/p>\n\n<p><strong>Facebook at ICCV 2017<\/strong><\/p>\n\n<p><a href=\"https:\/\/research.fb.com\/facebook-at-iccv-2017\/\">https:\/\/research.fb.com\/facebook-at-iccv-2017\/<\/a><\/p>\n\n<p><strong>ICCV 2017 Tutorial on GANs<\/strong><\/p>\n\n<ul>\n  <li>homepage: <a href=\"https:\/\/sites.google.com\/view\/iccv-2017-gans\/schedule\">https:\/\/sites.google.com\/view\/iccv-2017-gans\/schedule<\/a><\/li>\n  <li>youtube: {https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucDEzjMTgh1cgtTIODZe3prZ}(https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucDEzjMTgh1cgtTIODZe3prZ)<\/li>\n<\/ul>\n\n<h1 id=\"ilsvrc-2017\">ILSVRC 2017<\/h1>\n\n<p><strong>Overview of ILSVRC 2017<\/strong><\/p>\n\n<p><a href=\"http:\/\/image-net.org\/challenges\/talks_2017\/ILSVRC2017_overview.pdf\">http:\/\/image-net.org\/challenges\/talks_2017\/ILSVRC2017_overview.pdf<\/a><\/p>\n\n<p><strong>ImageNet: Where are we going? 
And where have we been?<\/strong><\/p>\n\n<ul>\n  <li>intro: by Fei-Fei Li, Jia Deng<\/li>\n  <li>slides: <a href=\"http:\/\/image-net.org\/challenges\/talks_2017\/imagenet_ilsvrc2017_v1.0.pdf\">http:\/\/image-net.org\/challenges\/talks_2017\/imagenet_ilsvrc2017_v1.0.pdf<\/a><\/li>\n<\/ul>\n\n<h1 id=\"deep-learning-and-reinforcement-learning-summer-school-2017\">Deep Learning and Reinforcement Learning Summer School 2017<\/h1>\n\n<ul>\n  <li>homepage: <a href=\"https:\/\/mila.umontreal.ca\/en\/cours\/deep-learning-summer-school-2017\/\">https:\/\/mila.umontreal.ca\/en\/cours\/deep-learning-summer-school-2017\/<\/a><\/li>\n  <li>slides: <a href=\"https:\/\/mila.umontreal.ca\/en\/cours\/deep-learning-summer-school-2017\/slides\/\">https:\/\/mila.umontreal.ca\/en\/cours\/deep-learning-summer-school-2017\/slides\/<\/a><\/li>\n  <li>mirror: <a href=\"https:\/\/pan.baidu.com\/s\/1eSvijvW#list\/path=%2F\">https:\/\/pan.baidu.com\/s\/1eSvijvW#list\/path=%2F<\/a><\/li>\n<\/ul>\n\n<h1 id=\"iclr-2017\">ICLR 2017<\/h1>\n\n<p><strong>ICLR 2017 Videos<\/strong><\/p>\n\n<p><a href=\"https:\/\/www.facebook.com\/pg\/iclr.cc\/videos\/\">https:\/\/www.facebook.com\/pg\/iclr.cc\/videos\/<\/a><\/p>\n\n<h1 id=\"cvpr-2017\">CVPR 2017<\/h1>\n\n<p><strong>CVPR 2017 open access<\/strong><\/p>\n\n<p><a href=\"http:\/\/openaccess.thecvf.com\/CVPR2017.py\">http:\/\/openaccess.thecvf.com\/CVPR2017.py<\/a><\/p>\n\n<p><strong>CVPR 2017 Workshops, Honolulu Hawaii<\/strong><\/p>\n\n<p><a href=\"http:\/\/openaccess.thecvf.com\/CVPR2017_workshops\/menu.py\">http:\/\/openaccess.thecvf.com\/CVPR2017_workshops\/menu.py<\/a><\/p>\n\n<h2 id=\"cvpr-2017-tutorial\">CVPR 2017 Tutorial<\/h2>\n\n<p><strong>CVPR\u201917 Tutorial: Deep Learning for Objects and Scenes<\/strong><\/p>\n\n<p><a href=\"http:\/\/deeplearning.csail.mit.edu\/\">http:\/\/deeplearning.csail.mit.edu\/<\/a><\/p>\n\n<p><strong>Lecture 1: Learning Deep Representations for Visual Recognition<\/strong><\/p>\n\n<ul>\n  <li>intro: by Kaiming He<\/li>\n  <li>slides: <a href=\"http:\/\/deeplearning.csail.mit.edu\/cvpr2017_tutorial_kaiminghe.pdf\">http:\/\/deeplearning.csail.mit.edu\/cvpr2017_tutorial_kaiminghe.pdf<\/a><\/li>\n  <li>youtube: <a href=\"https:\/\/www.youtube.com\/watch?v=jHv37mKAhV4\">https:\/\/www.youtube.com\/watch?v=jHv37mKAhV4<\/a><\/li>\n<\/ul>\n\n<p><strong>Lecture 2: Deep Learning for Instance-level Object Understanding<\/strong><\/p>\n\n<ul>\n  <li>intro: by Ross Girshick<\/li>\n  <li>slides: <a href=\"http:\/\/deeplearning.csail.mit.edu\/instance_ross.pdf\">http:\/\/deeplearning.csail.mit.edu\/instance_ross.pdf<\/a><\/li>\n  <li>youtube: <a href=\"https:\/\/www.youtube.com\/watch?v=jHv37mKAhV4&amp;feature=youtu.be&amp;t=2349\">https:\/\/www.youtube.com\/watch?v=jHv37mKAhV4&amp;feature=youtu.be&amp;t=2349<\/a><\/li>\n<\/ul>\n\n<h1 id=\"nips-2016\">NIPS 2016<\/h1>\n\n<p><strong>NIPS 2016 Schedule<\/strong><\/p>\n\n<p><a href=\"https:\/\/nips.cc\/Conferences\/2016\/Schedule\">https:\/\/nips.cc\/Conferences\/2016\/Schedule<\/a><\/p>\n\n<p><strong>DeepMind Papers @ NIPS (Part 1)<\/strong><\/p>\n\n<p><a href=\"https:\/\/deepmind.com\/blog\/deepmind-papers-nips-part-1\/\">https:\/\/deepmind.com\/blog\/deepmind-papers-nips-part-1\/<\/a><\/p>\n\n<p><strong>DeepMind Papers @ NIPS (Part 2)<\/strong><\/p>\n\n<p><a href=\"https:\/\/deepmind.com\/blog\/deepmind-papers-nips-part-2\/\">https:\/\/deepmind.com\/blog\/deepmind-papers-nips-part-2\/<\/a><\/p>\n\n<p><strong>DeepMind Papers @ NIPS (Part 3)<\/strong><\/p>\n\n<p><a 
href=\"https:\/\/deepmind.com\/blog\/deepmind-papers-nips-part-3\/\">https:\/\/deepmind.com\/blog\/deepmind-papers-nips-part-3\/<\/a><\/p>\n\n<p><strong>NIPS 2016 Review, Days 0 &amp; 1<\/strong><\/p>\n\n<p><a href=\"https:\/\/gab41.lab41.org\/nips-2016-review-day-1-6e504bcf1451#.ldaft47ea\">https:\/\/gab41.lab41.org\/nips-2016-review-day-1-6e504bcf1451#.ldaft47ea<\/a><\/p>\n\n<p><strong>NIPS 2016 Review, Day 2<\/strong><\/p>\n\n<p><a href=\"https:\/\/gab41.lab41.org\/nips-2016-review-day-2-daff1088135e#.o9r8li43x\">https:\/\/gab41.lab41.org\/nips-2016-review-day-2-daff1088135e#.o9r8li43x<\/a><\/p>\n\n<p><strong>NIPS 2016\u200a\u2014\u200aDay 1 Highlights<\/strong><\/p>\n\n<p><a href=\"https:\/\/blog.insightdatascience.com\/nips-2016-day-1-6ae1207cab82#.c248ycixg\">https:\/\/blog.insightdatascience.com\/nips-2016-day-1-6ae1207cab82#.c248ycixg<\/a><\/p>\n\n<p><strong>NIPS 2016\u200a\u2014\u200aDay 2 Highlights: Platform wars, RL and RNNs<\/strong><\/p>\n\n<p><a href=\"https:\/\/blog.insightdatascience.com\/nips-2016-day-2-highlights-platform-wars-rl-and-rnns-9dca43bc1448#.zgtu1rtr0\">https:\/\/blog.insightdatascience.com\/nips-2016-day-2-highlights-platform-wars-rl-and-rnns-9dca43bc1448#.zgtu1rtr0<\/a><\/p>\n\n<p><strong>50 things I learned at NIPS 2016<\/strong><\/p>\n\n<p><a href=\"https:\/\/blog.ought.com\/nips-2016-875bb8fadb8c#.f1a1161hq\">https:\/\/blog.ought.com\/nips-2016-875bb8fadb8c#.f1a1161hq<\/a><\/p>\n\n<p><strong>NIPS 2016 Highlights<\/strong><\/p>\n\n<ul>\n  <li>slides: <a href=\"http:\/\/www.slideshare.net\/SebastianRuder\/nips-2016-highlights-sebastian-ruder\">http:\/\/www.slideshare.net\/SebastianRuder\/nips-2016-highlights-sebastian-ruder<\/a><\/li>\n  <li>mirror: <a href=\"https:\/\/pan.baidu.com\/s\/1kUKnCJ9\">https:\/\/pan.baidu.com\/s\/1kUKnCJ9<\/a><\/li>\n<\/ul>\n\n<p><strong>Brad Neuberg\u2019s NIPS 2016 Notes<\/strong><\/p>\n\n<ul>\n  <li>blog: <a href=\"https:\/\/paper.dropbox.com\/doc\/Brad-Neubergs-NIPS-2016-Notes-XUFRdpNYyBhau0gWcybRo\">https:\/\/paper.dropbox.com\/doc\/Brad-Neubergs-NIPS-2016-Notes-XUFRdpNYyBhau0gWcybRo<\/a><\/li>\n<\/ul>\n\n<p><strong>All Code Implementations for NIPS 2016 papers<\/strong><\/p>\n\n<ul>\n  <li>reddit: <a href=\"https:\/\/www.reddit.com\/r\/MachineLearning\/comments\/5hwqeb\/project_all_code_implementations_for_nips_2016\/\">https:\/\/www.reddit.com\/r\/MachineLearning\/comments\/5hwqeb\/project_all_code_implementations_for_nips_2016\/<\/a><\/li>\n<\/ul>\n\n<h1 id=\"heuritech-deep-learning-meetup\">Heuritech Deep Learning Meetup<\/h1>\n\n<p><strong>Heuritech Deep Learning Meetup #7: more than 100 attendees for convolutionnal neural networks<\/strong><\/p>\n\n<ul>\n  <li>blog: <a href=\"https:\/\/blog.heuritech.com\/2016\/11\/03\/heuritech-deep-learning-meetup-7-more-than-100-attendees-for-convolutionnal-neural-networks\/\">https:\/\/blog.heuritech.com\/2016\/11\/03\/heuritech-deep-learning-meetup-7-more-than-100-attendees-for-convolutionnal-neural-networks\/<\/a><\/li>\n<\/ul>\n\n<h1 id=\"eccv-2016\">ECCV 2016<\/h1>\n\n<p><strong>ECCV Brings Together the Brightest Minds in Computer Vision<\/strong><\/p>\n\n<p><a href=\"https:\/\/research.facebook.com\/blog\/eccv-brings-together-the-brightest-minds-in-computer-vision\/\">https:\/\/research.facebook.com\/blog\/eccv-brings-together-the-brightest-minds-in-computer-vision\/<\/a><\/p>\n\n<p><strong>ECCV in a theatrical setting<\/strong><\/p>\n\n<ul>\n  <li>blog: <a 
href=\"http:\/\/zoyathinks.blogspot.jp\/2016\/10\/eccv-in-theatrical-setting.html\">http:\/\/zoyathinks.blogspot.jp\/2016\/10\/eccv-in-theatrical-setting.html<\/a><\/li>\n<\/ul>\n\n<h1 id=\"2nd-imagenet--coco-joint-workshop\">2nd ImageNet + COCO Joint Workshop<\/h1>\n\n<p><strong>2nd ImageNet and COCO Visual Recognition Challenges Joint Workshop<\/strong><\/p>\n\n<p><a href=\"http:\/\/image-net.org\/challenges\/ilsvrc+coco2016\">http:\/\/image-net.org\/challenges\/ilsvrc+coco2016<\/a><\/p>\n\n<h1 id=\"dlss-2016\">DLSS 2016<\/h1>\n\n<p><strong>Montr\u00e9al Deep Learning Summer School 2016<\/strong><\/p>\n\n<ul>\n  <li>video lectures: <a href=\"http:\/\/videolectures.net\/deeplearning2016_montreal\/\">http:\/\/videolectures.net\/deeplearning2016_montreal\/<\/a><\/li>\n  <li>material: <a href=\"https:\/\/github.com\/mila-udem\/summerschool2016\">https:\/\/github.com\/mila-udem\/summerschool2016<\/a><\/li>\n  <li>slides: <a href=\"https:\/\/sites.google.com\/site\/deeplearningsummerschool2016\/speakers\">https:\/\/sites.google.com\/site\/deeplearningsummerschool2016\/speakers<\/a><\/li>\n  <li>mirror: <a href=\"http:\/\/pan.baidu.com\/s\/1kUWrWI7\">http:\/\/pan.baidu.com\/s\/1kUWrWI7<\/a><\/li>\n<\/ul>\n\n<p><strong>Highlights from the Deep Learning Summer School (Part 1)<\/strong><\/p>\n\n<p><a href=\"https:\/\/vkrakovna.wordpress.com\/2016\/08\/25\/highlights-from-the-deep-learning-summer-school-part-1\/\">https:\/\/vkrakovna.wordpress.com\/2016\/08\/25\/highlights-from-the-deep-learning-summer-school-part-1\/<\/a><\/p>\n\n<p><strong>What I learned from Deep Learning Summer School 2016<\/strong><\/p>\n\n<p><a href=\"https:\/\/www.linkedin.com\/pulse\/what-i-learned-from-deep-learning-summer-school-2016-hamid-palangi\">https:\/\/www.linkedin.com\/pulse\/what-i-learned-from-deep-learning-summer-school-2016-hamid-palangi<\/a><\/p>\n\n<h1 id=\"icml-2016\">ICML 2016<\/h1>\n\n<p><strong>10 Papers from ICML and CVPR<\/strong><\/p>\n\n<p><a href=\"https:\/\/leotam.github.io\/general\/2016\/07\/12\/ICMLcVPR.html\">https:\/\/leotam.github.io\/general\/2016\/07\/12\/ICMLcVPR.html<\/a><\/p>\n\n<p><strong>ICML 2016 was awesome<\/strong><\/p>\n\n<ul>\n  <li>blog: <a href=\"http:\/\/hunch.net\/?p=4710099\">http:\/\/hunch.net\/?p=4710099<\/a><\/li>\n<\/ul>\n\n<p><strong>Highlights from ICML 2016<\/strong><\/p>\n\n<p><a href=\"http:\/\/www.lunametrics.com\/blog\/2016\/07\/05\/highlights-icml-2016\/\">http:\/\/www.lunametrics.com\/blog\/2016\/07\/05\/highlights-icml-2016\/<\/a><\/p>\n\n<p><strong>ICML 2016 tutorials<\/strong><\/p>\n\n<p><a href=\"http:\/\/icml.cc\/2016\/?page_id=97\">http:\/\/icml.cc\/2016\/?page_id=97<\/a><\/p>\n\n<p><strong>Deep Learning, Tools and Methods workshop<\/strong><\/p>\n\n<ul>\n  <li>intro: 3 hour tutorials on Torch, Tensorflow and Talks by Yoshua Bengio, NVIDIA, AMD<\/li>\n  <li>homepage: <a href=\"https:\/\/portal.klewel.com\/watch\/webcast\/deep-learning-tools-and-methods-workshop\/\">https:\/\/portal.klewel.com\/watch\/webcast\/deep-learning-tools-and-methods-workshop\/<\/a><\/li>\n  <li>slides: <a href=\"http:\/\/www.idiap.ch\/workshop\/dltm\/\">http:\/\/www.idiap.ch\/workshop\/dltm\/<\/a><\/li>\n  <li>Torch tutorials: <a href=\"https:\/\/github.com\/szagoruyko\/idiap-tutorials\">https:\/\/github.com\/szagoruyko\/idiap-tutorials<\/a><\/li>\n<\/ul>\n\n<p><strong>ICML 2016 Conference and Workshops<\/strong><\/p>\n\n<ul>\n  <li>intro: talks, orals, tutorials<\/li>\n  <li>homepage: <a 
href=\"http:\/\/techtalks.tv\/icml\/2016\/\">http:\/\/techtalks.tv\/icml\/2016\/<\/a><\/li>\n<\/ul>\n\n<h1 id=\"iclr-2016\">ICLR 2016<\/h1>\n\n<p><strong>Deep Learning Trends @ ICLR 2016<\/strong><\/p>\n\n<p><a href=\"http:\/\/www.computervisionblog.com\/2016\/06\/deep-learning-trends-iclr-2016.html\">http:\/\/www.computervisionblog.com\/2016\/06\/deep-learning-trends-iclr-2016.html<\/a><\/p>\n\n<p><strong>WACV 2016: IEEE Winter Conference on Applications of Computer Vision<\/strong><\/p>\n\n<ul>\n  <li>homepage: <a href=\"http:\/\/www.wacv16.org\/\">http:\/\/www.wacv16.org\/<\/a><\/li>\n  <li>youtube: <a href=\"https:\/\/www.youtube.com\/channel\/UCdV5ooxkvhbpmv0_3MzIo9g\/videos\">https:\/\/www.youtube.com\/channel\/UCdV5ooxkvhbpmv0_3MzIo9g\/videos<\/a><\/li>\n<\/ul>\n\n<p><strong>ICLR 2016 Takeaways: Adversarial Models &amp; Optimization<\/strong><\/p>\n\n<p><a href=\"https:\/\/indico.io\/blog\/iclr-2016-takeaways\/\">https:\/\/indico.io\/blog\/iclr-2016-takeaways\/<\/a><\/p>\n\n<p><strong>tensor talk - Latest AI Code: conference-iclr-2016<\/strong><\/p>\n\n<p><a href=\"https:\/\/tensortalk.com\/?cat=conference-iclr-2016\">https:\/\/tensortalk.com\/?cat=conference-iclr-2016<\/a><\/p>\n\n<h1 id=\"cvpr-2016\">CVPR 2016<\/h1>\n\n<p><strong>CVPR 2016<\/strong><\/p>\n\n<ul>\n  <li>homepage: <a href=\"http:\/\/cvpr2016.thecvf.com\/program\/main_conference\">http:\/\/cvpr2016.thecvf.com\/program\/main_conference<\/a><\/li>\n  <li>Object Recognition and Detection: <a href=\"http:\/\/cvpr2016.thecvf.com\/program\/main_conference#O1-2A\">http:\/\/cvpr2016.thecvf.com\/program\/main_conference#O1-2A<\/a><\/li>\n  <li>Object Detection 1: <a href=\"http:\/\/cvpr2016.thecvf.com\/program\/main_conference#S1-2A\">http:\/\/cvpr2016.thecvf.com\/program\/main_conference#S1-2A<\/a><\/li>\n  <li>Object Detection 2: <a href=\"http:\/\/cvpr2016.thecvf.com\/program\/main_conference#S2-2A\">http:\/\/cvpr2016.thecvf.com\/program\/main_conference#S2-2A<\/a><\/li>\n<\/ul>\n\n<p><strong>Workshop @ CVPR16: Deep Vision Workshop<\/strong><\/p>\n\n<ul>\n  <li>youtube: <a href=\"https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucC8uLRtWw8fdvVr3DdwzAeH\">https:\/\/www.youtube.com\/playlist?list=PL_bDvITUYucC8uLRtWw8fdvVr3DdwzAeH<\/a><\/li>\n<\/ul>\n\n<p><strong>Five Things I Learned at CVPR 2016<\/strong><\/p>\n\n<ul>\n  <li>day 1: <a href=\"https:\/\/gab41.lab41.org\/all-your-questions-answered-cvpr-day-1-40f488103076#.ejrgol28h\">https:\/\/gab41.lab41.org\/all-your-questions-answered-cvpr-day-1-40f488103076#.ejrgol28h<\/a><\/li>\n  <li>day 2: <a href=\"https:\/\/gab41.lab41.org\/the-sounds-of-cvpr-day-2-f33a3625cbf3#.nifea1blu\">https:\/\/gab41.lab41.org\/the-sounds-of-cvpr-day-2-f33a3625cbf3#.nifea1blu<\/a><\/li>\n  <li>day 3: <a href=\"https:\/\/gab41.lab41.org\/animated-gifs-and-video-clips-cvpr-day-3-96fdcfc36e2c#.x9wd86lym\">https:\/\/gab41.lab41.org\/animated-gifs-and-video-clips-cvpr-day-3-96fdcfc36e2c#.x9wd86lym<\/a><\/li>\n  <li>day 4: <a href=\"https:\/\/gab41.lab41.org\/caption-this-cvpr-day-4-8fe94d7aeb71#.rhzd3zg5j\">https:\/\/gab41.lab41.org\/caption-this-cvpr-day-4-8fe94d7aeb71#.rhzd3zg5j<\/a><\/li>\n  <li>day 5: <a href=\"https:\/\/gab41.lab41.org\/five-things-i-learned-at-cvpr-2016-5e857c017f7b#.umag6vs3v\">https:\/\/gab41.lab41.org\/five-things-i-learned-at-cvpr-2016-5e857c017f7b#.umag6vs3v<\/a><\/li>\n<\/ul>\n\n<h1 id=\"valse-2016\">VALSE 2016<\/h1>\n\n<p><strong>VALSE 2016<\/strong><\/p>\n\n<p><a 
href=\"http:\/\/mclab.eic.hust.edu.cn\/valse2016\/program.html\">http:\/\/mclab.eic.hust.edu.cn\/valse2016\/program.html<\/a><\/p>\n\n<p><strong>Science: Table of Contents: Artificial Intelligence<\/strong><\/p>\n\n<p><a href=\"http:\/\/science.sciencemag.org\/content\/349\/6245.toc\">http:\/\/science.sciencemag.org\/content\/349\/6245.toc<\/a><\/p>\n\n<p><strong>Deep Learning and the Future of AI<\/strong><\/p>\n\n<ul>\n  <li>author: by Prof. Yann LeCun (Director of AI Research at Facebook &amp; Professor at NYU)<\/li>\n  <li>homapage: <a href=\"http:\/\/indico.cern.ch\/event\/510372\/\">http:\/\/indico.cern.ch\/event\/510372\/<\/a><\/li>\n  <li>slides: <a href=\"http:\/\/indico.cern.ch\/event\/510372\/attachments\/1245509\/1840815\/lecun-20160324-cern.pdf\">http:\/\/indico.cern.ch\/event\/510372\/attachments\/1245509\/1840815\/lecun-20160324-cern.pdf<\/a><\/li>\n<\/ul>\n\n<h1 id=\"icml-2015\">ICML 2015<\/h1>\n\n<p><strong>Video Recordings of the ICML\u201915 Deep Learning Workshop<\/strong><\/p>\n\n<ul>\n  <li>homepage: <a href=\"http:\/\/dpkingma.com\/?page_id=483\">http:\/\/dpkingma.com\/?page_id=483<\/a><\/li>\n  <li>youtube: <a href=\"https:\/\/www.youtube.com\/playlist?list=PLdH9u0f1XKW8cUM3vIVjnpBfk_FKzviCu\">https:\/\/www.youtube.com\/playlist?list=PLdH9u0f1XKW8cUM3vIVjnpBfk_FKzviCu<\/a><\/li>\n<\/ul>\n\n<h1 id=\"iccv-2015\">ICCV 2015<\/h1>\n\n<p><strong>International Conference on Computer Vision (ICCV) 2015, Santiago<\/strong><\/p>\n\n<p><a href=\"http:\/\/videolectures.net\/iccv2015_santiago\/\">http:\/\/videolectures.net\/iccv2015_santiago\/<\/a><\/p>\n\n<p><strong>ICCV 2015 Tutorial on Tools for Efficient Object Detection<\/strong><\/p>\n\n<p><a href=\"http:\/\/mp7.watson.ibm.com\/ICCV2015\/ObjectDetectionICCV2015.html\">http:\/\/mp7.watson.ibm.com\/ICCV2015\/ObjectDetectionICCV2015.html<\/a><\/p>\n\n<p><strong>ICCV 2015 Tutorials<\/strong><\/p>\n\n<p><a href=\"http:\/\/pamitc.org\/iccv15\/tutorials.php\">http:\/\/pamitc.org\/iccv15\/tutorials.php<\/a><\/p>\n\n<p><strong>ICCV 2015 Tutorial on Tools for Efficient Object Detection<\/strong><\/p>\n\n<p><a href=\"http:\/\/mp7.watson.ibm.com\/ICCV2015\/ObjectDetectionICCV2015.html\">http:\/\/mp7.watson.ibm.com\/ICCV2015\/ObjectDetectionICCV2015.html<\/a><\/p>\n\n<h1 id=\"imagenet--coco-joint-workshop\">ImageNet + COCO Joint Workshop<\/h1>\n\n<p><strong>ImageNet and MS COCO Visual Recognition Challenges Joint Workshop<\/strong><\/p>\n\n<p><a href=\"http:\/\/image-net.org\/challenges\/ilsvrc+mscoco2015\">http:\/\/image-net.org\/challenges\/ilsvrc+mscoco2015<\/a><\/p>\n\n<p><strong>OpenAI: Some thoughts, mostly questions<\/strong><\/p>\n\n<p><a href=\"https:\/\/medium.com\/@kleinsound\/openai-some-thoughts-mostly-questions-30fb63d53ef0#.32u1yt6oy\">https:\/\/medium.com\/@kleinsound\/openai-some-thoughts-mostly-questions-30fb63d53ef0#.32u1yt6oy<\/a><\/p>\n\n<p><strong>OpenAI \u2014 quick thoughts<\/strong><\/p>\n\n<p><a href=\"http:\/\/wp.goertzel.org\/openai-quick-thoughts\/\">http:\/\/wp.goertzel.org\/openai-quick-thoughts\/<\/a><\/p>\n\n<h1 id=\"nips-2015\">NIPS 2015<\/h1>\n\n<p><strong>NIPS 2015 workshop on non-convex optimization<\/strong><\/p>\n\n<p><a href=\"http:\/\/www.offconvex.org\/2016\/01\/25\/non-convex-workshop\/\">http:\/\/www.offconvex.org\/2016\/01\/25\/non-convex-workshop\/<\/a><\/p>\n\n<p><strong>10 Deep Learning Trends at NIPS 2015<\/strong><\/p>\n\n<p><a 
href=\"http:\/\/codinginparadise.org\/ebooks\/html\/blog\/ten_deep_learning_trends_at_nips_2015.html\">http:\/\/codinginparadise.org\/ebooks\/html\/blog\/ten_deep_learning_trends_at_nips_2015.html<\/a><\/p>\n\n<p><strong>NIPS 2015 \u2013 Deep RL Workshop<\/strong><\/p>\n\n<p><a href=\"https:\/\/gridworld.wordpress.com\/2015\/12\/13\/nips-2015-deep-rl-workshop\/\">https:\/\/gridworld.wordpress.com\/2015\/12\/13\/nips-2015-deep-rl-workshop\/<\/a><\/p>\n\n<p><strong>My takeaways from NIPS 2015<\/strong><\/p>\n\n<ul>\n  <li>blog: <a href=\"http:\/\/www.danvk.org\/2015\/12\/12\/nips-2015.html\">http:\/\/www.danvk.org\/2015\/12\/12\/nips-2015.html<\/a><\/li>\n<\/ul>\n\n<p><strong>On the spirit of NIPS 2015 and OpenAI<\/strong><\/p>\n\n<ul>\n  <li>blog: <a href=\"https:\/\/blogs.princeton.edu\/imabandit\/2015\/12\/13\/on-the-spirit-of-nips-2015-and-openai\/\">https:\/\/blogs.princeton.edu\/imabandit\/2015\/12\/13\/on-the-spirit-of-nips-2015-and-openai\/<\/a><\/li>\n<\/ul>\n\n<p><strong>NIPS 2015<\/strong><\/p>\n\n<ul>\n  <li>Part 1: <a href=\"https:\/\/memming.wordpress.com\/2015\/12\/07\/nips-2015-part-1\/\">https:\/\/memming.wordpress.com\/2015\/12\/07\/nips-2015-part-1\/<\/a><\/li>\n  <li>Part 2: <a href=\"https:\/\/memming.wordpress.com\/2015\/12\/09\/nips-2015-part-2\/\">https:\/\/memming.wordpress.com\/2015\/12\/09\/nips-2015-part-2\/<\/a><\/li>\n<\/ul>\n\n<p><strong>Deep Learning - NIPS\u20192015 Tutorial (By Geoff Hinton, Yoshua Bengio &amp; Yann LeCun)<\/strong><\/p>\n\n<ul>\n  <li>slides: <a href=\"http:\/\/www.iro.umontreal.ca\/~bengioy\/talks\/DL-Tutorial-NIPS2015.pdf\">http:\/\/www.iro.umontreal.ca\/~bengioy\/talks\/DL-Tutorial-NIPS2015.pdf<\/a><\/li>\n<\/ul>\n\n<p><strong>NIPS 2015 Posner Lecture \u2013 Zoubin Ghahramani: Probabilistic Machine Learning<\/strong><\/p>\n\n<p><a href=\"https:\/\/gridworld.wordpress.com\/2015\/12\/08\/nips-2015-posner-lecture-zoubin-ghahramani\/\">https:\/\/gridworld.wordpress.com\/2015\/12\/08\/nips-2015-posner-lecture-zoubin-ghahramani\/<\/a><\/p>\n\n<p><strong>NIPS 2015 Deep Learning Tutorial Notes<\/strong><\/p>\n\n<p><a href=\"http:\/\/jatwood.org\/blog\/nips-deep-learning-tutorial.html\">http:\/\/jatwood.org\/blog\/nips-deep-learning-tutorial.html<\/a><\/p>\n\n<h1 id=\"dlss-2015\">DLSS 2015<\/h1>\n\n<p><strong>26 Things I Learned in the Deep Learning Summer School<\/strong><\/p>\n\n<p><a href=\"http:\/\/www.marekrei.com\/blog\/26-things-i-learned-in-the-deep-learning-summer-school\/\">http:\/\/www.marekrei.com\/blog\/26-things-i-learned-in-the-deep-learning-summer-school\/<\/a> <br \/>\n<a href=\"http:\/\/www.csdn.net\/article\/2015-09-16\/2825716\">http:\/\/www.csdn.net\/article\/2015-09-16\/2825716<\/a><\/p>\n\n<p><strong>Deep Learning Summer School 2015<\/strong><\/p>\n\n<ul>\n  <li>homepage: <a href=\"https:\/\/sites.google.com\/site\/deeplearningsummerschool\/schedule\">https:\/\/sites.google.com\/site\/deeplearningsummerschool\/schedule<\/a><\/li>\n  <li>slides: <a href=\"http:\/\/docs.huihoo.com\/deep-learning\/deeplearningsummerschool\/2015\/\">http:\/\/docs.huihoo.com\/deep-learning\/deeplearningsummerschool\/2015\/<\/a><\/li>\n  <li>github: <a href=\"https:\/\/github.com\/mila-udem\/summerschool2015\">https:\/\/github.com\/mila-udem\/summerschool2015<\/a><\/li>\n<\/ul>\n\n<h1 id=\"iclr-2015\">ICLR 2015<\/h1>\n\n<p><strong>Conference Schedule<\/strong><\/p>\n\n<p><a 
href=\"http:\/\/www.iclr.cc\/doku.php?id=iclr2015:main&amp;utm_content=buffer0b339&amp;utm_campaign=buffer#conference_schedule\">http:\/\/www.iclr.cc\/doku.php?id=iclr2015:main&amp;utm_content=buffer0b339&amp;utm_campaign=buffer#conference_schedule<\/a><\/p>\n\n<h1 id=\"cvpr-2014\">CVPR 2014<\/h1>\n\n<p><strong>TUTORIAL ON DEEP LEARNING FOR VISION<\/strong><\/p>\n\n<p><a href=\"https:\/\/sites.google.com\/site\/deeplearningcvpr2014\/\">https:\/\/sites.google.com\/site\/deeplearningcvpr2014\/<\/a><\/p>\n"},{"title":"Courses","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/study\/2017\/11\/28\/courses.html"}},"updated":"2017-11-28T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/study\/2017\/11\/28\/courses","content":"<p><strong>CS 007: PERSONAL FINANCE FOR ENGINEERS<\/strong><\/p>\n\n<ul>\n  <li>intro: Stanford University 2017-8<\/li>\n  <li>homepage: <a href=\"https:\/\/cs007.blog\/\">https:\/\/cs007.blog\/<\/a><\/li>\n<\/ul>\n"},{"title":"PyInstsaller and Others","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/programming_study\/2016\/12\/24\/pyinstaller-and-others.html"}},"updated":"2016-12-24T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/programming_study\/2016\/12\/24\/pyinstaller-and-others","content":"<h1 id=\"quick-introduction\">Quick introduction<\/h1>\n\n<p>I recently need to convert one Python program into binary mode program. \nThat is, you don\u2019t want to expose any of your source code, data files, \nonly one binary executable file will be provided.<\/p>\n\n<p><a href=\"http:\/\/www.pyinstaller.org\/\">PyInstaller<\/a> is a fairly good choice to use, \nand can work on many platforms like Linux, Windows, etc.<\/p>\n\n<p>You can check out its official git repository at \n<a href=\"https:\/\/github.com\/pyinstaller\/pyinstaller\">https:\/\/github.com\/pyinstaller\/pyinstaller<\/a>.<\/p>\n\n<p>It is recommended that first try out its officially, stable release \u2013 \nbut when something weird come just around, you can turn to the github dev branch for help \u2013 actually that is what I did.<\/p>\n\n<h1 id=\"hidden-import\">hidden-import<\/h1>\n\n<p>There are 2 basic ways to process Python scripts. 
I chose to run pyinstaller.py directly,\nalthough you can use a <em>spec<\/em> file if you want.<\/p>\n\n<p>When building Python scripts, you will probably get some build errors telling you that certain Python packages cannot be imported,\nlike:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ImportError: The 'packaging' package is required\n<\/code><\/pre><\/div><\/div>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ImportError: No module named core_cy\n<\/code><\/pre><\/div><\/div>\n\n<p>I might explain this in the future, but to put it simply, some Python packages need to be \u201chidden-imported\u201d to get around this issue.\nSo now we can set up a basic build script to help our work:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>\/path\/to\/git\/pyinstaller\/pyinstaller.py \\\n    --onefile \\\n    --hidden-import=skimage.io \\\n    --hidden-import=skimage.transform \\\n    --hidden-import=skimage.filter.rank.core_cy \\\n    --hidden-import=packaging \\\n    --hidden-import=packaging.version \\\n    --hidden-import=packaging.specifiers \\\n    --hidden-import=packaging.requirements \\\n    --hidden-import=scipy.linalg \\\n    --hidden-import=scipy.linalg.cython_blas \\\n    --hidden-import=scipy.linalg.cython_lapack \\\n    --hidden-import=scipy.ndimage \\\n    --hidden-import=skimage._shared.interpolation \\\n    --hidden-import=google.protobuf.internal \\\n    --hidden-import=google.protobuf.internal.enum_type_wrapper \\\n    --hidden-import=google.protobuf.descriptor \\\n    target_program.py\n<\/code><\/pre><\/div><\/div>\n\n
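<p>If you go the <em>spec<\/em> file route instead, the same hidden imports can be declared in the spec\u2019s Analysis step. Below is a trimmed, hypothetical sketch \u2013 PyInstaller generates the full template next to your script on the first build, and the import list here is just a sample of the one above:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code># target_program.spec -- trimmed sketch, not a full generated spec\na = Analysis(['target_program.py'],\n             pathex=['.'],\n             hiddenimports=['packaging',\n                            'packaging.version',\n                            'skimage.io',\n                            'scipy.linalg'])\npyz = PYZ(a.pure)\nexe = EXE(pyz,\n          a.scripts,\n          a.binaries,\n          a.zipfiles,\n          a.datas,\n          name='target_program',\n          console=True)\n<\/code><\/pre><\/div><\/div>\n\n<p>Then build with <code class=\"language-plaintext highlighter-rouge\">\/path\/to\/git\/pyinstaller\/pyinstaller.py target_program.spec<\/code>.<\/p>\n\n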
<h1 id=\"what-is-wrong-with-mkl\">What is wrong with MKL<\/h1>\n\n<p>One weird error I met was this Intel MKL fatal error:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.\n<\/code><\/pre><\/div><\/div>\n\n<p>Since I use anaconda, I found MKL already installed in the anaconda install location \nand could find these two files easily, but the error still popped up.\nIf I remember correctly, the solution is even weirder:\nsimply update numpy to the latest version:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>conda update numpy\n<\/code><\/pre><\/div><\/div>\n\n<p>or:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>conda install linux-64_numpy-1.11.2-py27_0.tar.bz2\n<\/code><\/pre><\/div><\/div>\n\n<p>I don\u2019t know exactly what happened, but it looks like it was fixed. Hmm\u2026<\/p>\n\n<h1 id=\"add-data-and-_meipass\">\u2013add-data and _MEIPASS<\/h1>\n\n<p>PyInstaller can also bundle data files into your program. When the bundled app runs, \nit will load these data files from a different location.\nHere is a helper function to locate your data files:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>import os\nimport sys\n\ndef resource_path(relative):\n    if getattr(sys, 'frozen', False):\n        # we are running in a PyInstaller bundle\n        bundle_dir = sys._MEIPASS\n    else:\n        # we are running in a normal Python environment\n        bundle_dir = os.path.dirname(os.path.abspath(__file__))\n\n    return os.path.join(bundle_dir, relative)\n<\/code><\/pre><\/div><\/div>\n\n<p>You can put your data files in your local directory, \nbut you need to refer to them in the Python script in the right way:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>target_file = resource_path('target_data_file1')\n<\/code><\/pre><\/div><\/div>\n\n<p>In the build script, you need to configure the data files or folders. Note that the destination after the \u201c:\u201d is a folder inside the bundle, and that the \u201c:\u201d separator is for Linux\/OS X; on Windows PyInstaller expects \u201c;\u201d:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>    --add-data=\"target_data_file1:.\" \\\n    --add-data=\"target_data_file2:.\" \\\n    --add-data=\"folder1\/sub_folder1\/target_data_file3:folder1\/sub_folder1\" \\\n<\/code><\/pre><\/div><\/div>\n\n
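<p>At runtime the bundled file can then be opened through the helper above, whether the program runs from source or from the one-file executable \u2013 a tiny usage sketch reusing the <code class=\"language-plaintext highlighter-rouge\">target_data_file1<\/code> name from the flags above:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code># works both from source and from the one-file bundle,\n# because resource_path() picks the right base directory\nwith open(resource_path('target_data_file1')) as f:\n    print(f.read())\n<\/code><\/pre><\/div><\/div>\n\n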
<h1 id=\"missing-libs\">Missing libs<\/h1>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>    --add-binary=\"libgfortran.so.1:lib\" \\\n<\/code><\/pre><\/div><\/div>\n\n<p>The build error told me one <em>.so<\/em> file was required, so I just added it.<\/p>\n\n<h1 id=\"config-pythonpath\">Config PYTHONPATH<\/h1>\n\n<p>Some of your Python scripts might depend on relative paths, \nso you will need to put these dependencies into the build script:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>--paths=\"..\/dependency_folder\" \\\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"continue-tackling-weird-stuffs\">Continue tackling weird stuffs<\/h1>\n\n<p>Until now, this sounds like an easy task.\nBut what happened next consumed about two days of my time \u2013 I wish I had known how to avoid it :-(<\/p>\n\n<p>My Python project includes a Caffe module which runs a simple image classification process.\nOne basic function is <a href=\"https:\/\/github.com\/BVLC\/caffe\">Caffe<\/a> calling skimage.io to load an image:<\/p>\n\n<p><a href=\"https:\/\/github.com\/BVLC\/caffe\/blob\/master\/python\/caffe\/io.py\">https:\/\/github.com\/BVLC\/caffe\/blob\/master\/python\/caffe\/io.py<\/a><\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>def load_image(filename, color=True):\n    img = skimage.img_as_float(skimage.io.imread(filename, as_grey=not color)).astype(np.float32)\n    if img.ndim == 2:\n        img = img[:, :, np.newaxis]\n        if color:\n            img = np.tile(img, (1, 1, 3))\n    elif img.shape[2] == 4:\n        img = img[:, :, :3]\n    return img\n<\/code><\/pre><\/div><\/div>\n\n<p>I wonder whether PyInstaller currently has good support for the skimage package.\nFrom what I know so far, it doesn\u2019t.<\/p>\n\n<p>Run from the Python source files, it works fine. But when I packed everything into one single binary file,\nit could not load images at all. After debugging and googling for a long time \u2013 \nI always thought maybe I had done something wrong \u2013 I gave up on it. PyInstaller hates skimage! \nSo in the end I used cv2 instead, and it works smoothly.<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>import cv2\nimport numpy as np\n\ndef cv2_load_image(filename, color=True):\n    # cv2.imread returns an HxWx3 BGR image by default\n    img = cv2.imread(filename).astype(np.float32) \/ 255\n    if img.ndim == 3:\n        # reverse the channel order (BGR to RGB) to match skimage\n        img[:, :, :] = img[:, :, 2::-1]\n\n    if img.ndim == 2:\n        img = img[:, :, np.newaxis]\n        if color:\n            img = np.tile(img, (1, 1, 3))\n    elif img.shape[2] == 4:\n        img = img[:, :, :3]\n    return img\n<\/code><\/pre><\/div><\/div>\n\n<p>For all the details above, please do check out the PyInstaller documentation: \n<a href=\"https:\/\/media.readthedocs.org\/pdf\/pyinstaller\/latest\/pyinstaller.pdf\">https:\/\/media.readthedocs.org\/pdf\/pyinstaller\/latest\/pyinstaller.pdf<\/a><\/p>\n\n<h1 id=\"looks-like-we-make-it\">Looks like we made it!<\/h1>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>\/path\/to\/git\/pyinstaller\/pyinstaller.py \\\n    --onefile \\\n    --hidden-import=skimage.io \\\n    --hidden-import=skimage.transform \\\n    --hidden-import=skimage.filter.rank.core_cy \\\n    --hidden-import=packaging \\\n    --hidden-import=packaging.version \\\n    --hidden-import=packaging.specifiers \\\n    --hidden-import=packaging.requirements \\\n    --hidden-import=scipy.linalg \\\n    --hidden-import=scipy.linalg.cython_blas \\\n    --hidden-import=scipy.linalg.cython_lapack \\\n    --hidden-import=scipy.ndimage \\\n    --hidden-import=skimage._shared.interpolation \\\n    --hidden-import=google.protobuf.internal \\\n    --hidden-import=google.protobuf.internal.enum_type_wrapper \\\n    --hidden-import=google.protobuf.descriptor \\\n    --add-binary=\"libgfortran.so.1:lib\" \\\n    --add-data=\"target_data_file1:.\" \\\n    --add-data=\"target_data_file2:.\" \\\n    --add-data=\"folder1\/sub_folder1\/target_data_file3:folder1\/sub_folder1\" \\\n    --paths=\"..\/dependency_folder\" \\\n    target_program.py\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"misc\">Misc<\/h1>\n\n<p>I just found a simple method to read\/write binary files in Python: \nuse cPickle to dump data to a file in binary format.<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>import cPickle\n\na = ('img_path1', 1111, 222.222, 333, 444, 555, 6666)\nb = ('img_path2', 777, 88.8888, 9999, 1010, 1111, 1212)\nc = []\nc.append(a)\nc.append(b)\n\nwith open('wb_txt', 'wb') as f:\n    cPickle.dump(c, f, cPickle.HIGHEST_PROTOCOL)\n\nwith open('wb_txt', 'rb') as f:\n    data = cPickle.load(f)\n    print data\n<\/code><\/pre><\/div><\/div>\n\n
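<p>Note that cPickle is Python 2 only; on Python 3 the same idea works with the built-in <code class=\"language-plaintext highlighter-rouge\">pickle<\/code> module. A minimal sketch:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>import pickle\n\nrecords = [('img_path1', 1111, 222.222), ('img_path2', 777, 88.8888)]\n\n# dump to a binary file, then load it back\nwith open('records.bin', 'wb') as f:\n    pickle.dump(records, f, pickle.HIGHEST_PROTOCOL)\n\nwith open('records.bin', 'rb') as f:\n    print(pickle.load(f))\n<\/code><\/pre><\/div><\/div>\n\n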
class=\"highlight\"><pre class=\"highlight\"><code>static DWORD WINAPI ThreadFunc(LPVOID lpParameter);\n<\/code><\/pre><\/div><\/div>\n\n<p>Pass a <code class=\"language-plaintext highlighter-rouge\">this<\/code> pointer to thread function:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>HANDLE hThread = CreateThread(NULL, 0, ThreadFunc, this, 0, NULL);\n<\/code><\/pre><\/div><\/div>\n\n<p>In the thread function definition:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>DWORD WINAPI CMFCDemoDlg::ThreadFunc(LPVOID lpParameter)\n{\n    \/\/convert lpParameter to class pointer type\n    CMFCDemoDlg* pMfcDemo = (CMFCDemoDlg*)lpParameter;\n\n    \/\/ Now you can reference the CMFCDemoDlg class members\n    ......\n}\n<\/code><\/pre><\/div><\/div>\n"},{"title":"Add Lunr Search Plugin For Blog","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/web_dev\/2016\/07\/31\/add-lunr-search-plugin-for-blog.html"}},"updated":"2016-07-31T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/web_dev\/2016\/07\/31\/add-lunr-search-plugin-for-blog","content":"<p>I decided to add a full-text search plugin to my blog:<\/p>\n\n<p><a href=\"https:\/\/github.com\/slashdotdash\/jekyll-lunr-js-search\">https:\/\/github.com\/slashdotdash\/jekyll-lunr-js-search<\/a> .<\/p>\n\n<p>Although it should be an easy work, there are still some rules I think are somewhat crucial to follow (for me..).<\/p>\n\n<p>First rule: DO NOT try to do this on Windows.<\/p>\n\n<p>On windows (and OS X), you can not even manage to gem install therubyracer, which is essential component required by jekyll-lunr-js-search. \nSee my previous post:<\/p>\n\n<p><a href=\"http:\/\/handong1587.github.io\/web_dev\/2016\/07\/03\/install-therubyracer.html\">http:\/\/handong1587.github.io\/web_dev\/2016\/07\/03\/install-therubyracer.html<\/a><\/p>\n\n<p>Keep yourself aware that you don\u2019t include jQuery twice. It can really cause all sorts of issues.<\/p>\n\n<p>This post explains in a more detail:<\/p>\n\n<p><strong>Double referencing jQuery deletes all assigned plugins.<\/strong><\/p>\n\n<p><a href=\"https:\/\/bugs.jquery.com\/ticket\/10066\">https:\/\/bugs.jquery.com\/ticket\/10066<\/a><\/p>\n\n<p>It kept me receiving one wired error like:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>TypeError: $(...).lunrSearch is not a function\n<\/code><\/pre><\/div><\/div>\n\n<p>and took me a long time to find out why this happened.<\/p>\n\n<p>For a newbie like me who <em>know nothing at all<\/em> about front-end web development, \nall the work become trial and error, and google plus stackoverflow. 
So great \u2013 now it works.<\/p>\n\n<p>Thanks to <em>My Chemical Romance<\/em> for helping me through those tough debugging nights!<\/p>\n"},{"title":"vsftpd Commands","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/linux_study\/2016\/07\/28\/vsftpd-cmd.html"}},"updated":"2016-07-28T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/linux_study\/2016\/07\/28\/vsftpd-cmd","content":"<p>FTP commands are among the commands Internet users use most frequently; whether you use FTP under DOS or UNIX, you will run into a large number of FTP internal commands.\nBeing familiar with these internal commands and applying them flexibly can make life much easier for the user. \nThe FTP command line format is: ftp -v -d -i -n -g [hostname], where -v displays all response messages from the remote server; \n-i turns off interactive prompting during multi-file transfers; \n-n disables auto-login, i.e. the .netrc file is not used; \n-d enables debug mode; \n-g disables file-name globbing.<\/p>\n\n<p>The internal commands used by ftp are as follows (square brackets indicate optional items):<\/p>\n\n<p>1. ![cmd [args]]: execute an interactive shell on the local machine; exit returns to the ftp environment, e.g.: !ls *.zip.<\/p>\n\n<p>2. $ macro-name [args]: execute the macro definition macro-name.<\/p>\n\n<p>3. account [password]: supply the supplemental password required for access to system resources after a successful login to the remote system.<\/p>\n\n<p>4. append local-file [remote-file]: append a local file to a file on the remote host; if no remote file name is given, the local file name is used.<\/p>\n\n<p>5. ascii: use ASCII transfer mode.<\/p>\n\n<p>6. bell: ring the bell once after each command finishes.<\/p>\n\n<p>7. bin: use binary transfer mode.<\/p>\n\n<p>8. bye: exit the ftp session.<\/p>\n\n<p>9. case: when using mget, convert uppercase letters in remote file names to lowercase.<\/p>\n\n<p>10. <code class=\"language-plaintext highlighter-rouge\">cd remote-dir<\/code>: change to directory remote-dir on the remote host.<\/p>\n\n<p>11. cdup: change to the parent directory of the current remote directory.<\/p>\n\n<p>12. <code class=\"language-plaintext highlighter-rouge\">chmod mode file-name<\/code>: set the access mode of remote file file-name to mode, e.g.: <code class=\"language-plaintext highlighter-rouge\">chmod 777 a.out<\/code>.<\/p>\n\n<p>13. close: end the ftp session with the remote server (the counterpart of open).<\/p>\n\n<p>14. cr: when transferring files in ASCII mode, convert carriage-return\/line-feed sequences to a single line-feed.<\/p>\n\n
highlighter-rouge\">delete remote-file<\/code>\uff1a\u5220\u9664\u8fdc\u7a0b\u4e3b\u673a\u6587\u4ef6\u3002<\/p>\n\n<p>16.debug [debug-value]\uff1a\u8bbe\u7f6e\u8c03\u8bd5\u65b9\u5f0f\uff0c\u663e\u793a\u53d1\u9001\u81f3\u8fdc\u7a0b\u4e3b\u673a\u7684\u6bcf\u6761\u547d\u4ee4\uff0c\u5982\uff1adeb up 3\uff0c\u82e5\u8bbe\u4e3a0\uff0c\u8868\u793a\u53d6\u6d88debug\u3002<\/p>\n\n<p>17.<code class=\"language-plaintext highlighter-rouge\">dir [remote-dir] [local-file]<\/code>\uff1a\u663e\u793a\u8fdc\u7a0b\u4e3b\u673a\u76ee\u5f55\uff0c\u5e76\u5c06\u7ed3\u679c\u5b58\u5165\u672c\u5730\u6587\u4ef6local-file\u3002<\/p>\n\n<p>18.disconnection\uff1a\u540cclose\u3002<\/p>\n\n<p>19.form format\uff1a\u5c06\u6587\u4ef6\u4f20\u8f93\u65b9\u5f0f\u8bbe\u7f6e\u4e3aformat\uff0c\u7f3a\u7701\u4e3afile\u65b9\u5f0f\u3002<\/p>\n\n<p>20.<code class=\"language-plaintext highlighter-rouge\">get remote-file [local-file]<\/code>\uff1a\u5c06\u8fdc\u7a0b\u4e3b\u673a\u7684\u6587\u4ef6remote-file\u4f20\u81f3\u672c\u5730\u786c\u76d8\u7684local-file\u3002<\/p>\n\n<p>21.glob\uff1a\u8bbe\u7f6emdelete\uff0cmget\uff0cmput\u7684\u6587\u4ef6\u540d\u6269\u5c55\uff0c\u7f3a\u7701\u65f6\u4e0d\u6269\u5c55\u6587\u4ef6\u540d\uff0c\u540c\u547d\u4ee4\u884c\u7684-g\u53c2\u6570\u3002<\/p>\n\n<p>22.hash\uff1a\u6bcf\u4f20\u8f931024\u5b57\u8282\uff0c\u663e\u793a\u4e00\u4e2ahash\u7b26\u53f7(#)\u3002<\/p>\n\n<p>23.help [cmd]\uff1a\u663e\u793aftp\u5185\u90e8\u547d\u4ee4cmd\u7684\u5e2e\u52a9\u4fe1\u606f\uff0c\u5982\uff1ahelp get\u3002<\/p>\n\n<p>24.idle [seconds]\uff1a\u5c06\u8fdc\u7a0b\u670d\u52a1\u5668\u7684\u4f11\u7720\u8ba1\u65f6\u5668\u8bbe\u4e3a[seconds]\u79d2\u3002<\/p>\n\n<p>25.image\uff1a\u8bbe\u7f6e\u4e8c\u8fdb\u5236\u4f20\u8f93\u65b9\u5f0f(\u540cbinary)\u3002<\/p>\n\n<p>26.lcd [dir]\uff1a\u5c06\u672c\u5730\u5de5\u4f5c\u76ee\u5f55\u5207\u6362\u81f3dir\u3002<\/p>\n\n<p>27.<code class=\"language-plaintext highlighter-rouge\">ls [remote-dir] [local-file]<\/code>\uff1a\u663e\u793a\u8fdc\u7a0b\u76ee\u5f55remote-dir\uff0c\u5e76\u5b58\u5165\u672c\u5730\u6587\u4ef6local-file\u3002<\/p>\n\n<p>28.macdef macro-name\uff1a\u5b9a\u4e49\u4e00\u4e2a\u5b8f\uff0c\u9047\u5230macdef\u4e0b\u7684\u7a7a\u884c\u65f6\uff0c\u5b8f\u5b9a\u4e49\u7ed3\u675f\u3002<\/p>\n\n<p>29.<code class=\"language-plaintext highlighter-rouge\">mdelete [remote-file]<\/code>\uff1a\u5220\u9664\u8fdc\u7a0b\u4e3b\u673a\u6587\u4ef6\u3002<\/p>\n\n<p>30.<code class=\"language-plaintext highlighter-rouge\">mdir remote-files local-file<\/code>\uff1a\u4e0edir\u7c7b\u4f3c\uff0c\u4f46\u53ef\u6307\u5b9a\u591a\u4e2a\u8fdc\u7a0b\u6587\u4ef6\uff0c\u5982\uff1amdir <em>.o.<\/em>.zipoutfile\u3002<\/p>\n\n<p>31.<code class=\"language-plaintext highlighter-rouge\">mget remote-files<\/code>\uff1a\u4f20\u8f93\u591a\u4e2a\u8fdc\u7a0b\u6587\u4ef6\u3002<\/p>\n\n<p>32.<code class=\"language-plaintext highlighter-rouge\">mkdir dir-name<\/code>\uff1a\u5728\u8fdc\u7a0b\u4e3b\u673a\u4e2d\u5efa\u4e00\u76ee\u5f55\u3002<\/p>\n\n<p>33.<code class=\"language-plaintext highlighter-rouge\">mls remote-file local-file<\/code>\uff1a\u540cnlist\uff0c\u4f46\u53ef\u6307\u5b9a\u591a\u4e2a\u6587\u4ef6\u540d\u3002<\/p>\n\n<p>34.mode [modename]\uff1a\u5c06\u6587\u4ef6\u4f20\u8f93\u65b9\u5f0f\u8bbe\u7f6e\u4e3amodename\uff0c\u7f3a\u7701\u4e3astream\u65b9\u5f0f\u3002<\/p>\n\n<p>35.modtime file-name\uff1a\u663e\u793a\u8fdc\u7a0b\u4e3b\u673a\u6587\u4ef6\u7684\u6700\u540e\u4fee\u6539\u65f6\u95f4\u3002<\/p>\n\n<p>36.mput local-file\uff1a\u5c06\u591a\u4e2a\u6587\u4ef6\u4f20\u8f93\u81f3\u8fdc\u7a0b\u4e3b\u673a\u3002<\/p>\n\n<p>37.newer 
<p>37. newer file-name: transfer file-name only if its modification time on the remote host is more recent than that of the file of the same name on the local disk.<\/p>\n\n<p>38. nlist [remote-dir] [local-file]: list the file names in a remote directory and, optionally, save the list to the local file local-file.<\/p>\n\n<p>39. nmap [inpattern outpattern]: set a file-name mapping mechanism so that certain characters in file names are converted during transfer, e.g.: nmap $1.$2.$3 [$1,$2].[$2,$3]; transferring the file a1.a2.a3 then produces the name a1,a2. This command is especially useful when the remote host is a non-UNIX machine.<\/p>\n\n<p>40. ntrans [inchars [outchars]]: set a character translation mechanism for file names, e.g. ntrans LR, so that the file name LLL becomes RRR.<\/p>\n\n<p>41. open host [port]: open a connection to the given ftp server, optionally on the given port.<\/p>\n\n<p>42. passive: enter passive transfer mode.<\/p>\n\n<p>43. prompt: toggle interactive prompting for multi-file transfers.<\/p>\n\n<p>44. proxy ftp-cmd: execute an ftp command on a secondary control connection; this allows connecting to two ftp servers in order to transfer files between them. The first proxy command must be open, to establish the secondary connection first.<\/p>\n\n<p>45. <code class=\"language-plaintext highlighter-rouge\">put local-file [remote-file]<\/code>: transfer the local file local-file to the remote host.<\/p>\n\n<p>46. pwd: display the current working directory of the remote host.<\/p>\n\n<p>47. quit: same as bye; exit the ftp session.<\/p>\n\n<p>48. quote arg1, arg2, ...: send the arguments verbatim to the remote ftp server, e.g.: quote syst.<\/p>\n\n<p>49. <code class=\"language-plaintext highlighter-rouge\">recv remote-file [local-file]<\/code>: same as get.<\/p>\n\n<p>50. <code class=\"language-plaintext highlighter-rouge\">reget remote-file [local-file]<\/code>: like get, but if local-file exists, resume the transfer from where it was previously interrupted.<\/p>\n\n<p>51. rhelp [cmd-name]: request help from the remote host.<\/p>\n\n<p>52. rstatus [file-name]: with no file name, display the status of the remote host; otherwise display the status of the named file.<\/p>\n\n<p>53. <code class=\"language-plaintext highlighter-rouge\">rename [from] [to]<\/code>: rename a file on the remote host.<\/p>\n\n<p>54. reset: clear the reply queue.<\/p>\n\n<p>55. restart marker: restart the following get or put at the given marker, e.g.: restart 130.<\/p>\n\n
class=\"language-plaintext highlighter-rouge\">rmdir dir-name<\/code>\uff1a\u5220\u9664\u8fdc\u7a0b\u4e3b\u673a\u76ee\u5f55\u3002<\/p>\n\n<p>57.runique\uff1a\u8bbe\u7f6e\u6587\u4ef6\u540d\u552f\u4e00\u6027\u5b58\u50a8\uff0c\u82e5\u6587\u4ef6\u5b58\u5728\uff0c\u5219\u5728\u539f\u6587\u4ef6\u540e\u52a0\u540e\u7f00 ..1\uff0c.2\u7b49\u3002<\/p>\n\n<p>58.send local-file[remote-file]\uff1a\u540cput\u3002<\/p>\n\n<p>59.sendport\uff1a\u8bbe\u7f6ePORT\u547d\u4ee4\u7684\u4f7f\u7528\u3002<\/p>\n\n<p>60.site arg1\uff0carg2\u2026\uff1a\u5c06\u53c2\u6570\u4f5c\u4e3aSITE\u547d\u4ee4\u9010\u5b57\u53d1\u9001\u81f3\u8fdc\u7a0bftp\u4e3b\u673a\u3002<\/p>\n\n<p>61.<code class=\"language-plaintext highlighter-rouge\">size file-name<\/code>\uff1a\u663e\u793a\u8fdc\u7a0b\u4e3b\u673a\u6587\u4ef6\u5927\u5c0f\uff0c\u5982\uff1asite idle 7200\u3002<\/p>\n\n<p>62.<code class=\"language-plaintext highlighter-rouge\">status<\/code>\uff1a\u663e\u793a\u5f53\u524dftp\u72b6\u6001\u3002<\/p>\n\n<p>63.struct [struct-name]\uff1a\u5c06\u6587\u4ef6\u4f20\u8f93\u7ed3\u6784\u8bbe\u7f6e\u4e3astruct-name\uff0c\u7f3a\u7701\u65f6\u4f7f\u7528stream\u7ed3\u6784\u3002<\/p>\n\n<p>64.sunique\uff1a\u5c06\u8fdc\u7a0b\u4e3b\u673a\u6587\u4ef6\u540d\u5b58\u50a8\u8bbe\u7f6e\u4e3a\u552f\u4e00(\u4e0erunique\u5bf9\u5e94)\u3002<\/p>\n\n<p>65.system\uff1a\u663e\u793a\u8fdc\u7a0b\u4e3b\u673a\u7684\u64cd\u4f5c\u7cfb\u7edf\u7c7b\u578b\u3002<\/p>\n\n<p>66.tenex\uff1a\u5c06\u6587\u4ef6\u4f20\u8f93\u7c7b\u578b\u8bbe\u7f6e\u4e3aTENEX\u673a\u7684\u6240\u9700\u7684\u7c7b\u578b\u3002<\/p>\n\n<p>67.tick\uff1a\u8bbe\u7f6e\u4f20\u8f93\u65f6\u7684\u5b57\u8282\u8ba1\u6570\u5668\u3002<\/p>\n\n<p>68.trace\uff1a\u8bbe\u7f6e\u5305\u8ddf\u8e2a\u3002<\/p>\n\n<p>69.type [type-name]\uff1a\u8bbe\u7f6e\u6587\u4ef6\u4f20\u8f93\u7c7b\u578b\u4e3atype-name\uff0c\u7f3a\u7701\u4e3aascii\uff0c\u5982\uff1atype binary\uff0c\u8bbe\u7f6e\u4e8c\u8fdb\u5236\u4f20\u8f93\u65b9\u5f0f\u3002<\/p>\n\n<p>70.umask [newmask]\uff1a\u5c06\u8fdc\u7a0b\u670d\u52a1\u5668\u7684\u7f3a\u7701umask\u8bbe\u7f6e\u4e3anewmask\uff0c\u5982\uff1aumask 3\u3002<\/p>\n\n<p>71.<code class=\"language-plaintext highlighter-rouge\">user user-name [password] [account]<\/code>\uff1a\u5411\u8fdc\u7a0b\u4e3b\u673a\u8868\u660e\u81ea\u5df1\u7684\u8eab\u4efd\uff0c\u9700\u8981\u53e3\u4ee4\u65f6\uff0c\u5fc5\u987b\u8f93\u5165\u53e3\u4ee4\uff0c\u5982\uff1auser anonymous my@email\u3002<\/p>\n\n<p>72.verbose\uff1a\u540c\u547d\u4ee4\u884c\u7684-v\u53c2\u6570\uff0c\u5373\u8bbe\u7f6e\u8be6\u5c3d\u62a5\u544a\u65b9\u5f0f\uff0cftp\u670d\u52a1\u5668\u7684\u6240\u6709 \u54cd\u5e94\u90fd\u5c06\u663e\u793a\u7ed9\u7528\u6237\uff0c\u7f3a\u7701\u4e3aon.<\/p>\n\n<p>73.?[cmd]\uff1a\u540chelp.<\/p>\n\n<h1 id=\"ref\">Ref<\/h1>\n\n<p><a href=\"http:\/\/www.jb51.net\/os\/RedHat\/1133.html\">http:\/\/www.jb51.net\/os\/RedHat\/1133.html<\/a><\/p>\n"},{"title":"Setup vsftpd on Ubuntu 14.10","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/linux_study\/2016\/07\/27\/setup-vsftpd.html"}},"updated":"2016-07-27T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/linux_study\/2016\/07\/27\/setup-vsftpd","content":"<h1 id=\"setup-vsftpd\">Setup vsftpd<\/h1>\n\n<p>Install vsftpd:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo apt-get install vsftpd\n<\/code><\/pre><\/div><\/div>\n\n<p>Check if vsftpd installed successfully:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo service vsftpd 
<h1 id=\"ref\">Ref<\/h1>\n\n<p><a href=\"http:\/\/www.jb51.net\/os\/RedHat\/1133.html\">http:\/\/www.jb51.net\/os\/RedHat\/1133.html<\/a><\/p>\n"},{"title":"Setup vsftpd on Ubuntu 14.10","link":{"@attributes":{"href":"https:\/\/handong1587.github.io\/linux_study\/2016\/07\/27\/setup-vsftpd.html"}},"updated":"2016-07-27T00:00:00+00:00","id":"https:\/\/handong1587.github.io\/linux_study\/2016\/07\/27\/setup-vsftpd","content":"<h1 id=\"setup-vsftpd\">Setup vsftpd<\/h1>\n\n<p>Install vsftpd:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo apt-get install vsftpd\n<\/code><\/pre><\/div><\/div>\n\n<p>Check whether vsftpd was installed successfully:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo service vsftpd status\n<\/code><\/pre><\/div><\/div>\n\n<p>Add <code class=\"language-plaintext highlighter-rouge\">\/data\/jinbin.lin\/uftp<\/code> as the user home directory:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo mkdir \/data\/jinbin.lin\/uftp\n<\/code><\/pre><\/div><\/div>\n\n<p>Add the user <code class=\"language-plaintext highlighter-rouge\">uftp<\/code> with that home directory:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo useradd -d \/data\/jinbin.lin\/uftp -s \/bin\/bash uftp\n<\/code><\/pre><\/div><\/div>\n\n<p>Set the user password (you need to enter the password twice):<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo passwd uftp\n<\/code><\/pre><\/div><\/div>\n\n<p>Edit the vsftpd configuration file:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>\/etc\/vsftpd.conf\n<\/code><\/pre><\/div><\/div>\n\n<p>Add the following lines at the end of <code class=\"language-plaintext highlighter-rouge\">vsftpd.conf<\/code>:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>userlist_deny=NO\nuserlist_enable=YES\nuserlist_file=\/etc\/allowed_users\n<\/code><\/pre><\/div><\/div>\n\n<p>Modify the following configurations:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>local_enable=YES\nwrite_enable=YES\n<\/code><\/pre><\/div><\/div>\n\n<p>Edit <code class=\"language-plaintext highlighter-rouge\">\/etc\/allowed_users<\/code> and add the username: uftp.<\/p>\n\n<p>Check the file <code class=\"language-plaintext highlighter-rouge\">\/etc\/ftpusers<\/code> and delete <code class=\"language-plaintext highlighter-rouge\">uftp<\/code> if it appears there. \nThis file records the usernames that are forbidden to access the FTP server.<\/p>\n\n<p>Restart vsftpd:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo service vsftpd restart\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"close-ftp-server\">Close FTP server<\/h1>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo service vsftpd stop\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"visit-ftp-server\">Visit FTP server<\/h1>\n\n<p>(By default, the anonymous user is disabled.)<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ftp:\/\/user:password@hostname\/\n<\/code><\/pre><\/div><\/div>\n\n
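<p>A quick way to verify the setup from another machine is a scripted login with Python\u2019s standard ftplib (the host name and password here are placeholders):<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>from ftplib import FTP\n\n# placeholders: point this at the machine running vsftpd\nftp = FTP('hostname')\n# expect a reply like '230 Login successful.'\nprint(ftp.login('uftp', 'password'))\nftp.quit()\n<\/code><\/pre><\/div><\/div>\n\n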
<h1 id=\"forbid-user-access-top-level-directory\">Forbid users from accessing the top-level directory<\/h1>\n\n<p>Create the file <code class=\"language-plaintext highlighter-rouge\">vsftpd.chroot_list<\/code> but don\u2019t add anything to it:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo touch \/etc\/vsftpd.chroot_list\n<\/code><\/pre><\/div><\/div>\n\n<p>Modify the configurations as follows:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>chroot_local_user=YES\nchroot_list_enable=NO\nchroot_list_file=\/etc\/vsftpd.chroot_list\n<\/code><\/pre><\/div><\/div>\n\n<p>If you want write permission in the user home directory (otherwise you will meet this error at login: \n\u201c500 OOPS: vsftpd: refusing to run with writable root inside chroot ()\u201d), add:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>allow_writeable_chroot=YES\n<\/code><\/pre><\/div><\/div>\n\n<p>Restart vsftpd:<\/p>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>sudo service vsftpd restart\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"does-not-allow-the-user-to-change-the-specified-chroot_list_file-root\">Restrict only the users listed in chroot_list_file to their home directories<\/h1>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>chroot_local_user=NO\nchroot_list_enable=YES\nchroot_list_file=\/etc\/vsftpd.chroot_list\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"allows-only-specified-users-to-change-chroot_list_file-root\">Exempt only the users listed in chroot_list_file from the home-directory restriction<\/h1>\n\n<div class=\"language-plaintext highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>chroot_local_user=YES\nchroot_list_enable=YES\nchroot_list_file=\/etc\/vsftpd.chroot_list\n<\/code><\/pre><\/div><\/div>\n\n<h1 id=\"frequently-used-command\">Frequently used commands<\/h1>\n\n<p><code class=\"language-plaintext highlighter-rouge\">mkdir<\/code><\/p>\n\n<p><code class=\"language-plaintext highlighter-rouge\">dir<\/code> or <code class=\"language-plaintext highlighter-rouge\">ls<\/code><\/p>\n\n<p><code class=\"language-plaintext highlighter-rouge\">put<\/code><\/p>\n\n<p><code class=\"language-plaintext highlighter-rouge\">get<\/code><\/p>\n\n<h1 id=\"refs\">Refs<\/h1>\n\n<p><strong>How to Install and Configure vsftpd on Ubuntu 14.04 LTS<\/strong><\/p>\n\n
href=\"http:\/\/www.liquidweb.com\/kb\/how-to-install-and-configure-vsftpd-on-ubuntu-14-04-lts\/\">http:\/\/www.liquidweb.com\/kb\/how-to-install-and-configure-vsftpd-on-ubuntu-14-04-lts\/<\/a><\/p>\n\n<p><strong>vsftpd \u914d\u7f6e:chroot_local_user\u4e0echroot_list_enable\u8be6\u89e3<\/strong><\/p>\n\n<p><a href=\"http:\/\/blog.csdn.net\/bluishglc\/article\/details\/42398811\">http:\/\/blog.csdn.net\/bluishglc\/article\/details\/42398811<\/a><\/p>\n"}]}