Web Resource Links
Papers I am currently reading
Distributed Computing and Optimization
- Hao Yu's home page: two Parallel SGD papers at ICML 2019
- (NeurIPS 2018) A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication
- (NeurIPS 2018) Online Adaptive Methods, Universality and Acceleration
- Non-convex Optimization for Machine Learning: arXiv.org > stat > arXiv:1712.07897
- Blog: An overview of gradient descent optimization algorithms
- Synchronous SGD: A DAG Model of Synchronous Stochastic Gradient Descent in Distributed Deep Learning: arXiv.org > cs > arXiv:1805.03812
- DeepMind: TF-Replicator: Distributed Machine Learning for Researchers
- Seb Arnold (2016): An Introduction to Distributed Deep Learning
- Stanford optimization course: EE364a: Convex Optimization I
- Stanford, Boyd's convex optimization: Convex Optimization – Boyd and Vandenberghe
- Stanford mathematical optimization: Mathematical Optimization
- Optimization course: Introduction to Optimization Theory (2017), by Aaron Sidford at Stanford
- Optimization course (currently reading): IE 598 – BIG DATA OPTIMIZATION (Fall 2016)
- Notes and hands-on practice with the SVRG algorithm
- Nesterov Accelerated Gradient and Momentum
- Optimization for Deep Learning Highlights in 2017
- Synced (机器之心), getting started: an introduction to classic optimization algorithms for objective functions
- Yurii Nesterov: How to advance in Structural Convex Optimization
Google search: How to advance in structural convex optimization
- Wikipedia: Rate of convergence
- NVIDIA — apex:Tools for easy mixed precision and distributed training in Pytorch
- Stochastic Gradient Descent (v.2)
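Several of the links above (the Nesterov momentum post, the gradient-descent overview) revolve around the same look-ahead update rule. A minimal NumPy sketch of Nesterov accelerated gradient on a toy quadratic; the step size, momentum value, and function names are illustrative choices of mine, not from any of the linked sources:

```python
import numpy as np

def nag(grad, x0, lr=0.1, momentum=0.9, steps=100):
    """Nesterov accelerated gradient: evaluate the gradient at the
    look-ahead point x + momentum * v before updating the velocity."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x + momentum * v)   # look-ahead gradient
        v = momentum * v - lr * g
        x = x + v
    return x

# Toy quadratic f(x) = 0.5 * ||x||^2, so grad f(x) = x; minimum at 0.
x_min = nag(lambda x: x, x0=[5.0, -3.0])
```

The only difference from classical (heavy-ball) momentum is where the gradient is evaluated: at the look-ahead point rather than at the current iterate.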
Machine Learning
- Zhihu column: several hundred Python-related articles
- Introduction to Machine Learning (CS 590 and STAT 598A) Spring 2010
- Google Machine Learning Crash Course
- TensorFlow official tutorials
- Hua Xiaozhuan (华校专) — personal notes
- 6.S191: Introduction to Deep Learning
- Stanford course: CS 20: Tensorflow for Deep Learning Research
- Liu Jianping (Pinard), cnblogs: feature preprocessing for feature engineering
- CSDN — feature engineering with sklearn
- Meituan tech team: an overview of data cleaning and feature processing in machine learning
- Data science tutorials from Kaggle
- ApacheCN — scikit-learn — Gaussian mixture models
- Stanford course: CS 109: Probability for Computer Scientists
- PyTorch / examples / mnist
- PyTorch — Blog
- Sebastian Ruder's GitHub page
- Distributed Training in TensorFlow
- CSDN blog — Distributed TensorFlow (分布式 TensorFlow)
- CSDN blog — Distributed TensorFlow
- CSDN blog — TensorFlow distributed training
- Optimizer algorithms explained: see https://arxiv.org/pdf/1609.04747.pdf
- Carrson C. Fung: optim
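The distributed-TensorFlow posts above all center on the same synchronous data-parallel pattern: each worker computes a gradient on its data shard, the gradients are averaged (an all-reduce), and every replica applies the identical update. A toy NumPy simulation of that pattern for least squares; the worker count, learning rate, and epoch count are picked purely for illustration:

```python
import numpy as np

def sync_data_parallel_sgd(X, y, n_workers=4, lr=0.1, epochs=200):
    """Simulate synchronous data-parallel gradient descent: each
    worker holds one shard, gradients are averaged each step."""
    w = np.zeros(X.shape[1])
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    for _ in range(epochs):
        # Each worker's local least-squares gradient on its shard.
        grads = [Xi.T @ (Xi @ w - yi) / len(yi) for Xi, yi in shards]
        # "All-reduce": average the gradients, apply the same update.
        w -= lr * np.mean(grads, axis=0)
    return w

# Recover w* = [2, -1] from noiseless synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
w = sync_data_parallel_sgd(X, y)
```

With equal shard sizes the averaged gradient equals the full-batch gradient, which is exactly why synchronous data parallelism preserves the sequential algorithm's trajectory.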
Generative Adversarial Networks
- ScienceNet — Wang Feiyue: The new frontier of AI research: generative adversarial networks
Model Compression
- Zhihu bookmarks
- From Tencent: PocketFlow, Tencent AI Lab's open-source automated model-compression framework
- A survey of lightweight neural networks
- An overview of model compression
- Notes on Deep Compression, part 3: quantization
- Binary Neural Networks (BNN)
- XNOR-Net: binarized convolutional neural networks
- A collection of papers on model compression and acceleration
- A detailed comparison of model size, computation, and parameter counts for five CNN architectures
- Group Convolution, Depthwise Convolution, and Global Depthwise Convolution
- Convolutions | depthwise separable, grouped, dilated, and transposed (deconvolution) convolutions
- Deformable kernels and separable convolutions: ten remarkable convolution tricks in CNNs
- Speeding up convolution layers: Factorized Convolutional Neural Networks
- Tensor (image) sizes and parameter counts in CNNs (deep learning):
Baidu search: comparing parameter counts of CNN convolutional and fully connected layers
- Knowledge Distillation
- Lu Hou (侯璐): loss-aware quantization methods for neural networks
- Yihui He (何宜晖): Channel Pruning for Accelerating Very Deep Neural Networks
- AQN: compressing deep learning models and accelerating inference via alternating quantization
- CSDN blog: TensorFlow Quantization
- ICML 2018 notes on Quantized SGD, signSGD, etc.
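Several of the links above compare convolution variants by parameter count. A small sketch of the standard arithmetic behind depthwise separable convolutions (function names and the example layer shape are mine, chosen for illustration; biases are ignored):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution: every output
    channel has one k x k filter per input channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed
    by a 1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernels, 128 -> 256 channels.
standard = conv_params(128, 256, 3)                  # 9 * 128 * 256
separable = depthwise_separable_params(128, 256, 3)  # 1152 + 32768
ratio = standard / separable                         # roughly 8.7x fewer
```

The reduction factor approaches 1 / (1/c_out + 1/k^2), which is why depthwise separable layers figure so prominently in the lightweight-architecture surveys listed above.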
Miscellaneous
- Alex LEE's Blog — research advice
- [A collection of websites] including homepages of leading machine learning and deep learning researchers
- July Online (七月在线) CSDN blog
- July Online: from the Laplacian matrix to spectral clustering
- DeepMind: Relational inductive biases, deep learning, and graph networks
- WeChat article on GNNs: causal reasoning in deep learning
- R-related: BST 140.776 Statistical Computing
- Jeff Erickson:Algorithms
- Python decorators: PEP 318 -- Decorators for Functions and Methods
- CNKI paper: Self-adaptive algorithm for variational inequalities
- WeChat article: a long-form survey of generative adversarial networks (GANs)
- LIBSVM Data: Classification, Regression, and Multi-label
- Michael J. Neely's homepage
- NVIDIA: CUDA Toolkit Documentation
- Welcome to TeXstudio
- Wiki / NNM-Club: https://nnmclub.ro/
- Wang Shuyi (王树义): a beginner's guide to the graduation thesis
Algorithms and Programming
AI Conference Websites
- IJCAI 2019: https://www.ijcai19.org
- ICML 2019 Accepted Papers
- ICLR 2019 OpenReview Papers
- NeurIPS 2019 homepage