The paper that started this line of work:
Karol Gregor and Yann LeCun, "Learning Fast Approximations of Sparse Coding," International Conference on Machine Learning (ICML), 2010.
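For intuition: ISTA iterates x^{k+1} = soft(W_e b + S x^k) with W_e = A^T/L and S = I - A^T A/L, and LISTA unrolls K such iterations into a feed-forward net whose matrices and thresholds are learned from data. A minimal NumPy sketch of the forward pass (my illustration, not the authors' code):

```python
import numpy as np

def soft_threshold(v, theta):
    # elementwise soft-thresholding, the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista_forward(b, We, S, theta, K):
    """Forward pass of a K-layer LISTA net (Gregor & LeCun, 2010).

    b     : observed signal, shape (m,)
    We, S : per-layer weight matrices (lists of length K)
    theta : per-layer thresholds (list of length K)
    In training, We, S, theta are learned by backprop; here we only
    show the unrolled computation.
    """
    x = soft_threshold(We[0] @ b, theta[0])
    for k in range(1, K):
        x = soft_threshold(We[k] @ b + S[k] @ x, theta[k])
    return x

def ista_init(A, lam, K):
    # ISTA-style initialization: with step 1/L (L = largest eigenvalue
    # of A^T A), these choices make lista_forward reproduce plain ISTA.
    L = np.linalg.norm(A, 2) ** 2
    We = [A.T / L] * K
    S = [np.eye(A.shape[1]) - A.T @ A / L] * K
    theta = [lam / L] * K
    return We, S, theta
```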
On the benefit deep learning can bring to the hard case of compressed sensing where the dictionary has correlated columns:
Bo Xin, Yizhou Wang, Wen Gao, Baoyuan Wang, and David Wipf, "Maximal Sparsity with Deep Networks?," Advances in Neural Information Processing Systems (NIPS), 2016
Hao He, Bo Xin, Satoshi Ikehata, and David Wipf, "From Bayesian Sparsity to Gated Recurrent Nets," Advances in Neural Information Processing Systems (NIPS), 2017.
Applications in image processing:
Yan Yang, Jian Sun, Huibin Li, and Zongben Xu, "Deep ADMM-Net for Compressive Sensing MRI," Advances in Neural Information Processing Systems (NIPS), 2016. (A revised journal version appeared in PAMI; Frank-Wolfe has likewise been turned into a network.)
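Roughly, that paper unrolls ADMM for the CS-MRI model min_x (1/2)||Ax - y||^2 + Σ_l λ_l ||D_l x||_1 and learns the filters D_l, penalties ρ_l, and shrinkage functions end to end; schematically (my paraphrase of the setup), one stage computes

$$ x^{(n)} = \Big(A^{H}A + \sum_{l}\rho_l D_l^{\top}D_l\Big)^{-1}\Big(A^{H}y + \sum_{l}\rho_l D_l^{\top}\big(z_l^{(n-1)} - \beta_l^{(n-1)}\big)\Big), $$
$$ z_l^{(n)} = \mathcal{S}\big(D_l x^{(n)} + \beta_l^{(n-1)};\ \lambda_l/\rho_l\big), \qquad \beta_l^{(n)} = \beta_l^{(n-1)} + \eta_l\big(D_l x^{(n)} - z_l^{(n)}\big), $$

with S a soft-thresholding that the network replaces by a learned piecewise-linear nonlinearity.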
Convergence guarantees:
Xiaohan Chen*, Jialin Liu*, Zhangyang Wang, Wotao Yin. “Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds.” Advances in Neural Information Processing Systems (NIPS), 2018
Risheng Liu, Shichao Cheng, Yi He, Xin Fan, Zhouchen Lin, and Zhongxuan Luo, "On the Convergence of Learning-based Iterative Methods for Nonconvex Inverse Problems," arXiv:1808.05331, 2018.
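The gist of the first paper: writing one LISTA layer as x^{k+1} = η_{θ^k}(W_1^k b + W_2^k x^k), the authors show that convergence to the true sparse code forces the weights to couple asymptotically,

$$ W_2^k \;\longrightarrow\; I - W_1^k A \quad (k \to \infty), $$

and that the coupled parameterization x^{k+1} = η_{θ^k}(x^k + (W^k)^{\top}(b - A x^k)) attains a linear convergence rate.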
Under strong assumptions (i.e., on very simple tasks), learning is not necessary: hand-designed weights can already be optimal:
Jialin Liu*, Xiaohan Chen*, Zhangyang Wang, Wotao Yin. “ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA.” In Proceedings of International Conference on Learning Representations (ICLR), 2019
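Concretely, ALISTA learns only per-layer scalars (a step size γ^k and a threshold θ^k) and takes the weight matrix analytically, as I recall the paper's coherence-minimization formulation:

$$ \tilde{W} \in \arg\min_{W\in\mathbb{R}^{m\times n}} \big\|W^{\top}A\big\|_F^2 \quad \text{s.t.}\quad W_{:,i}^{\top}A_{:,i} = 1,\ \ i=1,\dots,n, $$

after which each layer is x^{k+1} = η_{θ^k}(x^k - γ^k \tilde{W}^{\top}(A x^k - b)).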
Related, if one also takes into account Weijie Su's work:
Weijie Su, Stephen Boyd, and Emmanuel Candès, "A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights," Advances in Neural Information Processing Systems (NIPS), 2014 (journal version in JMLR, 2016).
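Their central object is the continuous-time limit of Nesterov's accelerated scheme:

$$ \ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0, \qquad X(0) = x_0,\ \ \dot{X}(0) = 0, $$

with Nesterov's iterates tracking X(k√s) for gradient step size s.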
This is the same idea behind the recently popular neural ODEs.
Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong, "Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations," Thirty-fifth International Conference on Machine Learning (ICML), 2018.
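The bridge that paper (and the neural-ODE line) builds, in one line: a residual block x_{k+1} = x_k + f(x_k) is a forward-Euler step for dx/dt = f(x), so architectures correspond to numerical schemes. A toy sketch (assumed example, not the paper's code):

```python
import numpy as np

def residual_stack(x, f, K, h=1.0):
    # K residual blocks x <- x + h*f(x) = K forward-Euler steps of dx/dt = f(x)
    for _ in range(K):
        x = x + h * f(x)
    return x

# Shrinking h with K*h = T fixed makes the stack approximate the ODE flow at
# time T; richer schemes (linear multistep, Runge-Kutta) map to other blocks.
f = lambda x: np.tanh(x) - x                  # stand-in for a trained block
print(residual_stack(np.ones(4), f, K=1000, h=0.01))
```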
Control, optimization, and deep learning ought to be deeply connected.
Viewing optimization through the lens of control:
L. Lessard, B. Recht, and A. Packard. Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 26(1):57–95, 2016.
Namely, optimization is only the most special case of a gradient flow; many phenomena in deep learning and control are likely not gradient flows at all.
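To spell that out: gradient descent x_{k+1} = x_k - η∇f(x_k) is forward Euler applied to the gradient flow

$$ \dot{x}(t) = -\nabla f\big(x(t)\big), $$

whereas momentum dynamics like the Nesterov ODE above are second order, and closed-loop systems in control generally follow vector fields that are not the gradient of any potential; that is exactly where a control-theoretic analysis such as the IQC framework above becomes useful.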