Understanding Neural Networks from the Perspectives of Information Theory and Matrix Computation
According to the universal approximation theorem, a deep neural network is a universal function approximator. However, this view alone cannot explain why architectures such as RNNs, LSTMs, GRUs, and Transformers exist. This article attempts to understand deep neural networks from the perspectives of information theory and matrix computation.
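For reference, the classical single-hidden-layer form of the theorem (Cybenko, 1989; this formal statement is my addition, not spelled out in the original text) says: for any continuous $f$ on a compact set $K$ and any $\varepsilon > 0$, there exist $N$, weights $w_i$, $v_i$, and biases $b_i$ such that the one-hidden-layer network $g$ is uniformly $\varepsilon$-close to $f$:

```latex
% One-hidden-layer network with sigmoidal activation \sigma
g(x) = \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \lvert f(x) - g(x) \rvert < \varepsilon
```

Note that the theorem guarantees existence only; it says nothing about how many units are needed or how to find the weights, which is part of why architecture still matters.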
The key terms of machine learning theory can be stiff and hard to grasp, so we first unpack a few of them.
Hypothesis space: the setup of the learning task determines the hypothesis space, which in turn determines learnability and complexity.
Concept class: a pattern in the hypothesis space that can solve the task; it can also be understood as a function.
Generalization error is the error over the full data distribution, whereas empirical error is the error over the sampled data (formalized just below).
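As a concrete formulation (a standard textbook definition, added here for reference), for a hypothesis $f$, a loss function $\ell$, a data distribution $\mathcal{D}$, and a sample $S = \{(x_i, y_i)\}_{i=1}^{m}$ drawn from it:

```latex
% Generalization error: expected loss over the full distribution D
E(f;\mathcal{D}) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f(x), y)\big]

% Empirical error: average loss over the m sampled points
\widehat{E}(f;S) = \frac{1}{m}\sum_{i=1}^{m} \ell(f(x_i), y_i)
```

Learning theory is largely about bounding the gap between these two quantities.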
The success of deep neural networks in generalizing on natural data is inconsistent with classical notions of model complexity, and experiments show that they can fit arbitrary random data. The paper On the Spectral Bias of Neural Networks studies the expressivity of deep networks through Fourier analysis and finds that they are biased toward learning low-frequency functions, that is, functions whose variation is global rather than locally fluctuating. This property matches the observation that over-parameterized networks learn simple patterns first and therefore generalize well. The phenomenon is called spectral bias, and it shows up not only in the learning process but also in the model's parameterization.
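To make the low-frequency preference concrete, here is a minimal sketch of my own (not code from the paper; it assumes PyTorch is available): a small ReLU MLP is trained on a 1-D target mixing a low-frequency and a high-frequency sinusoid, and the printed errors let you watch the low-frequency component being fit first, typically well before the full target.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1-D target: a low-frequency plus a high-frequency sinusoid.
x = torch.linspace(0.0, 1.0, 256).unsqueeze(1)
low = torch.sin(2 * math.pi * 1 * x)    # low-frequency component
high = torch.sin(2 * math.pi * 10 * x)  # high-frequency component
y = low + high

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5001):
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            # MSE against the low-frequency component alone: in line
            # with spectral bias, this drops early in training, while
            # the full-target MSE stays higher until the network starts
            # fitting the high-frequency detail.
            err_low = ((model(x) - low) ** 2).mean().item()
        print(f"step {step:5d}  mse(low only)={err_low:.4f}  "
              f"mse(full)={loss.item():.4f}")
```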
Rich Sutton, The Bitter Lesson (March 13, 2019):
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available.