DenseNet


arXiv:1608.06993

Abstract

Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output.

DenseNets alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.

Introduction

ResNets, Highway Networks, Stochastic Depth, FractalNets, and related architectures all share a key characteristic: they create short paths from early layers to later layers.

We connect all layers directly with each other: a network with \(L\) layers has \(\frac{L(L+1)}{2}\) direct connections, since each layer receives the feature maps of all preceding layers as well as the original input.
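
Concretely, the paper writes this connectivity as \(x_\ell = H_\ell([x_0, x_1, \ldots, x_{\ell-1}])\), where \([x_0, x_1, \ldots, x_{\ell-1}]\) denotes the concatenation of the feature maps produced by layers \(0, \ldots, \ell-1\) and \(H_\ell\) is the composite function (batch normalization, ReLU, convolution) of layer \(\ell\).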

[Figure: DenseBlock.png]

In contrast to ResNets, we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them.
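
The following is a minimal PyTorch sketch of this concatenation-based dense block, written for illustration only (the class names DenseLayer and DenseBlock, and the specific layer ordering, are assumptions, not the reference implementation). Each layer produces growth_rate feature maps and takes as input the concatenation of all preceding feature maps.

# Minimal dense block sketch (illustrative, not the official DenseNet code).
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    # One layer H_l: BN -> ReLU -> 3x3 convolution producing growth_rate maps.
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(self.relu(self.bn(x)))

class DenseBlock(nn.Module):
    # Each layer's input width grows by growth_rate per preceding layer.
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList([
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate all previous feature maps along the channel axis
            # (instead of summing them, as a ResNet shortcut would).
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Usage sketch: a 4-layer block on a 16-channel input with growth rate 12.
block = DenseBlock(num_layers=4, in_channels=16, growth_rate=12)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # 16 + 4 * 12 = 64 output channels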