"Mxnet"의 두 판 사이의 차이
Latest revision as of 19:06, 4 July 2017 (Tue)
Why do people keep churning these frameworks out? Not that I'm a fan of TF either.
etc
- batch normalization example

  import mxnet as mx

  def ConvFactory(data, num_filter, kernel, stride=(1, 1), pad=(0, 0), name=None, suffix=''):
      # Convolution -> BatchNorm -> ReLU, a common convolutional building block
      conv = mx.sym.Convolution(data=data, num_filter=num_filter, kernel=kernel,
                                stride=stride, pad=pad,
                                name='conv_%s%s' % (name, suffix))
      bn = mx.sym.BatchNorm(data=conv, name='bn_%s%s' % (name, suffix))
      act = mx.sym.Activation(data=bn, act_type='relu',
                              name='relu_%s%s' % (name, suffix))
      return act

- feature extraction
  - Predict with pre-trained models: https://github.com/dmlc/mxnet/blob/master/docs/tutorials/python/predict_image.md
  - How do I fine-tune pre-trained models to a new dataset? http://mxnet.io/how_to/finetune.html#train
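The BatchNorm layer in the factory above normalizes each channel of the convolution output before the ReLU. As a plain-NumPy illustration of what that layer computes at inference time (this is a minimal sketch of the standard batch-norm formula, not MXNet's actual implementation; the function name and `eps` value are illustrative):

```python
import numpy as np

def batch_norm_infer(x, gamma, beta, moving_mean, moving_var, eps=1e-5):
    """Inference-time batch normalization over the channel axis.

    x: (N, C, H, W) activations; gamma, beta, moving_mean, moving_var: (C,)
    """
    # reshape the per-channel statistics so they broadcast over N, H, W
    shape = (1, -1, 1, 1)
    x_hat = (x - moving_mean.reshape(shape)) / np.sqrt(moving_var.reshape(shape) + eps)
    return gamma.reshape(shape) * x_hat + beta.reshape(shape)

# With gamma=1, beta=0 and the true statistics, each channel is standardized
x = np.random.RandomState(0).randn(8, 3, 4, 4).astype(np.float32)
mean = x.mean(axis=(0, 2, 3))
var = x.var(axis=(0, 2, 3))
y = batch_norm_infer(x, np.ones(3), np.zeros(3), mean, var)
print(np.allclose(y.mean(axis=(0, 2, 3)), 0.0, atol=1e-4))  # → True
```

During training, the scale `gamma` and shift `beta` are learned, and the running mean/variance are accumulated from the mini-batch statistics.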