Difference between revisions of "Machine Learning"
Revision as of 00:49, 17 July 2017 (Mon)
ril
- https://medium.com/technologymadeeasy/the-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8
- http://nmhkahn.github.io/Casestudy-CNN
- https://stats.stackexchange.com/questions/205150/how-do-bottleneck-architectures-work-in-neural-networks
- https://www.quora.com/What-exactly-is-the-degradation-problem-that-Deep-Residual-Networks-try-to-alleviate
- Deep learning with Torch
- If we tack the bias onto the design matrix, how do we compute the inverse? Doesn't the determinant become 0? Or do we not need the inverse at all?
- pytorch examples
- Identity Mappings in Deep Residual Networks arxiv 2016
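On the bias/inverse question above: appending a column of ones to the design matrix X does not by itself make XᵀX singular; det(XᵀX) = 0 only when the columns of X are linearly dependent (for example, a feature that is itself constant, duplicating the bias column). A minimal pure-Python sketch with toy data (the data and variable names are made up for illustration):

```python
# Least squares with an explicit bias column.
# Toy data (assumed): y = 2x + 1, so the fit should recover
# intercept 1 and slope 2.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]

# Design matrix with bias column: rows [1, x].
X = [[1.0, x] for x in xs]

# Normal equations A w = b with A = X^T X (2x2 here), b = X^T y.
n = len(xs)
A = [[n, sum(xs)],
     [sum(xs), sum(x * x for x in xs)]]
b = [sum(ys), sum(x * y for x, y in zip(xs, ys))]

# det is nonzero unless x is constant (columns linearly dependent).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,   # bias (intercept)
     (-A[1][0] * b[0] + A[0][0] * b[1]) / det]  # slope
```

When some feature really is constant, XᵀX is singular and the plain inverse does not exist; in that case a pseudo-inverse (e.g. `numpy.linalg.pinv`) or `numpy.linalg.lstsq` gives the minimum-norm solution, so computing an explicit inverse is indeed unnecessary.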
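Regarding the degradation-problem and Identity Mappings links above: the pre-activation residual block moves activation (and normalization) before the weight layers, so the shortcut path carries the input through unchanged. A minimal NumPy forward-pass sketch, with normalization omitted and all function and weight names hypothetical:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def preact_residual_block(x, W1, W2):
    # Pre-activation ordering (activation before weights), as proposed in
    # "Identity Mappings in Deep Residual Networks"; BatchNorm omitted
    # to keep the sketch minimal.
    h = relu(x) @ W1
    h = relu(h) @ W2
    # Identity shortcut: the input is added back unchanged, so if the
    # residual branch outputs zero the whole block is an identity mapping,
    # and gradients flow through the shortcut without attenuation.
    return x + h

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# With zero weights the residual branch vanishes and the output equals x.
y = preact_residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

This is what makes very deep stacks trainable: each block only has to learn a perturbation of the identity rather than the full mapping.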