"Machine Learning"의 두 판 사이의 차이
by themes
posts
- conv1d
- naive gradient descent
- nll loss (https://www.notion.so/nll-loss-57987c7b5f7342e4b6bc929c9b3be587)
ril
- https://medium.com/technologymadeeasy/the-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8
- http://nmhkahn.github.io/Casestudy-CNN
- https://stats.stackexchange.com/questions/205150/how-do-bottleneck-architectures-work-in-neural-networks
- https://www.quora.com/What-exactly-is-the-degradation-problem-that-Deep-Residual-Networks-try-to-alleviate
- dl with torch
- If the bias is just tacked on, how is the inverse computed? Doesn't D become 0? Or does it not need to be computed at all? (see the sketch after this list)
- pytorch examples
- Identity Mappings in Deep Residual Networks (arXiv:1603.05027)
- cutting edge deep learning for coders
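The bias/inverse note above is presumably about ordinary least squares: appending a column of ones folds the bias into the design matrix, and what gets inverted is X^T X rather than X itself; when that product is singular (determinant 0, e.g. duplicated columns), the pseudoinverse still gives the minimum-norm solution. A minimal NumPy sketch under that assumption:

```python
import numpy as np

# Assumption: the note refers to ordinary least squares, where the bias
# is absorbed by appending a column of ones to the design matrix X.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 samples, 3 features
true_w, true_b = np.array([2.0, -1.0, 0.5]), 3.0
y = X @ true_w + true_b + 0.1 * rng.normal(size=100)

X_b = np.hstack([X, np.ones((len(X), 1))])         # bias column appended

# X_b itself is not square, so it is never inverted directly; the normal
# equations invert X_b.T @ X_b instead.
w_normal = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y

# If X_b.T @ X_b is singular (determinant 0, e.g. perfectly correlated
# columns), inv() fails, but the pseudoinverse still returns a solution.
w_pinv = np.linalg.pinv(X_b) @ y

print(w_normal)   # roughly [ 2. -1.  0.5  3. ], last entry is the bias
print(w_pinv)     # same result here
```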
nets
- CNN
- AlexNet
- dilated cnn
- pathnet
- ResNet
- Fast RCNN
- Faster RCNN
- R-FCN
- Inception (GoogLeNet)
- fully convolutional networks
- FractalNets
- highway networks
- Memory networks
- DenseNet
- NIN
- DSN
- Ladder Networks
- DFNs
- YOLO
general
- Affinity Propagation, Apcluster sparse
- CUDA and other installation
- Learning to learn by gradient descent by gradient descent
- Generative Models (GAN, VAE, etc)
- Batch Normalization
- Mean Average Precision
- Essential Cheat Sheets for Machine Learning and Deep Learning Engineers
- What is surrogate loss? (see the sketch after this list)
- Exponential Linear Unit
- 37 reasons why your neural net is not working
- deconvolution
- Sparse coding
- MXNet Model Zoo
- logistic regression
- information bottleneck
- Artificial Curiosity
- sklearn examples
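On the "What is surrogate loss?" item above: a surrogate loss is a tractable (typically convex, differentiable) stand-in for the loss you actually care about, most commonly an upper bound on the 0-1 classification loss, which is non-convex and gives no useful gradient. A minimal sketch, assuming binary labels in {-1, +1} and using the hinge loss (the SVM surrogate) for illustration:

```python
import numpy as np

# Surrogate-loss sketch: the 0-1 loss is flat almost everywhere, so training
# minimizes a convex upper bound on it instead (hinge, logistic, etc.).
# Labels are assumed to be in {-1, +1}; margin = y * f(x).

def zero_one_loss(margin):
    return (margin <= 0).astype(float)        # 1 if misclassified, else 0

def hinge_loss(margin):
    return np.maximum(0.0, 1.0 - margin)      # convex surrogate used by SVMs

margins = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(zero_one_loss(margins))   # [1. 1. 1. 0. 0.]
print(hinge_loss(margins))      # [3.  1.5 1.  0.5 0. ]
```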