"Machine Learning"의 두 판 사이의 차이
ph
잔글 (→general) |
잔글 |
||
(같은 사용자의 중간 판 33개는 보이지 않습니다) | |||
1번째 줄: | 1번째 줄: | ||
==after AI==
* [[cot연구동향|CoT research trends]]

==by themes==
* [[Recommendation]]
* [[Face]]
* [[Person re-identification]]
* [[segmentation]]

==posts==
* [[conv1d]]
* [[naive gradient descent]]
* [https://www.notion.so/nll-loss-57987c7b5f7342e4b6bc929c9b3be587 nll loss] (a short PyTorch sketch follows this list)
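
Since the nll loss entry above is only an external link, here is a minimal PyTorch sketch of the usual pairing (the tensor values are made up for illustration): <code>F.nll_loss</code> expects log-probabilities, so it is combined with <code>log_softmax</code>; the two steps together are exactly what <code>F.cross_entropy</code> fuses into one call.

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

# Toy batch: 2 samples, 3 classes (illustrative values only).
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5,  0.3]])
targets = torch.tensor([0, 1])

# nll_loss expects log-probabilities, not raw logits.
log_probs = F.log_softmax(logits, dim=1)
print(F.nll_loss(log_probs, targets))    # mean NLL over the batch
print(F.cross_entropy(logits, targets))  # identical: log_softmax + nll_loss fused
</syntaxhighlight>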

==ril==
* https://medium.com/technologymadeeasy/the-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8
* http://nmhkahn.github.io/Casestudy-CNN
* https://stats.stackexchange.com/questions/205150/how-do-bottleneck-architectures-work-in-neural-networks
* https://www.quora.com/What-exactly-is-the-degradation-problem-that-Deep-Residual-Networks-try-to-alleviate
* dl with torch
* If the bias is folded into the weight matrix, how is the inverse computed? Doesn't D become 0? Or is the inverse simply not needed? (see the NumPy sketch after this list)
* [https://github.com/pytorch/examples pytorch examples]
* [https://arxiv.org/abs/1603.05027 Identity Mappings in Deep Residual Networks] arXiv:1603.05027
* [http://www.fast.ai/2017/07/28/deep-learning-part-two-launch/ Cutting Edge Deep Learning for Coders]
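
On the inverse question in the list above: a small NumPy sketch (with made-up values) of folding the bias into an augmented weight matrix. The determinant of the augmented matrix equals det(W), so it is zero only when W itself is singular; absorbing the bias does not by itself make the map non-invertible.

<syntaxhighlight lang="python">
import numpy as np

# Fold bias b into W via a homogeneous (augmented) matrix:
# W_aug maps [x, 1] -> [W x + b, 1].
W = np.array([[2.0, 0.0],
              [1.0, 3.0]])
b = np.array([0.5, -1.0])

W_aug = np.eye(3)
W_aug[:2, :2] = W
W_aug[:2, 2] = b

# det(W_aug) == det(W): both print 6.0 here.
print(np.linalg.det(W), np.linalg.det(W_aug))

# The inverse exists whenever W is invertible, and it undoes the affine map.
x = np.array([1.0, 2.0, 1.0])    # homogeneous input [x1, x2, 1]
y = W_aug @ x
print(np.linalg.inv(W_aug) @ y)  # recovers [1. 2. 1.]
</syntaxhighlight>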

==nets==
* [[1118_Convolutional_Neural_Network|CNN]]
* [[AlexNet]]
* [[dilated cnn]]
* [[pathnet]]
* [[ResNet]]
* [[Fast RCNN]]
* [[Faster RCNN]]
* [[R-FCN]]
* [[GoogLeNet|Inception]] (GoogLeNet)
* [[fully convolutional networks]]
* [[FractalNets]]
* [[highway networks]]
* [[Memory networks]]
* [[DenseNet]]
* [[Network in Network|NIN]]
* [[Deeply Supervised Network|DSN]]
* [[Ladder Networks]]
* [[Deeply-Fused Nets|DFNs]]
* [[YOLO]]

==general==
* [[0811 Affinity Propagation|Affinity Propagation]], [[Apcluster sparse]]
* <del>[[CUDA기타설치|CUDA and misc installation]]</del>
* [[Learning to learn by GD by GD]]
* [[Generative Models]] (GAN, VAE, etc.)
* [[Batch Normalization]]
* [[Mean Average Precision]]
* [https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5 Essential Cheat Sheets for Machine Learning and Deep Learning Engineers]
* [http://fa.bianp.net/blog/2014/surrogate-loss-functions-in-machine-learning/ What is a surrogate loss?]
* [[Exponential Linear Unit]]
* [[Neural net이 working하지 않는 37가지 이유|37 reasons your neural net is not working]]
* [[deconvolution]]
* [[Sparse coding]]
* [http://mxnet.io/model_zoo/index.html MXNet Model Zoo]
* [[logistic regression]]
* [[0926 information bottleneck|information bottleneck]]
* [[0928 Artificial Curiosity|Artificial Curiosity]]
* sklearn examples (a minimal sketch follows this list)
** [[sklearn preprocessing]]
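
A minimal sketch for the sklearn examples entry above (the data values are made up): <code>StandardScaler</code> learns per-feature statistics on the training split and reuses them on new data, which is the core pattern behind most of <code>sklearn.preprocessing</code>.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.preprocessing import StandardScaler

# StandardScaler learns per-feature mean/std on the training split only,
# then applies those same statistics to new data.
X_train = np.array([[1.0, 200.0],
                    [2.0, 300.0],
                    [3.0, 400.0]])
X_test = np.array([[2.5, 250.0]])

scaler = StandardScaler().fit(X_train)
print(scaler.transform(X_train))  # each column: zero mean, unit variance
print(scaler.transform(X_test))   # scaled with the *training* statistics
</syntaxhighlight>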

==x==
* [[Isolation Forest]] (a minimal sketch below)
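
A minimal scikit-learn sketch of [[Isolation Forest]] on toy data (values made up): anomalies are separated by fewer random splits, so they end up with shorter average path lengths and lower scores.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.ensemble import IsolationForest

# Anomalies are isolated by fewer random splits, so they get shorter
# average path lengths and lower anomaly scores.
rng = np.random.RandomState(0)
X = rng.normal(0.0, 1.0, size=(100, 2))      # inliers around the origin
X_out = np.array([[4.0, 4.0], [-5.0, 3.0]])  # two obvious outliers

clf = IsolationForest(random_state=0).fit(X)
print(clf.predict(X_out))        # -1 = anomaly, +1 = inlier
print(clf.score_samples(X_out))  # lower = more anomalous
</syntaxhighlight>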