"Machine Learning"의 두 판 사이의 차이

ph
이동: 둘러보기, 검색
잔글 (Admin님이 Deep Learning 문서를 Learning 문서로 이동했습니다)
잔글
 
(같은 사용자의 중간 판 59개는 보이지 않습니다)
1번째 줄: 1번째 줄:
== after AI ==
* [[cot연구동향]] (CoT research trends)

==by themes==
* [[Recommendation]]
* [[Face]]
* [[Person re-identification]]
* [[segmentation]]

==posts==
* [[conv1d]] (see the combined sketch after this list)
* [[naive gradient descent]]
* [https://www.notion.so/nll-loss-57987c7b5f7342e4b6bc929c9b3be587 nll loss]

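The sketch promised above: a minimal example covering all three posts, assuming PyTorch; the shapes, dummy data, and learning rate are illustrative choices, not taken from the linked pages.

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

# conv1d: input layout is (batch, channels, length).
x = torch.randn(8, 4, 32)  # 8 sequences, 4 channels, length 32
conv = torch.nn.Conv1d(in_channels=4, out_channels=16, kernel_size=3, padding=1)
h = conv(x)                # -> (8, 16, 32); padding=1 preserves length

# nll loss expects log-probabilities, so pair it with log_softmax;
# the combination equals cross_entropy on raw logits.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)
assert torch.allclose(nll, F.cross_entropy(logits, targets))

# naive gradient descent: a hand-rolled update, no optimizer object.
w = torch.zeros(4, requires_grad=True)
for _ in range(100):
    loss = ((x.mean(dim=2) @ w - 1.0) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        w -= 0.1 * w.grad  # plain step: w <- w - lr * grad
        w.grad.zero_()
</syntaxhighlight>
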
==ril==
* https://medium.com/technologymadeeasy/the-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8
* http://nmhkahn.github.io/Casestudy-CNN
* https://stats.stackexchange.com/questions/205150/how-do-bottleneck-architectures-work-in-neural-networks
* https://www.quora.com/What-exactly-is-the-degradation-problem-that-Deep-Residual-Networks-try-to-alleviate
* dl with torch
* If the bias is folded into the weights, how do you compute the inverse? Doesn't D = 0? Or is the inverse not needed at all? (see the sketch after this list)
* [https://github.com/pytorch/examples pytorch examples]
* [https://arxiv.org/abs/1603.05027 Identity Mappings in Deep Residual Networks] arXiv:1603.05027
* [http://www.fast.ai/2017/07/28/deep-learning-part-two-launch/ cutting edge deep learning for coders]

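One reading of the question above, sketched below: in linear regression, "folding the bias in" means appending a column of ones to the design matrix, and the worry (reading D = 0 as determinant zero) is that the Gram matrix becomes singular. It does not, unless the ones column is linearly dependent on the features, and lstsq/pinv avoid the explicit inverse entirely. A minimal sketch with synthetic data, assuming NumPy; the coefficients are illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # 100 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0  # true bias = 3.0

# Fold the bias into the weights: append a column of ones.
Xb = np.hstack([X, np.ones((100, 1))])

# Normal equations: w = (Xb^T Xb)^{-1} Xb^T y. The ones column makes
# the Gram matrix singular only if some feature is itself constant.
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# The explicit inverse is never required: lstsq (pseudo-inverse based)
# also handles the rank-deficient case.
w2, *_ = np.linalg.lstsq(Xb, y, rcond=None)
assert np.allclose(w, w2)
print(w)  # last entry recovers the bias, ~3.0
</syntaxhighlight>
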
==nets==
* [[1118_Convolutional_Neural_Network|CNN]]
* [[AlexNet]]
* [[dilated cnn]]
* [[pathnet]]
* [[ResNet]] (see the residual-block sketch after this list)
* [[Fast RCNN]]
* [[Faster RCNN]]
* [[R-FCN]]
* [[GoogLeNet|Inception]] (GoogLeNet)
* [[fully convolutional networks]]
* [[FractalNets]]
* [[highway networks]]
* [[Memory networks]]
* [[DenseNet]]
* [[Network in Network|NIN]]
* [[Deeply Supervised Network|DSN]]
* [[Ladder Networks]]
* [[Deeply-Fused Nets|DFNs]]
* [[YOLO]]

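The residual-block sketch promised above. ResNet, and the Identity Mappings paper linked under ril, revolve around computing x + F(x) so the identity shortcut carries gradients unchanged. A minimal pre-activation block (the BN -> ReLU -> conv ordering argued for in arXiv:1603.05027), assuming PyTorch; channel count and kernel size are illustrative.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """Pre-activation residual block: two rounds of BN -> ReLU -> conv,
    with an identity shortcut added at the end."""
    def __init__(self, channels: int):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv1(torch.relu(self.bn1(x)))
        h = self.conv2(torch.relu(self.bn2(h)))
        return x + h  # identity shortcut

block = PreActResidualBlock(64)
out = block(torch.randn(2, 64, 16, 16))  # shape preserved: (2, 64, 16, 16)
</syntaxhighlight>
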
==general==
* [[0811 Affinity Propagation|Affinity Propagation]], [[Apcluster sparse]]
* <del>[[CUDA기타설치]]</del> (CUDA and misc installation)
* [[Learning to learn by GD by GD]]
* [[Generative Models]] (GAN, VAE, etc.)
* [[Batch Normalization]]
* [[Mean Average Precision]]
* [https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5 Essential Cheat Sheets for Machine Learning and Deep Learning Engineers]
* [http://fa.bianp.net/blog/2014/surrogate-loss-functions-in-machine-learning/ What is surrogate loss?]
* [[Exponential Linear Unit]]
* [[Neural net이 working하지 않는 37가지 이유]] (37 reasons a neural net isn't working)
* [[deconvolution]]
* [[Sparse coding]]
* [http://mxnet.io/model_zoo/index.html MXNet Model Zoo]
* [[logistic regression]]
* [[0926 information bottleneck|information bottleneck]]
* [[0928 Artificial Curiosity|Artificial Curiosity]]
* sklearn examples (see the pipeline sketch after this list)
** [[sklearn preprocessing]]

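The pipeline sketch promised above, tying [[logistic regression]] and [[sklearn preprocessing]] together, assuming scikit-learn; the dataset and max_iter are illustrative choices.

<syntaxhighlight lang="python">
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale inside the pipeline so the test set never leaks into the scaler,
# and because the solver converges poorly on unscaled features.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # mean accuracy on held-out data
</syntaxhighlight>
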
==x==
* [[Isolation Forest]] (see the sketch below)
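A minimal [[Isolation Forest]] sketch, assuming scikit-learn; the synthetic data and contamination rate are illustrative.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(200, 2))
outliers = rng.uniform(-6.0, 6.0, size=(10, 2))
X = np.vstack([inliers, outliers])

clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = clf.predict(X)        # +1 = inlier, -1 = outlier
scores = clf.score_samples(X)  # lower = more anomalous
print(int((labels == -1).sum()), "points flagged as outliers")
</syntaxhighlight>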
