"Machine Learning"의 두 판 사이의 차이

ph
이동: 둘러보기, 검색
잔글 (→‎general)
잔글
 
(같은 사용자의 중간 판 32개는 보이지 않습니다)
1번째 줄: 1번째 줄:
 +
== after AI ==
* [[cot연구동향|CoT research trends]]

==by themes==
 
* [[Recommendation]]
* [[Face]]
* [[Person re-identification]]
* [[segmentation]]

==posts==
* [[conv1d]]
* [[naive gradient descent]]
* [https://www.notion.so/nll-loss-57987c7b5f7342e4b6bc929c9b3be587 nll loss] (a minimal sketch combining these three items follows below)
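The three entries above are separate notes; purely as an illustration (not drawn from those pages), here is a minimal PyTorch sketch that wires the ideas together: a Conv1d feature extractor, the NLL loss computed on log-probabilities, and a hand-written "naive" gradient-descent update in place of an optimizer. The shapes, class count, and learning rate are arbitrary assumptions.

<syntaxhighlight lang="python">
# Minimal sketch: Conv1d features -> log-probabilities -> NLL loss,
# updated with a hand-rolled gradient-descent step (no optimizer object).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(8, 3, 32)        # 8 sequences, 3 channels, length 32 (made up)
y = torch.randint(0, 5, (8,))    # 5 target classes (made up)

conv = torch.nn.Conv1d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
fc = torch.nn.Linear(16, 5)
params = list(conv.parameters()) + list(fc.parameters())
lr = 0.1

for step in range(20):
    h = F.relu(conv(x)).mean(dim=2)           # global average pool over length
    log_probs = F.log_softmax(fc(h), dim=1)   # nll_loss expects log-probabilities
    loss = F.nll_loss(log_probs, y)

    for p in params:                          # naive gradient descent by hand
        p.grad = None
    loss.backward()
    with torch.no_grad():
        for p in params:
            p -= lr * p.grad

    if step % 5 == 0:
        print(step, loss.item())
</syntaxhighlight>
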
  
 
==ril==
 
* [https://github.com/pytorch/examples pytorch examples]
* [https://arxiv.org/abs/1603.05027 Identity Mappings in Deep Residual Networks] arXiv:1603.05027
* [http://www.fast.ai/2017/07/28/deep-learning-part-two-launch/ cutting edge deep learning for coders]

==nets==
* [[1118_Convolutional_Neural_Network|CNN]]
* [[AlexNet]]
* [[dilated cnn]]
* [[pathnet]]
* [[ResNet]] (see the residual-block sketch after this list)
* [[Fast RCNN]]
* [[Faster RCNN]]
* [[R-FCN]]
* [[GoogLeNet|Inception]] (GoogLeNet)
* [[fully convolutional networks]]
* [[FractalNets]]
* [[highway networks]]
* [[Memory networks]]
* [[DenseNet]]
* [[Network in Network|NIN]]
* [[Deeply Supervised Network|DSN]]
* [[Ladder Networks]]
* [[Deeply-Fused Nets|DFNs]]
* [[YOLO]]
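The [[ResNet]] entry above and the "Identity Mappings in Deep Residual Networks" link under ril both concern residual learning. As a rough sketch only (assuming the standard pre-activation formulation from that paper, not the content of the linked wiki pages), here is a minimal residual block in PyTorch; the channel count is held fixed so the identity shortcut needs no projection.

<syntaxhighlight lang="python">
# Pre-activation residual block (BN -> ReLU -> conv), identity shortcut.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        return x + out   # identity shortcut: the input passes through unchanged

block = PreActBlock(64)
print(block(torch.randn(1, 64, 8, 8)).shape)   # torch.Size([1, 64, 8, 8])
</syntaxhighlight>
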
  
 
==general==
 
* [[0811 Affinity Propagation|Affinity Propagation]], [[Apcluster sparse]]
* <del>[[CUDA기타설치|CUDA misc installation]]</del>
* [[Learning to learn by GD by GD]]
* [[Generative Models]] (GAN, VAE, etc.)
* [[Batch Normalization]]
* [[Mean Average Precision]]
* [https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5 Essential Cheat Sheets for Machine Learning and Deep Learning Engineers]
* [http://fa.bianp.net/blog/2014/surrogate-loss-functions-in-machine-learning/ What is surrogate loss?]
* [[Exponential Linear Unit]]
* [[Neural net이 working하지 않는 37가지 이유|37 reasons a neural net is not working]]
* [[deconvolution]]
* [[Sparse coding]]
* [http://mxnet.io/model_zoo/index.html MXNet Model Zoo]
* [[logistic regression]]
* [[0926 information bottleneck|information bottleneck]]
* [[0928 Artificial Curiosity|Artificial Curiosity]]
* sklearn examples (see the preprocessing sketch after this list)
** [[sklearn preprocessing]]
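Since [[sklearn preprocessing]] is a separate note, here is only a stand-in sketch (assuming the entry refers to scikit-learn's preprocessing module, with made-up data): StandardScaler is fit on the training split and its learned statistics are reused on the test split.

<syntaxhighlight lang="python">
# Tiny sklearn preprocessing sketch: fit the scaler on the training split only,
# then apply the same transform to the test split to avoid leakage.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))    # fake features
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)   # learns mean/std from train only
X_test_s = scaler.transform(X_test)         # reuses the train statistics

print(X_train_s.mean(axis=0).round(3))      # ~0 per feature
print(X_train_s.std(axis=0).round(3))       # ~1 per feature
</syntaxhighlight>
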
==x==
* [[Isolation Forest]]
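A minimal illustrative sketch for the entry above (not the linked page's content), using scikit-learn's IsolationForest on made-up data: fit_predict labels inliers +1 and outliers -1.

<syntaxhighlight lang="python">
# IsolationForest sketch: fit on mostly-normal data with a few injected outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))     # bulk of the data
outliers = rng.uniform(6, 8, size=(5, 2))    # a few far-away points
X = np.vstack([normal, outliers])

clf = IsolationForest(n_estimators=100, contamination="auto", random_state=0)
labels = clf.fit_predict(X)                  # -1 = outlier, +1 = inlier
scores = clf.decision_function(X)            # lower = more anomalous

print("flagged as outliers:", np.where(labels == -1)[0])
print("lowest anomaly scores:", np.sort(scores)[:5].round(3))
</syntaxhighlight>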
