Difference between revisions of "Generative Models"

Latest revision as of 11:29, 27 July 2017 (Thu)

GAN

  • f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. Sebastian Nowozin, Botond Cseke, Ryota Tomioka, Jun 2016. https://arxiv.org/abs/1606.00709
  • https://blogs.nvidia.com/blog/2017/05/17/generative-adversarial-network
  • SimGAN (Learning from Simulated and Unsupervised Images through Adversarial Training): 2017 CVPR best paper, by Apple.

VAE

  • http://kvfrans.com/variational-autoencoders-explained/
    • Same as an autoencoder, except that the latent vector is constrained to follow a (unit) Gaussian distribution, so new samples can be generated by drawing from a unit-Gaussian random variable.
    • In practice, there is a tradeoff between how accurate the network can be and how closely its latent variables can match the unit-Gaussian distribution.
    • The encoder does not produce the latent vector directly; it outputs only a mean and a standard deviation, from which the latent vector is sampled.
    • Generated images can be compared directly to the originals (a pixel-wise reconstruction loss), which is not possible when using a GAN.
  • VAE in tensorflow
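The bullets above can be sketched numerically: a hypothetical encoder output (mean and log-variance) is turned into a latent sample via the reparameterization trick, and a KL term measures how far that Gaussian is from the unit-Gaussian constraint. A minimal NumPy sketch, where the function names and example values are illustrative and not from this page:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mean, log_var):
    # z = mean + std * eps with eps ~ N(0, I): sampling written so that,
    # in a real framework, gradients could flow through mean and log_var.
    eps = rng.standard_normal(mean.shape)
    return mean + np.exp(0.5 * log_var) * eps

def kl_to_unit_gaussian(mean, log_var):
    # KL(N(mean, var) || N(0, I)): the constraint term that pulls the
    # encoder's latent distribution toward the unit Gaussian.
    return 0.5 * np.sum(np.exp(log_var) + mean ** 2 - 1.0 - log_var)

def reconstruction_error(original, generated):
    # Pixel-wise comparison to the original; this direct comparison is
    # exactly what a GAN objective does not provide.
    return np.mean((original - generated) ** 2)

# Hypothetical encoder output for one input, latent dimension 2.
mean = np.array([0.0, 0.0])
log_var = np.array([0.0, 0.0])  # log(1) = 0, i.e. already unit variance

z = reparameterize(mean, log_var)
print(kl_to_unit_gaussian(mean, log_var))  # 0.0: already the unit Gaussian
```

The training loss of a VAE is then the sum of the reconstruction error and the KL term, which is the accuracy-vs-unit-Gaussian tradeoff mentioned above.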

etc

GAN, VAE, PixelRNN (OpenAI blog post)

https://blog.openai.com/generative-models/

GAN vs VAE

https://www.reddit.com/r/MachineLearning/comments/4r3pjy/variational_autoencoders_vae_vs_generative/