"Generative Models"의 두 판 사이의 차이

ph
이동: 둘러보기, 검색
7번째 줄: 7번째 줄:
 
=VAE=
* http://kvfrans.com/variational-autoencoders-explained/
** Same as an autoencoder, except the latent vector is constrained to follow a (unit) Gaussian distribution, so new samples can be generated from a unit Gaussian random variable.
** In practice, there is a tradeoff between how accurate the network can be and how closely its latent variables can match the unit Gaussian distribution.
** The latent vector is not produced directly; only its mean and std are, and the vector is sampled from them.
** Generated images can be compared directly to the originals, which is '''''not possible''''' when using a GAN.
* [https://jmetzen.github.io/2015-11-27/vae.html VAE in tensorflow]
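The latent step described above can be sketched numerically. This is a minimal illustration (numpy only; the names `mu`, `log_std`, etc. are my own, not taken from the linked posts) of how the network outputs a mean and std rather than the latent vector itself, how a sample is drawn from them, and the KL term that enforces the unit-Gaussian constraint and creates the accuracy-vs-Gaussian tradeoff.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_std):
    """Reparameterization trick: z = mu + std * eps, eps ~ N(0, I).

    The latent vector is never produced directly -- the encoder outputs only
    mean and (log) std, and z is sampled from them, which keeps the sampling
    step differentiable with respect to mu and log_std.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_std) * eps

def kl_to_unit_gaussian(mu, log_std):
    """KL(N(mu, std^2) || N(0, I)), summed over latent dimensions.

    This is the constraint term added to the reconstruction loss: it pulls
    the latent distribution toward a unit Gaussian, trading off against
    reconstruction accuracy.
    """
    var = np.exp(2.0 * log_std)
    return 0.5 * np.sum(var + mu**2 - 1.0 - 2.0 * log_std)

# Example with a hypothetical 4-dim latent:
mu = np.array([0.0, 0.5, -0.5, 1.0])
log_std = np.zeros(4)           # std = 1 in every dimension
z = sample_latent(mu, log_std)  # one sample from N(mu, I)

# When mu = 0 and std = 1, the latent already matches the unit Gaussian,
# so the KL penalty is exactly zero.
assert kl_to_unit_gaussian(np.zeros(4), np.zeros(4)) == 0.0
```

At generation time one skips the encoder entirely and feeds `z` drawn from a plain unit Gaussian into the decoder, which is exactly why the constraint is imposed during training.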
  
 
= GAN, VAE, pixel-rnn (by OpenAI)=

https://blog.openai.com/generative-models/

Revision as of 14:20, 4 May 2017