"Generative Models"의 두 판 사이의 차이
- GAN : http://kvfrans.com/generative-adversial-networks-explained/
- VAE : http://kvfrans.com/variational-autoencoders-explained/
  - Same as an autoencoder, except that the latent vectors are constrained to follow a (unit) Gaussian distribution, so new samples can be generated from a unit-Gaussian random variable.
  - In practice, there is a tradeoff between how accurately the network can reconstruct its inputs and how closely its latent variables match the unit Gaussian distribution (see the sketch after this list).
- GAN, VAE, pixel-rnn (by OpenAI) : https://blog.openai.com/generative-models/
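A minimal sketch of the VAE idea described above, written in PyTorch (my own illustration, not code from the linked posts; layer sizes and names such as `VAE`, `vae_loss` are illustrative). The encoder outputs the parameters of a Gaussian over the latent vector, the KL term in the loss pulls that Gaussian toward the unit Gaussian N(0, I), and generation decodes z ~ N(0, I). The two loss terms make the accuracy-vs-unit-Gaussian tradeoff explicit.

```python
# Hedged sketch of a VAE, assuming PyTorch; sizes are placeholders (e.g. 784 for MNIST).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. the encoder
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        h = F.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction accuracy vs. closeness of q(z|x) to the unit Gaussian:
    # the KL term is the constraint mentioned in the VAE bullet above.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


# Generation: decode latent vectors drawn from the unit Gaussian.
model = VAE()
with torch.no_grad():
    z = torch.randn(16, 20)       # z ~ N(0, I)
    samples = model.decode(z)     # 16 generated samples (untrained weights here)
```

The KL weight can be scaled to trade reconstruction quality against how well the latent space matches N(0, I), which is exactly the tradeoff noted in the VAE item.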