"Generative Models"의 두 판 사이의 차이
ph
1번째 줄: | 1번째 줄: | ||
=GAN=
http://kvfrans.com/generative-adversial-networks-explained/
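
The post above explains the adversarial setup in detail; as a quick reference, below is a minimal, hypothetical PyTorch sketch of the two-player training loop (layer sizes, learning rates, and variable names are placeholder assumptions, not code from the linked post). The discriminator D is trained to score real images as 1 and generated images as 0, while the generator G is trained to make D output 1 on its samples.

<syntaxhighlight lang="python">
# Minimal GAN training-loop sketch (illustrative; sizes and hyperparameters are placeholders).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Sigmoid())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: real images -> 1, generated images -> 0.
    fake = G(torch.randn(n, latent_dim))
    d_loss = bce(D(real_batch), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on generated images.
    fake = G(torch.randn(n, latent_dim))
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
</syntaxhighlight>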

=VAE=
* http://kvfrans.com/variational-autoencoders-explained/
* Same as an autoencoder, except that the latent vector is constrained to follow a (unit) Gaussian distribution; new images can then be generated by drawing the latent vector from a unit Gaussian random variable.
* In practice, there is a tradeoff between how accurate our network can be and how closely its latent variables can match the unit Gaussian distribution.
* The encoder does not produce the latent vector directly; it only outputs a mean and a standard deviation, and the latent vector is sampled from them (see the sketch after this list).
* We can compare generated images directly to the originals, which is '''''not possible''''' when using a GAN.
* [https://jmetzen.github.io/2015-11-27/vae.html VAE in tensorflow]
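
To make the bullets above concrete, here is a minimal, hypothetical PyTorch sketch of a VAE loss (the linked tutorial uses TensorFlow; the layer sizes and names below are assumptions for illustration). The encoder outputs only a mean and a log-variance, the latent vector is sampled from them with the reparameterization trick, a KL term constrains the latent distribution toward the unit Gaussian, and the reconstruction term compares the decoded image directly to the original.

<syntaxhighlight lang="python">
# Minimal VAE sketch (illustrative; sizes and hyperparameters are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

data_dim, hidden, latent_dim = 784, 256, 20  # e.g. flattened 28x28 images, pixels in [0, 1]

enc = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU())
to_mean = nn.Linear(hidden, latent_dim)     # encoder outputs only a mean ...
to_logvar = nn.Linear(hidden, latent_dim)   # ... and a (log-)variance, not the latent vector itself
dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, data_dim), nn.Sigmoid())

def vae_loss(x):
    h = enc(x)
    mean, logvar = to_mean(h), to_logvar(h)

    # Reparameterization: z = mean + std * eps, with eps ~ N(0, I).
    z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)

    x_recon = dec(z)
    # Reconstruction term: compare the generated image directly to the original.
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum')
    # KL term: keep q(z|x) close to the unit Gaussian N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp())
    # The two terms embody the accuracy-vs-unit-Gaussian tradeoff noted above.
    return recon + kl

# At generation time, sample from the unit Gaussian and decode:
# images = dec(torch.randn(n, latent_dim))
</syntaxhighlight>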

= GAN, VAE, pixel-rnn (by OpenAI)=
https://blog.openai.com/generative-models/