References
[3]. Denton, E. L., Chintala, S., Szlam, A., & Fergus, R. (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. Advances in Neural Information Processing Systems, 28, 1-9.
[7]. Lakhotia, K., Kharitonov, E., Hsu, W. N., Adi, Y., Polyak, A., Nguyen, T. A., ... & Dupoux, E. (2021). Generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9, 1336-1354.
[10]. Mathieu, M. F., Zhao, J. J., Zhao, J., Ramesh, A.,
Sprechmann, P., & LeCun, Y. (2016). Disentangling factors
of variation in deep representation using adversarial
training. Advances in Neural Information Processing
Systems, 29, 1-9.
[12]. Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., &
Efros, A. A. (2016). Context encoders: Feature learning by
inpainting. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (pp. 2536-2544).
[15]. Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele,
B., & Lee, H. (2016, June). Generative adversarial text to
image synthesis. In International Conference on Machine
Learning (pp. 1060-1069). PMLR.
[16]. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. arXiv preprint arXiv:1606.03498.
[20]. Vondrick, C., Pirsiavash, H., & Torralba, A. (2016).
Generating videos with scene dynamics. Advances in
Neural Information Processing Systems, 29, 1-9.
[21]. Wu, J., Zhang, C., Xue, T., Freeman, B., & Tenenbaum, J. (2016). Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Advances in Neural Information Processing Systems, 29, 1-9.
[24]. Zhou, T., Krahenbuhl, P., Aubry, M., Huang, Q., & Efros, A. A. (2016). Learning dense correspondence via 3D-guided cycle consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 117-126).
[26]. Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2223-2232).