10 Matching Annotations
- Nov 2017
-
code.facebook.com
-
Adversarial networks provide a strong algorithmic framework for building unsupervised learning models that incorporate properties such as common sense, and we believe that continuing to explore and push in this direction gives us a reasonable chance of succeeding in our quest to build smarter AI.
-
This was one of the first demonstrations of unsupervised generative models learning object attributes like scale, rotation, position, and semantics.
-
DCGANs are also able to identify patterns and group similar representations close together in the representation space they learn.
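
One way to see this grouping is to move through the generator's input space: nearby input vectors produce similar images, and linearly interpolating between two vectors gives a smooth visual transition. A minimal sketch, assuming a trained DCGAN-style generator `G` that maps 100-dimensional noise vectors to 64x64 images (the name and shapes are illustrative, not taken from the post):

```python
import torch

def interpolate(G, steps=8):
    """Generate images along a straight line between two random generator inputs."""
    z0 = torch.randn(1, 100, 1, 1)            # first random input vector
    z1 = torch.randn(1, 100, 1, 1)            # second random input vector
    alphas = torch.linspace(0.0, 1.0, steps)
    # Blend the two inputs; a well-trained generator renders this as a
    # gradual transition between the two generated images.
    zs = torch.cat([(1 - a) * z0 + a * z1 for a in alphas])
    with torch.no_grad():
        return G(zs)                          # (steps, 3, 64, 64) batch of images
```
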
-
Practically, this property of adversarial networks translates to better, sharper generations and higher-quality predictive models.
-
The adversarial network learns its own cost function — its own complex rules of what is correct and what is wrong — bypassing the need to carefully design and construct one.
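
In code, "learning its own cost function" means the generator's training signal is the discriminator's judgment rather than a formula written by hand. A rough sketch, assuming PyTorch-style modules `G` and `D`, where `D` outputs a probability that its input is real (the names are illustrative, not the post's code):

```python
import torch
import torch.nn.functional as F

def generator_loss(G, D, batch_size=64, z_dim=100):
    z = torch.randn(batch_size, z_dim, 1, 1)
    scores = D(G(z))                          # D's estimate that each sample is real
    # The generator's "cost" is simply how fake the discriminator finds its
    # samples; as D improves, this learned criterion gets stricter.
    return F.binary_cross_entropy(scores, torch.ones_like(scores))
```
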
-
This cost function forms the basis of what the neural network learns and how well it learns. A traditional neural network is given a cost function that is carefully constructed by a human scientist.
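
For contrast, a conventional supervised model trains against a cost that is fully specified up front, for example cross-entropy against known labels (a generic illustration, not code from the post):

```python
import torch.nn.functional as F

def supervised_loss(model, images, labels):
    # The cost is chosen by a person: cross-entropy between the model's
    # predictions and ground-truth labels, fixed before training starts.
    return F.cross_entropy(model(images), labels)
```
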
-
While previous attempts to use CNNs to train generative adversarial networks were unsuccessful, when we modified their architecture to create DCGANs, we were able to visualize the filters the networks learned at each layer, thus opening up the black box.
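
Opening the black box amounts to pulling out the convolutional filters a trained network has learned and looking at them. A sketch of how one might tile the first-layer filters of a DCGAN discriminator into an image, assuming a PyTorch module `D` whose first convolution takes RGB input (names and shapes are assumptions, not the post's code):

```python
import torch
import torchvision.utils as vutils

def save_first_layer_filters(D, path="filters.png"):
    # Find the first convolutional layer; its weights have shape
    # (out_channels, in_channels, kernel_h, kernel_w).
    first_conv = next(m for m in D.modules() if isinstance(m, torch.nn.Conv2d))
    w = first_conv.weight.detach().clone()
    w = (w - w.min()) / (w.max() - w.min())   # rescale to [0, 1] for viewing
    vutils.save_image(w, path, nrow=8)        # tile the filters into a grid image
```
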
-
This type of optimization is difficult, and if the model weren't stable, we would not find this center point.
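
The "center point" here is the equilibrium of the two-player minimax game from the original GAN formulation: the discriminator D is pushed to score real data high and generated data low, the generator G is pushed in the opposite direction, and training seeks the saddle point where neither side can improve on its own:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```
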
-
Instead of having a neural network that takes an image and tells you whether it's a dog or an airplane, it does the reverse: It takes a bunch of numbers that describe the content and then generates an image accordingly.
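
A minimal DCGAN-style generator makes this concrete: a 100-number input vector is repeatedly upsampled by transposed convolutions into a 64x64 RGB image. The layer sizes below follow the common 64x64 DCGAN recipe and are an illustration, not the exact model from the post:

```python
import torch.nn as nn

# Maps an (N, 100, 1, 1) noise tensor to an (N, 3, 64, 64) image with values in [-1, 1].
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False), nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),  nn.BatchNorm2d(64),  nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),    nn.Tanh(),
)
```
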
-
An adversarial network has a generator, which produces some type of data — say, an image — from some random input, and a discriminator, which gets input either from the generator or from a real data set and has to distinguish between the two — telling the real apart from the fake.
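
Put together, one training step alternates the two roles: the discriminator is updated to score real images high and generated ones low, then the generator is updated to fool it. The sketch below assumes PyTorch modules `G` and `D` (with `D` ending in a sigmoid), their optimizers, and a batch of real images; all names are illustrative, not the post's code:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real_images, z_dim=100):
    batch = real_images.size(0)
    z = torch.randn(batch, z_dim, 1, 1)

    # Discriminator update: tell real images apart from generated ones.
    opt_d.zero_grad()
    real_scores = D(real_images)
    fake_scores = D(G(z).detach())            # detach so only D is updated here
    d_loss = (F.binary_cross_entropy(real_scores, torch.ones_like(real_scores))
              + F.binary_cross_entropy(fake_scores, torch.zeros_like(fake_scores)))
    d_loss.backward()
    opt_d.step()

    # Generator update: produce images the discriminator scores as real.
    opt_g.zero_grad()
    scores = D(G(z))
    g_loss = F.binary_cross_entropy(scores, torch.ones_like(scores))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```
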