3.10. Autoencoding

3.10.1. One-hot Encoder

In [1]:
import conx as cx
Using Theano backend.
Conx, version 3.6.0
In [2]:
size = 5
In [3]:
net = cx.Network("Autoencoder")
net.add(cx.Layer("input", size),
        cx.Layer("hidden", 5, activation="relu"),
        cx.Layer("output", size, activation="sigmoid"))
Out[3]:
'output'
In [4]:
net.connect()
In [6]:
net.compile(error='mse', optimizer="adam")
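The network is compiled with mean squared error (`'mse'`). As a reminder of what that error measures, here is a minimal NumPy sketch; the exact reduction conx/Keras applies (e.g. how it averages over a batch) may differ in detail:

```python
import numpy as np

def mse(target, output):
    """Mean squared error: the average of the squared differences."""
    target = np.asarray(target, dtype=float)
    output = np.asarray(output, dtype=float)
    return float(np.mean((target - output) ** 2))

# A perfect reconstruction has zero error:
print(mse([1, 0, 0, 0, 0], [1, 0, 0, 0, 0]))  # 0.0
```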
In [7]:
net.picture()
Out[7]:
Network: Autoencoder
Layer: output (output): shape (5,), Keras class = Dense, activation = sigmoid, output range (0, 1)
    Weights from hidden to output: output/kernel (5, 5), output/bias (5,)
Layer: hidden (hidden): shape (5,), Keras class = Dense, activation = relu, output range (0, +Infinity)
    Weights from input to hidden: hidden/kernel (5, 5), hidden/bias (5,)
Layer: input (input): shape (5,), Keras class = Input, output range (-Infinity, +Infinity)
In [9]:
patterns = [cx.onehot(i, size) for i in range(size)]
In [10]:
patterns[0]
Out[10]:
[1, 0, 0, 0, 0]
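`cx.onehot(i, size)` builds a vector of zeros with a single 1 at position `i`. A pure-Python equivalent, for illustration only (not conx's actual implementation):

```python
def onehot(index, size):
    """Return a list of `size` zeros with a 1 at position `index`."""
    return [1 if i == index else 0 for i in range(size)]

print(onehot(0, 5))  # [1, 0, 0, 0, 0]
```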
In [11]:
net.dataset.load([(p,p) for p in patterns])
In [12]:
net.picture()
Out[12]:
Network: Autoencoder
Layer: output (output): shape (5,), Keras class = Dense, activation = sigmoid, output range (0, 1)
    Weights from hidden to output: output/kernel (5, 5), output/bias (5,)
Layer: hidden (hidden): shape (5,), Keras class = Dense, activation = relu, output range (0, +Infinity)
    Weights from input to hidden: hidden/kernel (5, 5), hidden/bias (5,)
Layer: input (input): shape (5,), Keras class = Input, output range (0.0, 1.0)
In [13]:
net.dataset.info()

Dataset: Dataset for Autoencoder

Information:
* name : None
* length : 5

Input Summary:
* shape : (5,)
* range : (0.0, 1.0)

Target Summary:
* shape : (5,)
* range : (0.0, 1.0)

In [14]:
net.reset()
net.train(accuracy=1, epochs=10000, report_rate=200, tolerance=0.4, plot=True)
[training plot: error and accuracy vs. epochs]
========================================================
       |  Training |  Training
Epochs |     Error |  Accuracy
------ | --------- | ---------
# 1002 |   0.02607 |   1.00000
In [15]:
net.propagate(net.dataset.inputs[0])
Out[15]:
[0.8163065910339355,
 0.07763548195362091,
 0.1480734795331955,
 0.06849285215139389,
 0.14060361683368683]
In [16]:
net.test(tolerance=0.4, show=True)
========================================================
Testing validation dataset with tolerance 0.4...
# | inputs | targets | outputs | result
---------------------------------------
0 | [[1.00,0.00,0.00,0.00,0.00]] | [[1.00,0.00,0.00,0.00,0.00]] | [0.82,0.08,0.15,0.07,0.14] | correct
1 | [[0.00,1.00,0.00,0.00,0.00]] | [[0.00,1.00,0.00,0.00,0.00]] | [0.04,0.60,0.29,0.15,0.05] | correct
2 | [[0.00,0.00,1.00,0.00,0.00]] | [[0.00,0.00,1.00,0.00,0.00]] | [0.19,0.30,0.75,0.04,0.00] | correct
3 | [[0.00,0.00,0.00,1.00,0.00]] | [[0.00,0.00,0.00,1.00,0.00]] | [0.08,0.17,0.07,0.85,0.07] | correct
4 | [[0.00,0.00,0.00,0.00,1.00]] | [[0.00,0.00,0.00,0.00,1.00]] | [0.09,0.04,0.01,0.09,0.86] | correct
Total count: 5
      correct: 5
      incorrect: 0
Total percentage correct: 1.0
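With `tolerance=0.4`, an output counts as correct when every unit lies within 0.4 of its target. A sketch of that check (assumed semantics; conx's own comparison may differ in detail):

```python
def within_tolerance(output, target, tol=0.4):
    """True when every output unit is within `tol` of its target."""
    return all(abs(o - t) <= tol for o, t in zip(output, target))

# Row 0 from the table above: the largest deviation is |0.82 - 1.00| = 0.18
output = [0.82, 0.08, 0.15, 0.07, 0.14]
target = [1.00, 0.00, 0.00, 0.00, 0.00]
print(within_tolerance(output, target))  # True
```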
In [17]:
net.dashboard()

3.10.2. MNIST Autoencoding

In [22]:
net = cx.Network("MNIST-Autoencoder")
net.add(cx.ImageLayer("input", (28,28), 1),
        cx.Conv2DLayer("conv", 3, (5,5), activation="relu"),
        cx.MaxPool2DLayer("pool", pool_size=(2,2)),
        cx.FlattenLayer("flatten"),
        cx.Layer("hidden3", 25, activation="relu"),
        cx.Layer("output", (28,28,1), activation="sigmoid"))
net.connect()
net.compile(error="mse", optimizer="adam")
In [23]:
net.picture()
Out[23]:
Network: MNIST-Autoencoder
Layer: output (output): shape (28, 28, 1), Keras class = Dense, activation = sigmoid, output range (0, 1)
    Weights from hidden3 to output: output/kernel (25, 784), output/bias (784,)
Layer: hidden3 (hidden): shape (25,), Keras class = Dense, activation = relu, output range (0, +Infinity)
    Weights from flatten to hidden3: hidden3/kernel (432, 25), hidden3/bias (25,)
Layer: flatten (hidden): Keras class = Flatten, output range (-Infinity, +Infinity)
Layer: pool (hidden): Keras class = MaxPooling2D, pool_size = (2, 2), output range (-Infinity, +Infinity)
Layer: conv (hidden): Keras class = Conv2D, activation = relu, output range (0, +Infinity)
    Weights from input to conv: conv/kernel (5, 5, 1, 3), conv/bias (3,)
Layer: input (input): shape (28, 28, 1), Keras class = Input, output range (-Infinity, +Infinity)
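The `hidden3/kernel` shape of (432, 25) follows from the layer shapes. Assuming the Keras default of `'valid'` (no) padding, a 5×5 convolution maps each 28×28 input to 24×24, the 2×2 max pool halves that to 12×12, and flattening the 3 feature maps gives 12 × 12 × 3 = 432 inputs to the dense layer:

```python
def conv_out(size, kernel, stride=1):
    """Output side length of a 'valid' (unpadded) convolution."""
    return (size - kernel) // stride + 1

side = conv_out(28, 5)   # 24: 5x5 kernel, valid padding
side = side // 2         # 12: 2x2 max pooling
flat = side * side * 3   # 3 feature maps from the conv layer
print(flat)  # 432, matching hidden3/kernel shape (432, 25)
```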
In [24]:
net.dataset.get('mnist')
net.dataset.set_targets_from_inputs()
net.dataset.targets.reshape(0, (28 * 28))
net.dataset.info()
WARNING: network 'MNIST-Autoencoder' target bank #0 has a multi-dimensional shape, which is not allowed

Dataset: MNIST

Original source: http://yann.lecun.com/exdb/mnist/

The MNIST dataset contains 70,000 images of handwritten digits (zero to nine) that have been size-normalized and centered in a square grid of pixels. Each image is a 28 × 28 × 1 array of floating-point numbers representing grayscale intensities ranging from 0 (black) to 1 (white). The target data consists of one-hot binary vectors of size 10, corresponding to the digit classification categories zero through nine. Some example MNIST images are shown below:

MNIST Images

Information:
* name : MNIST
* length : 70000

Input Summary:
* shape : (28, 28, 1)
* range : (0.0, 1.0)

Target Summary:
* shape : (784,)
* range : (0.0, 1.0)
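The `reshape(0, (28 * 28))` call above flattens each 28×28×1 target image into a 784-element vector, which is what the Dense output layer expects (and what silences the multi-dimensional-target warning). The equivalent NumPy operation on a single image:

```python
import numpy as np

# One grayscale image as stored in the dataset: 28 x 28 x 1 floats in [0, 1]
image = np.zeros((28, 28, 1))
flat = image.reshape(28 * 28)  # the flattened target the Dense layer expects
print(flat.shape)  # (784,)
```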

In [25]:
net.dashboard()
In [27]:
net.propagate_to_features("pool", net.dataset.inputs[0], cols=3)
Out[27]:

Feature 0

Feature 1

Feature 2
In [28]:
image = net.dataset.inputs[0]
output = net.propagate_to_image("output", image)
output.size
Out[28]:
(28, 28)
In [29]:
net.propagate_to("hidden3", image)
Out[29]:
[0.22426757216453552,
 0.0,
 0.4452212154865265,
 0.0,
 0.0,
 0.0,
 0.07889281213283539,
 0.0,
 0.35440731048583984,
 0.0,
 0.0,
 0.10275307297706604,
 0.0,
 0.0,
 0.0,
 0.21153497695922852,
 0.24092230200767517,
 0.0,
 0.016354842111468315,
 0.0,
 0.5159655213356018,
 0.0,
 0.0,
 0.0,
 0.0]
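Note how many of the hidden3 activations are exactly 0.0: the relu activation clips every negative pre-activation to zero, which is why relu hidden layers produce sparse representations. A one-line sketch:

```python
def relu(x):
    """Rectified linear unit: negative inputs become exactly 0."""
    return max(0.0, x)

print([relu(v) for v in [-1.3, 0.2, -0.5, 0.8]])  # [0.0, 0.2, 0.0, 0.8]
```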
In [31]:
net.dataset.slice(69900)
In [32]:
net.train(accuracy=0.5, epochs=1000, report_rate=100, tolerance=.4, plot=True)
[training plot: error and accuracy vs. epochs]
========================================================
       |  Training |  Training
Epochs |     Error |  Accuracy
------ | --------- | ---------
# 1000 |   0.00200 |   0.40000
In [33]:
net.test(show_inputs=False, show_outputs=False, show=True)
========================================================
Testing validation dataset with tolerance 0.4...
# | result
---------------------------------------
0 | X
1 | X
2 | correct
3 | correct
4 | correct
5 | X
6 | X
7 | X
8 | correct
9 | X
Total count: 10
      correct: 4
      incorrect: 6
Total percentage correct: 0.4