3.17. Plotting

First, let’s train a simple network to explore. This one is trained to compute XOR:

In [1]:
from conx import Network, Layer, SGD

#net = Network("XOR Network", 2, 4, 1, activation="sigmoid")

net = Network("XOR Network")
net.add(Layer("input", shape=2))
net.add(Layer("hidden", shape=4, activation='sigmoid'))
net.add(Layer("output", shape=1, activation='sigmoid'))
net.connect()

dataset = [
    ([0, 0], [0], "1"),
    ([0, 1], [1], "2"),
    ([1, 0], [1], "3"),
    ([1, 1], [0], "4")
]
net.compile(loss='mean_squared_error', optimizer=SGD(lr=0.3, momentum=0.9))
net.dataset.load(dataset)
conx, version 3.4.3
Using Theano backend.
In [2]:
net.get_weights_as_image("hidden", None).size
Out[2]:
(4, 2)
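A PIL image's .size is (width, height), so (4, 2) gives one pixel per connection weight in the 2×4 input-to-hidden weight matrix. Scaled up, the individual weights become visible: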
In [3]:
net.get_weights_as_image("hidden", None).resize((400, 200))
Out[3]:
_images/Plotting_4_0.png
In [4]:
net.plot_layer_weights('hidden', cmap="RdBu")
_images/Plotting_5_0.png
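The cmap argument accepts any matplotlib colormap name; a diverging map like "RdBu" makes the sign of each weight easy to read, with positive and negative weights pulled toward opposite ends of the color scale.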
In [5]:
net.reset(seed=3863479522)
net.train(epochs=2000, accuracy=1, report_rate=25, plot=True, record=True)
_images/Plotting_6_0.svg
========================================================================
       |  Training |  Training
Epochs |     Error |  Accuracy
------ | --------- | ---------
#  457 |   0.00886 |   1.00000
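Training stops as soon as the accuracy criterion (accuracy=1) is met, here after 457 epochs.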
In [6]:
net.plot('loss', ymin=0)
_images/Plotting_7_0.png

3.17.1. plot_activation_map

This plotting function allows us to see the activation of a specific unit in a specific layer, as a function of the activations of two other units from an earlier layer. In this example, we show the behavior of the single output unit as the two input units are varied across the range 0.0 to 1.0:

In [7]:
net.plot_activation_map('input', (0,1), 'output', 0)
_images/Plotting_9_0.png

We can verify the above output activation map by propagating input vectors through the network manually; first, the corner (1, 1):

In [8]:
net.propagate([1, 1])[0]
Out[8]:
0.09831498563289642
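To check all four corners of the map at once, we can propagate each XOR pattern in a loop (a minimal sketch, reusing the trained net from above):

for pattern in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(pattern, "->", net.propagate(pattern)[0])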
In [9]:
# map of hidden[2] activation as a function of inputs
net.plot_activation_map('input', (0,1), 'hidden', 2, show_values=True)
_images/Plotting_12_0.png
----------------------------------------------------------------------------------------------------
Activation of hidden[2] as a function of input[0] and input[1]
rows: input[1] decreasing from 1.00 to 0.00
cols: input[0] increasing from 0.00 to 1.00

0.09 0.09 0.08 0.08 0.08 0.07 0.07 0.07 0.06 0.06 0.06 0.06 0.05 0.05 0.05 0.05 0.05 0.04 0.04 0.04
0.10 0.10 0.09 0.09 0.09 0.08 0.08 0.08 0.07 0.07 0.07 0.06 0.06 0.06 0.06 0.05 0.05 0.05 0.05 0.04
0.11 0.11 0.10 0.10 0.10 0.09 0.09 0.08 0.08 0.08 0.07 0.07 0.07 0.07 0.06 0.06 0.06 0.05 0.05 0.05
0.12 0.12 0.12 0.11 0.11 0.10 0.10 0.09 0.09 0.09 0.08 0.08 0.08 0.07 0.07 0.07 0.06 0.06 0.06 0.06
0.14 0.13 0.13 0.12 0.12 0.11 0.11 0.10 0.10 0.10 0.09 0.09 0.08 0.08 0.08 0.07 0.07 0.07 0.07 0.06
0.15 0.15 0.14 0.14 0.13 0.13 0.12 0.12 0.11 0.11 0.10 0.10 0.09 0.09 0.09 0.08 0.08 0.08 0.07 0.07
0.17 0.16 0.16 0.15 0.14 0.14 0.13 0.13 0.12 0.12 0.11 0.11 0.10 0.10 0.10 0.09 0.09 0.09 0.08 0.08
0.19 0.18 0.17 0.17 0.16 0.15 0.15 0.14 0.14 0.13 0.13 0.12 0.12 0.11 0.11 0.10 0.10 0.09 0.09 0.09
0.21 0.20 0.19 0.18 0.18 0.17 0.16 0.16 0.15 0.15 0.14 0.13 0.13 0.12 0.12 0.11 0.11 0.11 0.10 0.10
0.23 0.22 0.21 0.20 0.19 0.19 0.18 0.17 0.17 0.16 0.16 0.15 0.14 0.14 0.13 0.13 0.12 0.12 0.11 0.11
0.25 0.24 0.23 0.22 0.21 0.21 0.20 0.19 0.18 0.18 0.17 0.16 0.16 0.15 0.15 0.14 0.14 0.13 0.13 0.12
0.27 0.26 0.25 0.24 0.23 0.23 0.22 0.21 0.20 0.20 0.19 0.18 0.18 0.17 0.16 0.16 0.15 0.14 0.14 0.13
0.29 0.28 0.27 0.27 0.26 0.25 0.24 0.23 0.22 0.22 0.21 0.20 0.19 0.19 0.18 0.17 0.17 0.16 0.15 0.15
0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.24 0.24 0.23 0.22 0.21 0.20 0.20 0.19 0.18 0.18 0.17 0.16
0.35 0.33 0.32 0.31 0.30 0.30 0.29 0.28 0.27 0.26 0.25 0.24 0.23 0.22 0.22 0.21 0.20 0.19 0.19 0.18
0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.25 0.24 0.23 0.22 0.21 0.21 0.20
0.40 0.39 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.25 0.24 0.23 0.23 0.22
0.43 0.42 0.41 0.40 0.39 0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26 0.26 0.25 0.24
0.46 0.45 0.44 0.42 0.41 0.40 0.39 0.38 0.37 0.36 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28 0.27 0.26
0.49 0.48 0.47 0.45 0.44 0.43 0.42 0.41 0.40 0.39 0.38 0.37 0.35 0.34 0.33 0.32 0.31 0.30 0.29 0.28
----------------------------------------------------------------------------------------------------
In [10]:
# map of output activation as a function of hidden units 2,3
net.plot_activation_map('hidden', (2,3), 'output', 0)
_images/Plotting_13_0.png

How does the network actually solve the problem? We can look at the intermediate activations at the hidden layer by plotting the activation map of each of the four hidden units in the same way:

In [11]:
for i in range(4):
    net.plot_activation_map('input', (0,1), 'hidden', i)
_images/Plotting_15_0.png
_images/Plotting_15_1.png
_images/Plotting_15_2.png
_images/Plotting_15_3.png
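Because the training call above used record=True, we can play back the network's history, redrawing the activation map at each recorded epoch: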
In [12]:
net.playback(lambda net, epoch:
             net.plot_activation_map(title="Epoch %s" % epoch, interactive=False))
_images/Plotting_16_1.svg

3.17.2. Adding Additional Hidden Layers

In [13]:
from conx import Network, Layer, SGD

#net = Network("XOR Network", 2, 4, 2, 1, activation="sigmoid")

net = Network("XOR Network")
net.add(Layer("input", shape=2))
net.add(Layer("hidden", shape=4, activation='sigmoid'))
net.add(Layer("hidden2", shape=2, activation='sigmoid'))
net.add(Layer("output", shape=1, activation='sigmoid'))
net.connect()

dataset = [
    ([0, 0], [0], "1"),
    ([0, 1], [1], "2"),
    ([1, 0], [1], "3"),
    ([1, 1], [0], "4")
]
net.compile(loss='mean_squared_error', optimizer=SGD(lr=0.3, momentum=0.9))
net.dataset.load(dataset)
In [14]:
net.reset(seed=3863479522)
net.train(epochs=2000, accuracy=1, report_rate=25, plot=True)
_images/Plotting_19_0.svg
========================================================================
       |  Training |  Training
Epochs |     Error |  Accuracy
------ | --------- | ---------
#  426 |   0.00691 |   1.00000
In [15]:
for i in range(2):
    net.plot_activation_map('hidden', (0,1), 'hidden2', i)
_images/Plotting_20_0.png
_images/Plotting_20_1.png
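Since only the two hidden2 units feed the output, a single additional map, using the same call one layer further along, shows how the output unit partitions that two-dimensional space:

net.plot_activation_map('hidden2', (0,1), 'output', 0)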

3.17.3. Plotting training error (loss) and training accuracy (acc)

In [16]:
net.plot("loss")
_images/Plotting_22_0.png
In [17]:
net.plot("acc")
_images/Plotting_23_0.png
In [18]:
net.plot(["loss", "acc"])
_images/Plotting_24_0.png
In [19]:
net.plot("all")
_images/Plotting_25_0.png
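As with the loss plot earlier, ymin can pin the bottom of the y-axis so that separate runs are easier to compare; a minimal sketch:

net.plot(["loss", "acc"], ymin=0)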

3.17.4. Plotting Your Own Data

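conx also exposes standalone plot and scatter utilities for graphing arbitrary data, independent of any network: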
In [1]:
from conx import plot, scatter, get_symbol
Using Theano backend.
conx, version 3.4.3
In [5]:
data = ["Type 1", [(0, 1), (1, 2), (2, .5)]]
scatter(data)
_images/Plotting_28_0.png
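scatter takes a [label, points] pair, where points is a list of (x, y) tuples. plot works the same way, but takes a flat list of y-values: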
In [6]:
data = ["My Data", [1, 2, 6, 3, 4, 1]]
symbols = {"My Data": "rx"}
plot(data, symbols=symbols)
_images/Plotting_29_0.png
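The optional symbols dictionary maps a data label to a matplotlib format string; here "rx" draws red x markers. The available shape and color codes are documented in get_symbol: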
In [4]:
help(get_symbol)
Help on function get_symbol in module conx.utils:

get_symbol(label:str, symbols:dict=None) -> str
    Get a matplotlib symbol from a label.

    Possible shape symbols:

        * '-'   solid line style
        * '--'  dashed line style
        * '-.'  dash-dot line style
        * ':'   dotted line style
        * '.'   point marker
        * ','   pixel marker
        * 'o'   circle marker
        * 'v'   triangle_down marker
        * '^'   triangle_up marker
        * '<'   triangle_left marker
        * '>'   triangle_right marker
        * '1'   tri_down marker
        * '2'   tri_up marker
        * '3'   tri_left marker
        * '4'   tri_right marker
        * 's'   square marker
        * 'p'   pentagon marker
        * '*'   star marker
        * 'h'   hexagon1 marker
        * 'H'   hexagon2 marker
        * '+'   plus marker
        * 'x'   x marker
        * 'D'   diamond marker
        * 'd'   thin_diamond marker
        * '|'   vline marker
        * '_'   hline marker

    In addition, the shape symbol can be preceded by the following color abbreviations:

        * 'b'   blue
        * 'g'   green
        * 'r'   red
        * 'c'   cyan
        * 'm'   magenta
        * 'y'   yellow
        * 'k'   black
        * 'w'   white

    Examples:
        >>> get_symbol("Apple")
        'o'
        >>> get_symbol("Apple", {'Apple': 'x'})
        'x'
        >>> get_symbol("Banana", {'Apple': 'x'})
        'o'
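A color abbreviation can prefix any of the shape symbols. For example, mapping a (hypothetical) label to "r*" yields a red star:

get_symbol("Cherry", {"Cherry": "r*"})   # returns 'r*'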